Star Trek: The Next Generation

"The Measure of a Man"

****

Air date: 2/13/1989
Written by Melinda M. Snodgrass
Directed by Robert Scheerer

Review by Jamahl Epsicokhan

In TNG's first bona fide classic, the nature of Data's existence becomes a fascinating philosophical debate and the basis for a crucial legal argument and Federation precedent. Commander Bruce Maddox (Brian Brophy), on behalf of Starfleet, orders Data to be reassigned and dismantled for scientific research in the hopes of finding a way to manufacture more androids with his physical and mental abilities. When Data says he would rather resign from Starfleet, Maddox insists that Data has no rights and takes it up with the region's newly created JAG office, headed by Captain Phillipa Louvois (Amanda McBroom), who serves as judge. Picard takes on the role of Data's defender.

This episode plays like a rebuke to "The Schizoid Man," taking the themes that were intriguing in that episode and expanding upon them to much better effect. What rights does Data have under the law, and is that the same as what's morally right to grant him as a sentient machine? Of course, one of Maddox's arguments is that Data doesn't have sentience, but merely the appearance of such. The episode cleverly pits Riker against Picard; because the new JAG office has no staff yet, the role of prosecution is forced upon the first officer. Riker finds himself arguing a case he doesn't even believe in — but nevertheless ends up arguing it very well, including a devastating theatrical courtroom maneuver where he turns Data off on the stand.

Picard's rebuttal is classic TNG ideology as put in a courtroom setting. The concept of manufacturing a race of artificial but sentient people has disturbing possibilities — "whole generations of disposable people," as Guinan puts it. Picard's demand of an answer from Maddox, "What is he?" strips the situation down to its bare basics, and Picard answers Starfleet's mantra of seeking out new life by suggesting Data as the perfect example: "THERE IT SITS." Great stuff.

Still, what I perhaps love most about this episode is the way Data initially reacts to being told he has no rights. He takes what would for any man be a reason for outrage and instead approaches the situation purely with logic. He has strong opinions on the matter, but he doesn't get upset, because that's outside the scope of his ability to react. His reaction is based solely on the logical argument for his self-protection and his uniqueness. And at the end, after he has won, he holds no ill will toward Maddox. Indeed, he can sort of see where Maddox is coming from.

Trivia footnote: This is also the first episode of TNG to feature the poker game.

Previous episode: A Matter of Honor
Next episode: The Dauphin


107 comments on this review

Tres
Fri, Apr 17, 2009, 10:21pm (UTC -5)
just wanted to agree with your review of "Measure of a Man" and wanted to add, the final scene, where Riker and Data speak in the conference room, when Data says, "Your actions injured you to save me. I will not forget that," chokes me up every time. Wonderful writing.
Damien
Wed, Apr 22, 2009, 10:47am (UTC -5)
"Measure of a Man" - absolutely a classic and one of my favourites, possibly favourite.

I only have one small quibble (well, it's a biggie, but not so that it detracts from the overall ep).

It just didn't seem realistic that a case of such huge potential importance would be prosecuted and defended by two people who have no legal training, have never tried a case, and are friends and colleagues serving on the same ship! It beggars belief that such a trial could take place, especially given its importance.

Surely the logical thing to do would have been to delay the trial until such time as trained lawyers could be gathered and a legally binding decision could be made, rather than leaving the decision open to appeal/overrule in the future on the grounds of improper procedure.
Trajan
Tue, Mar 30, 2010, 3:49pm (UTC -5)
'The Measure of a Man'. Ugh! I'm sorry, I hate it. I have no problem with the theme but as a lawyer, I really, really hate it. It's a bit like the entire Doctor Who series 'Trial of a Timelord' where the greatest and oldest civilisation in the galaxy apparently has a judicial system that bears no relation to any reasonable concept of 'justice'. No JAG officers so you must prosecute? If you don't he's 'a toaster'? No, if you insist on making that ruling I'll ensure that I have your head on a plate by the end of the day and you'll never practice law in the Alpha Quadrant again. As for turning Data off; it was his rights as a sentient being that were for the court to decide. Allowing him to be turned off constitutes assault, battery, actual bodily harm and possibly attempted murder if he had no reset button. And this is allowed in A COURTROOM?

Sorry. No stars from me. Actually, can I award negative stars??
J.B. Nicholson-Owens
Sat, Jul 17, 2010, 1:20am (UTC -5)
More about "Measure of a Man": In addition to Trajan's objections, I'll also add that the episode strikes me as a huge dodge of the issue they set themselves up to decide.

One wonders how Data got to serve at all if his entrance committee only had one objection -- Cmdr. Maddox's objection -- and that committee based their decision on sentience like Data said.

But there's another problem: the slavery argument (should we or should we not let Maddox make a race of copies of Cmdr. Data?). The slavery argument only works if you already agree with what Capt. Picard had to argue. The slavery argument fails if you already agree with what Cmdr. Riker had to argue (Data is property, he can no more object to work than the Enterprise computer can object to a refit). It seems to me that the slavery argument presupposes the very thing the hearing is meant to decide and therefore this argument has no place in this hearing.

And Cmdr. Louvois' finding essentially passes the buck (as she all but says at the end of the hearing): she has no good reason to find as she does but she apparently believes erring on the side of giving Data "choice" is a safer route.

I think this episode might merit as the most overrated TNG episode.
Dave Nielsen
Mon, Aug 8, 2011, 10:58pm (UTC -5)
"The Measure of a Man." I too loved this episode, but I can't help wondering how this question wasn't already decided years earlier. It seems to me Data's status would have had to be decided before he could join Starfleet, or at least before he could be given a commission. Then I partly agree with Maddox that Data could just be simulating sentience. With a sufficiently sophisticated computer there would be no way to tell. I guess the point is that there's no way to tell with anyone, but then there's a difference between the "programming" of biological life and that of an articial, constructed life form. It's also a bit cheeseball that they would have no staff just so that some of the principal actors won't just be getting paid to sit in the background. I wonder too if it was necessary for the Philippa to have been an old flame of Picard's. Still, with all that I still love and it still stands as one of TNG's best episodes.
Dave Nielsen
Mon, Aug 8, 2011, 11:16pm (UTC -5)
Trajan: "As for turning Data off; it was his rights as a sentient being that were for the court to decide. Allowing him to be turned off constitutes assault, battery, actual bodily harm and possibly attempted murder if he had no reset button."

Since Data's sentience was the question here, Riker couldn't be charged with anything for turning Data off, so that would be perfectly fine to do in a courtroom. Even after the ruling, it wouldn't have mattered - only if he did it again. If Maddox's arguments had been upheld, and Data were property that could be dissected against his will, he couldn't have the rights of a sentient being.
Trajan
Wed, Jan 25, 2012, 3:08pm (UTC -5)
Dave Neilsen: Since Data's sentience was the question here, Riker couldn't be charged with anything for turning Data off so that would be perfectly fine to do in a courtroom.

I disagree. You could 'turn off' an alleged human being with a baseball bat but it would produce no more evidence of his sentience than Data's off switch does of his.
X
Sun, Mar 25, 2012, 7:03am (UTC -5)
Trajan: You could 'turn off' an alleged human being with a baseball bat but...

No. You could not 'turn off' a human being with a baseball bat in the same way that you can turn off a machine. You can either turn off a conscious part of a man's brain (the brain and the rest of the organism are still functioning) or kill him. You cannot turn off a human completely, as you can turn off a machine, and then turn him on again.
Trajan
Tue, Apr 10, 2012, 3:32pm (UTC -5)
X: You cannot turn off a human completely, as you can turn off a machine, and then turn him on again.

Sure you can. Just don't be so enthusiastic with the baseball bat and knock him unconscious with it. (Which, in my courtroom, will still get you locked up for grievous bodily harm...)
Patrick
Mon, Aug 27, 2012, 9:28pm (UTC -5)
The Original Star Trek had a second pilot and in some ways, TNG did too--"The Measure of a Man". This was the watershed turning point of the series: its thoughtful story was uniquely its own and not another riff on Classic Trek. The actors were truly becoming comfortable in their characters' skins; callbacks to the series' own mythology, from Tasha Yar's intimacy with Data to a mention of Lore, made the fictional universe of TNG more real. Secondary characters like O'Brien and Guinan were weaving their way through the mythos. And last but not least: THE FIRST POKER GAME--a brilliant addition to the series that provided some of the best character moments and the classic final scene of the series.

"The Measure of a Man" and "Q Who" were an effective one-two punch that made the show the one we know and love.
Peremensoe
Mon, Sep 3, 2012, 7:11pm (UTC -5)
"I partly agree with Maddox that Data could just be simulating sentience. With a sufficiently sophisticated computer there would be no way to tell. I guess the point is that there's no way to tell with anyone, but then there's a difference between the 'programming' of biological life and that of an articial, constructed life form."

Is there?

It's a somewhat deeper question than the episode really addresses, but... what *is* sentience?

Is it physically contained in the actual electrical and chemical processes of neurons?

Or is it the *product* of a certain complexity of such processes?

If the latter, then not only is there no fundamental difference between biological and synthetic processors giving rise to the sentient function--but there is also no such thing as 'simulated' sentience. If the complexity is there, it's there.
xaaos
Tue, Nov 13, 2012, 12:37pm (UTC -5)
Data is the best!!! Loved the final scene between him and Riker.
ReptilianSamurai
Fri, Nov 30, 2012, 10:53am (UTC -5)
Just saw the new extended, remastered version of the episode the other night, and it was absolutely fantastic. It really gives the story a bit of room to breathe, and better develops the guest characters (especially Phillipa's backstory with Picard) as well as really exploring Data's dilemma and the nature of being sentient. This version of the episode, in my opinion, is one of the best in all of Trek and I'm really glad they were able to give us this extended cut.

Hope Jammer reviews the extended version at some point, I'd be interested to hear his take on how it changes the episode.
Rikko
Sun, Feb 24, 2013, 9:52am (UTC -5)
What a wonderful episode!

@ Trajan: I don't want to beat a dead horse, but I think you're being too hard on this ep for something it isn't. TNG is not trying to be 'law and order in space'. It's always about the bigger questions.

I can suspend my disbelief with stories like this, especially when I compare 'The Measure' to total fantasy wrecks like the black pond of tar in 'Skin of Evil' or the many energy life-forms from countless episodes.

Still, I won't deny that the lack of crew for a trial of this gravity was hilarious. The production staff must have been in dire straits during this season.
Shawn Davis
Fri, Mar 8, 2013, 7:00am (UTC -5)
Greetings to all. I love this episode. One of TNG's classics, and it features one of my favorite characters, Data, in one of the most interesting positions ever.

I have one question though. Riker asks Data to bend a metal bar in an attempt to prove that he is not sentient, and Picard objects by stating that there are many living alien species that are strong enough to do that. Captain Phillipa disagreed with him and told Riker to continue with the demonstration. My question is: why is Picard wrong? What he said about some aliens being strong enough to bend the metal bar, along with robots and androids like Data, seemed logical to me.

Thanks.
PeteTongLaw
Fri, Mar 8, 2013, 4:09pm (UTC -5)
It seems to me that the space station and the Enterprise-D are not appropriately scaled.
William B
Wed, Mar 27, 2013, 3:35am (UTC -5)
I do like this episode, perhaps even love it, but I admit that I do find it hard to suspend my disbelief in portions of it related to the legal proceedings.

It does seem, as others have mentioned above, as if Starfleet should have settled this issue before; but on some level it does make sense that maybe they didn't, because Data's status is so unique.

That said, I do think the idea that Data would be Starfleet property because he went through the Academy and joined Starfleet is disturbing, because Data is only in Starfleet because he chose to do so. The Enterprise computer never *chose* to be part of Starfleet.

I suppose one resolution to this would be that since Data was found by Starfleet personnel (when he was recovered from Omicron Theta), at that point he 'should have' entered into Starfleet custody as property. It would also make sense if the reason that Data's status as having rights/not having rights was not extensively discussed (e.g. whether Data constitutes a Federation citizen) was that he spent all his time from his discovery on Omicron Theta to his entrance into the Academy with Starfleet personnel in some capacity or another, so that there was never a time in which he would need official Federation citizenship.

On some level it does make sense to me that Data would hang around the Trieste (I think it was?) after they discovered him until eventually a Starfleet officer there sponsored his entry into the academy.

I suppose that if Data had no sentience all along, and had a mere facsimile of it -- if Data genuinely WAS an object and not a person -- perhaps he would go to Starfleet ownership merely for the fact that Data was salvaged by a Starfleet vessel after the destruction of Omicron Theta, and since there are no living "rightful owners" with the colony destroyed (and Soong and his wife for that matter thought dead) it makes sense that Starfleet could claim something like salvage rights.



Re: the point raised by J.B. Nicholson-Owens, it is true that IF Data is property, then so would be a race of mechanical beings created in Data's image. It does not actually affect the case directly.

However, I do not think this is a flaw. Picard makes the point that one Data is a curiosity, but a fleet of Datas would constitute a race. Perhaps that was a leading phrase -- but instead we should say that a fleet of Datas would constitute a much larger set. The main purpose of this argument is, I think, to demonstrate that the consequences extend far beyond Data himself.

Put it this way: if there is a 99% chance that Data is property and a 1% chance that he is a sentient being with his own sets of rights, then taking Data by himself, there is a 1% chance that a single life will be unfairly oppressed. But if there are thousands and thousands and thousands of Datas in the future, that becomes a 1% chance that thousands and thousands of beings will be oppressed. That is simply a much bigger scale and a much bigger potential for tragedy. If Louvois ruled that Data were property and he were destroyed but was the only creature destroyed, it would be tragic, but still only a single being. If Louvois ruled that Data was property and thousands of androids were produced and Louvois was wrong, then _based on that ruling_ a whole race would be condemned to servitude. The possible cost to her decision is much greater, and the importance of erring on the side of giving Data rights becomes greater as a result as well.
N.I.L.E.S.
Sat, Apr 13, 2013, 4:50am (UTC -5)
Has anyone considered that the basic premise of this episode is unnecessary based on the show's own rules? The premise is that Data needs to be dismantled so that more androids like him can be created, but Data is dismantled every time he uses the transporter. Since the Enterprise computer is able to dismantle Data and reassemble him, it must have detailed information about his construction. Surely all Maddox needs to do is access the information stored in the transporter logs and he would have all the information he needs to replicate Data.
The above point aside, I really love the episode and the questions it raises about the point at which a machine becomes conscious. I agree with those who have stated that this issue would have been settled before Data entered Starfleet, especially since the sole basis for Maddox objecting to Data's entrance into Starfleet was that he did not believe Data was sentient. The fact that the others on the committee allowed Data to enter Starfleet anyway suggests that they believed he was sentient.
I also agree that there were some aspects of the court scenes that were not as convincing as they could have been. For instance, since the issue to be decided is whether or not Data is sentient, I find it odd that no psychologists were asked to testify, since consciousness is part of what psychologists study. I also find it odd that there was no cross-examination when a witness testified. For example, when Data was on the stand, Picard asked him what logical purpose several items Data had packed served. In reference to his medals, Data replied that he did not know, he just wanted them, and in reference to a picture of Tasha, Data replied that she was special to him because they were intimate. Clearly Picard was trying to imply that Data had an emotional connection to the things he had packed, much as humans do. Riker could have easily undermined that premise on cross-examination by asking, "When you say you wanted these medals, do you mean you felt a strong desire to take them with you?" Data would have had to answer no, because by his own admission he does not feel anything. This would have reminded the audience that Data is a machine.
Grumpy
Sat, Apr 13, 2013, 4:51pm (UTC -5)
"...Data is dismantled every time he uses the transporter."

If you're suggesting that Data could be replicated like any other hardware, a fair point. Presumably something in his positronic brain is akin to lifeforms, which can be transported at "quantum resolution" but not replicated at "molecular resolution." But the issue was never addressed in the series.

Also, apparently Data's positronic brain is an "approved magnetic containment device," which the tech manual says is the only way to transport antimatter without "extensive modifications to the pattern buffer."
istok
Fri, May 10, 2013, 7:24pm (UTC -5)
This is the best TNG episode I've seen so far. Admittedly, I've only seen season 1 and half of 2. Nonetheless it is very compelling and it suspended my disbelief just fine. I don't care to dissect mainstream scifi television in great detail. Something will inevitably fail to add up. But overall, uncharacteristically for said mainstream television, this episode actually raised some deep issues, and it was done well, in its own context. It actually got me thinking: what is life? No, really? It seems very easy, but I have no more concrete answers to that than I do to the questions "what is the universe" or "what is the earth's core really like".
All in all, this was good television.
Frank Wallace
Mon, Jul 8, 2013, 9:54pm (UTC -5)
Wonderful episode.

I never saw any reason to question the legal elements of the episode. For one, Starfleet officers are multi-purpose types, given that the Federation doesn't have "police" or "armies" in the truest sense. Secondly, the reason for Picard being involved is explained early, and the other captain is a JAG member.

Lastly, the person who wrote the episode actually trained in and practiced law as a career for several years. She will know enough about it to make it believable, and it DID seem believable. Plus, it's the idea behind the episode that matters. :)
Sam S.
Sat, Aug 3, 2013, 11:36pm (UTC -5)
I just wanted to add that this episode provides the term "toaster" for artificial life. This is apparently where the Battlestar Galactica reboot gets the concept for its artificial lifeforms.
SkepticalMI
Thu, Sep 19, 2013, 9:48pm (UTC -5)
This was basically an all or nothing episode. A concept like this could either succeed magnificently in raising philosophical points or fail miserably in cliches and preachiness. Thankfully, it hit the former far more often than the latter.

Yes, the courtroom scenes were hardly very legally precise (but heck, lawyer-based TV shows aren't very legally precise either). Unfortunately, I don't think either Riker or Picard did a very good job. Maybe that was due to the fact that it had to be short to fit in the episode. Of course, they could have cut out some of the Picard/JAG romance backstory for a better courtroom drama.

But it probably would feel incomplete no matter how long they took. In reality, it would probably be a very lengthy trial, so no showing in a 43-minute TV show could fully explore whether or not he's sentient.

And frankly, it isn't necessary. We already know the arguments. It really does boil down to a few simple facts: on the negative side, he was very clearly built and programmed by a person. On the positive side, he very clearly acts like he's sentient. And frankly, we don't know.

And that's probably what makes this episode work. They acknowledge and reinforce that. Picard's realization (actually Guinan's realization) to make the argument but avoid defining the scope in favor of the bigger picture was pitch perfect. This is a simple backwater JAG office. Should it really be deciding the fate of a potential race? Picard made that point beautifully in the best speech he's had so far. And it was that speech, that implication, that resonated.

The point was not to decide whether or not Data was sentient, but to consider the consequences. And to err on the side of caution.

Of course, in the real world, Maddox would undoubtedly appeal to a higher court, and this would make its way to the Federation equivalent of the supreme court. But you know what? I'm glad it ended here. Another good aspect of this story was that, despite going full tilt towards making Maddox the Villain with a capital V, he seemed to get Picard's point as well. I'd like to think that Maddox does have a conscience and was willing to stop his pursuit based on even the chance that Data is sentient.

This episode seemed to skirt the edge of being melodramatic, preachy, and cheesy, but always managed to avoid falling into it. Most importantly of all, it hit exactly the right tone on the fundamental question. There's a few nagging doubts in terms of the plotting and the in-universe rationale for all of this (which others have pointed out). I think that keeps it from being elevated too highly, but it's still the best episode of the series so far.
Latex Zebra
Sat, Sep 21, 2013, 6:45pm (UTC -5)
This might actually be the best episode of any Trek series.
Nick P.
Mon, Sep 30, 2013, 4:11pm (UTC -5)
OK, first: amazing episode! One of the best of the series... However, I am not sure that I agree with the central theme, that it is wrong for Starfleet to create a race of slaves. The Enterprise is as sophisticated as Data, and has already been able to create sentience ("Elementary, Dear Data"), and there is a fleet of them. Further, Data has saved the ship numerous times, so why is it wrong to want to mass-produce him for Starfleet's needs?
K'Elvis
Thu, Oct 24, 2013, 10:26am (UTC -5)
Sure, you had to suspend disbelief, but this was one of my favorite episodes of TNG. This wouldn't have been resolved on some Starbase, but by properly trained legal officials in a proper court.

This should have been resolved already; Starfleet had already accepted Data as a person by allowing him to enter the Academy and commissioning him. Data's ability to bend a bar is not evidence that he is a thing. As counter-evidence, Picard could have brought in a bar of his own and shown that some members of his crew were strong enough to bend it, while others were not.

To counter the off-switch argument that Riker made, one need only have someone perform the Vulcan nerve pinch, which effectively turns a humanoid off.

If Data had been declared to be property, that wouldn't mean that he was Starfleet's property. Starfleet didn't make him; if he was anyone's property, he would be Dr. Soong's property.

Still, this is an episode well worth suspending disbelief, because the ideas are so profound.
Cammie
Wed, Dec 4, 2013, 5:36pm (UTC -5)
I don't think Riker would have liked it if Data did a Vulcan Nerve Pinch to turn him off.
Cammie
Thu, Dec 5, 2013, 9:29pm (UTC -5)
I love any Star Trek episode with Q, Data, or Spock in it. I think they are the highlight of the show.
Jons
Mon, Feb 17, 2014, 2:53pm (UTC -5)
There is no "we don't know" about him being sentient - the very fact that he spontaneously says (and insists) he's sentient means he is.

And an argument which I think should have been pushed further: organic life isn't any less a machine than Data. The only difference is that it's a self-replicating machine. Animals (humans included) are organic machines whose construction and functioning are determined by DNA sequences (GACT instead of 0 & 1).

As for the comparison with the ship's computer: as a matter of fact, not all organic life is sentient. We have somehow determined, for diverse reasons good or bad, that non-sentient life isn't as respectable as sentient life. In that, the ship's computer isn't Starfleet's property any more than a dog belonging to Starfleet would be. Still, just as a dog isn't a human being, the ship's computer isn't a sentient android. The fact that they're both non-organic has no bearing on this.

In any case, whether it's here or during the Doctor's trial in Voyager, I cannot even begin to understand the arguments of the "they're machines" side. Obviously as portrayed in Star Trek, they ARE sentient (whether we will one day be able to replicate a brain's complexity well enough that this would be possible is another matter entirely).
Yanks
Wed, Mar 5, 2014, 10:21am (UTC -5)
Where did my (our) discussion go?
Shannon
Mon, Aug 11, 2014, 5:00pm (UTC -5)
Totally agree! This is probably one of the best episodes of Star Trek across ALL of the series. And I love that it didn't involve phasers, torpedoes, or silly-looking aliens. This was a moral story about the rights granted to a sentient being of our own design. This is classic Trek, with themes that stretch deep into our society... Patrick Stewart was amazing in this episode, with his oh-so-controlled passion when he was arguing Data's case. "Your honor, the courtroom is a crucible, and when we burn away irrelevancies, we are left with a pure product, the truth for all time." Great stuff! I only wish I could give it 5 stars, because this was an amazing story!
Yanks
Thu, Aug 14, 2014, 5:10pm (UTC -5)
I'll also add that the episode strikes me as a huge dodge of the issue they set themselves up to decide.

One wonders how Data got to serve at all if his entrance committee only had one objection -- Cmdr. Maddox's objection -- and that committee based their decision on sentience like Data said.

But there's another problem: the slavery argument (should we or should we not let Maddox make a race of copies of Cmdr. Data?). The slavery argument only works if you already agree with what Capt. Picard had to argue. The slavery argument fails if you already agree with what Cmdr. Riker had to argue (Data is property, he can no more object to work than the Enterprise computer can object to a refit). It seems to me that the slavery argument presupposes the very thing the hearing is meant to decide and therefore this argument has no place in this hearing.

And Cmdr. Louvois' finding essentially passes the buck (as she all but says at the end of the hearing): she has no good reason to find as she does but she apparently believes erring on the side of giving Data "choice" is a safer route.

But you have to realize the only reason we got that conversation in 10-Forward was because Whoopi is black.

Here is the transcript:

"GUINAN: Do you mean his argument was that good?
PICARD: Riker's presentation was devastating. He almost convinced me.
GUINAN: You've got the harder argument. By his own admission, Data is a machine.
PICARD: That's true.
GUINAN: You're worried about what's going to happen to him?
PICARD: I've had to send people on far more dangerous missions.
GUINAN: Then this should work out fine. Maddox could get lucky and create a whole army of Datas, all very valuable.
PICARD: Oh, yes. No doubt.
GUINAN: He's proved his value to you.
PICARD: In ways that I cannot even begin to calculate.
GUINAN: And now he's about to be ruled the property of Starfleet. That should increase his value.
PICARD: In what way?
GUINAN: Well, consider that in the history of many worlds there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult, or too hazardous. And an army of Datas, all disposable, you don't have to think about their welfare, you don't think about how they feel. Whole generations of disposable people.
PICARD: You're talking about slavery.
GUINAN: I think that's a little harsh.
PICARD: I don't think that's a little harsh. I think that's the truth. But that's a truth we have obscured behind a comfortable, easy euphemism. Property. But that's not the issue at all, is it?"

Nothing in this conversation has ANYTHING to do with proving Data's sentience.

What one could do with a technology or a thing should in no way have any bearing on this trial.

They should have been trying to prove Data was sentient, because then he could be identified as something more than 'property', not that they could make a bunch of him so it isn't right. If Data was proven not to have sentience, then why wouldn't Starfleet want one on every bridge?

This is why this episode, in my view, receives more acclaim than it deserves.

It's nothing more than the liberal machine injecting slavery into a situation where it didn't exist because they wanted to make this episode "moral". It pales in comparison to Uhura's conversation with Lincoln:

"LINCOLN: What a charming negress. Oh, forgive me, my dear. I know in my time some used that term as a description of property.
UHURA: But why should I object to that term, sir? You see, in our century we've learned not to fear words.
KIRK: May I present our communications officer, Lieutenant Uhura.
LINCOLN: The foolishness of my century had me apologizing where no offense was given."

See in this exchange, Uhura responds how one would expect one to respond in the 23rd century, where Gene's vision is true. It doesn't faze her in the slightest, because it shouldn't. They bring a pertinent point up, but not in an accusatory way. In TNG, they inject something that happened 400 years ago in an attempt to justify something it doesn't relate to.

Why was Maddox a self-interested 'evil' white guy? Why did the epiphany for Picard come from a black Guinan? How does that epiphany relate to this case at all? Liberal Hollywood. Poor writing.

Look at Picard's argument.

"PICARD: A single Data, and forgive me, Commander, is a curiosity. A wonder, even. But thousands of Datas. Isn't that becoming a race? And won't we be judged by how we treat that race? Now, tell me, Commander, what is Data?"

If Data isn't sentient, why is "it" different than a toaster? Because "it's" programmed to talk? Do we regret making "races" of starships? ... desktop computers ... etc.? The issue of "a race" is irrelevant, and it and slavery are only injected into his argument because Guinan was black.

Riker's argument WAS impressive, because he put the FACTS on display.

Picard's was not, because he did nothing but put up a "feel bad" smokescreen that had nothing to do with proving whether or not Data, our beloved android, was indeed sentient.

So Picard put on a good dramatic show and Data won, which made us all and Riker feel good, but for all the wrong reasons. The Judge rules that Data was not a toaster, but why - because we might do something wrong with toasters in the future?

If they had proven Data was sentient (or some equivalent), then they could have addressed the whole cloning issue, and that should be why Maddox can't "copy" Data, not because we might mistreat a machine in the future because we will make a bunch of them. But they didn't.
Josh
Thu, Aug 14, 2014, 9:26pm (UTC -5)
"But you have to realize the only reason we got that conversation in 10-Forward was because Whoopie is black."

Well, that is your interpretation, but I think it's fairly clear that Picard considers Data to be self-evidently sentient, yet was unable to argue this from a legal perspective adequately at that point in the episode. The essential argument of the episode - on my reading - is simply that Data is a self-aware being who is entitled to the presumption of sentience like anyone else, even though he is a machine. The corollary is that although Data cannot be "proven" to be sentient, there does not exist any test that can prove it for anyone else either.

As for the slavery angle, Picard chooses that word to express his abhorrence at the idea of a race of sentient beings who might be "owned" and used like, to take your example, desktop computers. This doesn't have anything to do with any "liberal machine". It strikes me as a peculiarly Americentric reading to assume that this must have anything specifically to do with historical slavery in the Americas.
Elliott
Thu, Aug 14, 2014, 9:28pm (UTC -5)
@Yanks: Whoopi's race is relevant to the scene you described, but in a way that breaks the fourth wall, not "because she [Guinan] is black." If Guinan had been, say, Beverly in this scene, the lines would read exactly the same and the truth of the statement would be no less, but the emotional *punch* wouldn't be quite so severe. It is purposefully uncomfortable for that brief moment she looks at the camera and we remember that these are actors with history, and our history with regard to how we have treated other races, especially black people, has been mostly appalling. Again, the substance of the dialogue is what it is, but there's an extra layer to the scene because of Goldberg's race. It's in many ways what "Angel One" failed so miserably at.

Now, on to your other points:

" The slavery argument only works if you already agree with what Capt. Picard had to argue."

Well that's the point. If one hedges on the issue of Data's sentience, one can neatly hide behind the euphemism of property until the full implications of that process are pointed out by the slavery argument. You may not think Data is sentient--maybe he has a soul, maybe he hasn't (as Louvois said)--but if you're wrong, the issue is not the fate of one android, but the implications for how we treat an entire new form of life. Thus, the gravity of respecting this one individual android's sentience is enormous.

"Picard's was not because he did nothing but put up a "feel bad" smokescreen that had nothing to do with proving whether or not Data, our beloved android, was indeed sentient or not."

I'm kind of baffled by this: Picard asked Maddox what the qualifications for sentience were. He named them: intelligence, self-awareness and consciousness. Picard then went on to demonstrate how Data met those requirements, thus proving his sentience. The issues of race and slavery, as I said, have to do with the *implications* of the ruling, not winning the case. Picard's argument was that it wasn't merely a case of pitting the rights of Maddox against those of Data, but humanity's responsibility to the form of life of which Data is a vanguard.
Peremensoe
Fri, Aug 15, 2014, 6:46am (UTC -5)
Josh and Elliott are correct. Also, Guinan refers to "many worlds," so while we the audience recognize the significance of Whoopi's blackness *for us*, in-universe it is explicitly *not* about just black slavery on Earth, or "400 years ago."
Peremensoe
Fri, Aug 15, 2014, 6:52am (UTC -5)
Oh, and it's the TOS depiction of 'Lincoln' that was disingenuous and 'PC.' The real Lincoln assuredly did not consider black people to be humans of equal worth and dignity to himself.
Yanks
Fri, Aug 15, 2014, 8:38am (UTC -5)
Josh & Elliot,

Elliott, you said it yourself. The "implications" can't be used to make the decision here, so the argument is fluff. That angle should have been ruled inadmissible. (Read J.B. Nicholson-Owens' post above.) Take the entire slavery/race thing out and Picard's argument doesn't change at all. She doesn't even mention it in her ruling.

"PHILLIPA: It sits there looking at me, and I don't know what it is. This case has dealt with metaphysics, with questions best left to saints and philosophers. I'm neither competent nor qualified to answer those. I've got to make a ruling, to try to speak to the future. Is Data a machine? Yes. Is he the property of Starfleet? No. We have all been dancing around the basic issue. Does Data have a soul? I don't know that he has. I don't know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose."

I also don't know what "soul" has to do with anything. But that's neither here nor there I guess...

Her ruling is concerning Data and Data only. (as it should have been, that's all this trial was about)

Picard is beaten, goes down to 10-Forward, talks with Guinan and comes away with the race/slavery argument. He didn't pop down and discuss this with Beverly. Like I said, this was just injected here as a "moral boost" to an episode that really didn't need it. How a society will treat more "Datas" is irrelevant here, but it most certainly would be a matter to be addressed later if Data's positronic brain can be replicated.

Picard's argument with Maddox about sentience was all that was needed.

Another thing that gets me is how Data can be a commissioned Starfleet officer and not have the rights any other officer has. I personally don't think this trial should have ever happened. No idea how he can have all the responsibility and no rights. The Enterprise's computer doesn't have any responsibility.

Let's end on a good note.

I personally think the best part of this episode is this:

"DATA: I formally refuse to undergo your procedure.
MADDOX: I will cancel that transfer order.
DATA: Thank you. And, Commander, continue your work. When you are ready, I will still be here. I find some of what you propose intriguing."

Data is one decision away from being dismantled, and he reveals his decision not to participate was not a completely selfish one; he just was not convinced Maddox was competent. He is not opposed to research on him.

If Maddox was competent, would we have had this trial at all?

Intriguing.
Yanks
Fri, Aug 15, 2014, 8:42am (UTC -5)
Peremensoe,

"we the audience recognize the significance of Whoopi's blackness" That's the whole point. We don't get this irrelevant discussion unless Whoopie is in the conversation. It's Hollywood's way.

The "Real" version of Lincloln was not the point. Uhura's response was.
Elliott
Fri, Aug 15, 2014, 1:00pm (UTC -5)
@Yanks :

"The "implications" can't be used to make the decision here, so the argument is fluff. That angle should have been ruled inadmissible. (Read J.B. Nicholson-Owens' post above.) Take the entire slavery/race thing out and Picard's argument doesn't change at all. She doesn't even mention it in her ruling. "

If you're looking at this episode purely as a court case to decide Data's status (I grant you your point about his appointment to Starfleet, by the way), then I guess you can call the slavery arguments fluff, but if, like me, you're looking at it like a piece of drama, the ideas are integral to the story. The discussions are a window across time--can't you imagine similar discussions happening during the slave-trade on earth? A man, for example, who took a slave for a wife, extending to her rights and freedoms he denied to his other slaves because she was special to him? Don't we see in "Author, Author" how the narrowness of Louvois' ruling left the door open for further injustices to AI?

It's good to think critically, and not be myopic about issues like this and it was rewarding to see this kind of thinking on the screen.
Yanks
Fri, Aug 15, 2014, 1:10pm (UTC -5)
Elliott,

I might agree with you if there were hundreds of Datas running around, but there are not. This argument would be more applicable to the EMH in Voyager. Data is unique (positronic brain), so the implication is neither needed nor applicable here. For this reason it is injected Hollywood drama for the sake of implied "moral justice". Nothing more.
Robert
Fri, Aug 15, 2014, 1:29pm (UTC -5)
@Yanks - So if there were only a single black man in the new world (because for some reason we'd only brought one over so far), we couldn't discuss the broad-range implications of denying him rights and how that would affect what happens when we got a whole lot more of them? Wasn't the whole point of Maddox's research to figure out how to make 100s of Datas run around?
Yanks
Fri, Aug 15, 2014, 1:56pm (UTC -5)
Robert,

Not relevant to this trial. Jesus, even you guys try to inject race/slavery into this. Hollywood has you trained well. It's not about a discussion.

Like I said in my 1st post, you can't make a ruling on a "what if". This hearing was on Data's right to choose, not "if there were 100's or 1000's of Datas, what would we do".
Robert
Fri, Aug 15, 2014, 2:27pm (UTC -5)
Yanks,

Not relevant and not central are two different things. I will agree that it is not the central argument, but I think "not relevant" is stretching it. Maddox's goal was to brand Data as a creature with no rights and then make lots more. Guinan (rightfully) deemed that slavery. Picard (rightfully) pointed out during the trial that Maddox intended to create a race of android slaves and that he couldn't even determine (with certainty) that Data wasn't sentient by his own criteria. Yes, taking away an individual's right to choose is troubling enough and was the core issue of this trial, but Maddox DID intend to create a race of superhuman sentient slaves. That was the point of the episode, and addressing it isn't a hyper-liberal Hollywood poppycock conspiracy :).
msw188
Fri, Aug 15, 2014, 5:44pm (UTC -5)
Yanks,
I believe your opinion here is too idealistic. A trial such as this one is more than a logical investigation - it is an attempted interpretation of both the meaning and the purpose of law. Not all trials need to be this way; in some cases (most cases?) a trial should be about logically or reasonably uncovering truth. But in cases such as these, truth is not easily definable, and so the meaning and purpose of law become as important as the logical statements themselves.

Picard's arguments (grossly simplified - I'll agree that this episode is slightly overrated) can be viewed as:
1. No satisfactory definition for sentience exists that will allow for a logic-based ruling.
2. Judging non-sentience in error will have consequences akin to slavery, which undermines the meaning and purpose of Federation law.

He 'proves' 1 first (flimsy), and proceeds to claim 2. Given the time constraints of the episode, I think this is solid writing. Louvois' comments cement this purpose of the writers, in my opinion. She cannot hope to logically decide whether or not Data has sentience (or a 'soul', whatever she means by that), and so the only recourse is to "err on the side of caution."
Peremensoe
Fri, Aug 15, 2014, 7:56pm (UTC -5)
Yanks, a scene can have two meanings or purposes at the same time. The better the episode is, the more likely it will feature such scenes (or perhaps it's the other way round).
Yanks
Mon, Aug 18, 2014, 9:15am (UTC -5)
Robert / msw188,

I think you're basing your argument on Maddox's potential for success, which by Data's own admission was slim at best; hence his refusal to participate (and protecting his memories). I don't even think we know about Lore yet at this point in the series (could be wrong there).

And, if Data doesn't get the right to choose (loosely based on sentience) "he" IS no different than a toaster, or a computer driven assembly plant etc.

Funny how this very same issue was brought up many times before I chimed in and no one had any issues, but I bring up the truth about Guinan (watch the scene in 10-Forward if you don't believe me, it's plain as day) and folks all of a sudden have objections.

The potential for a "race" only exists if Data wins. It had no place in this hearing aside from courtroom fluff.

I enjoy the discussion with you folks. I guess we’ll have to agree to disagree on this issue.

Peremensoe,

Sure ... or not. :-)
Robert
Mon, Aug 18, 2014, 9:26am (UTC -5)
By your own admission though, your issue with it is that it's fluff in the courtroom. But I think the whole point of THIS court case was just to make us think.

On the whole I chimed in because if anyone's argument was fluff and irrelevant, it was Riker's, and you didn't complain the same about that. I mean... one can detach and reattach your arm too (although perhaps not so easily), and with certain drugs I can turn you on and off as well.

His strength and processing capabilities are also irrelevant; I have strength (though not so great) and processing speed (again, not so great) as well :P I assume you and I are sentient though!

And I totally agree with you about Guinan/Whoopi and the 4th wall breaking being the reason for the conversation. To me personally though, that doesn't detract from the episode (and perhaps improves it), but you can find it jarring if you do... obviously we can disagree :)

As for Riker, I suppose in headcanon we can pretend he was making an argument that looked impressive but had no substance so that Picard could easily beat him.
msw188
Mon, Aug 18, 2014, 10:43am (UTC -5)
Yanks,
"And, if Data doesn't get the right to choose (loosely based on sentience) "he" IS no different than a toaster, or a computer driven assembly plant etc... The potential for a "race" only exists if Data wins."

I think this is the base cause for my disagreement with you. This attitude seems to suggest that when a conclusion is reached in a court, it is automatically correct. But if Data is declared to be a toaster by the court, while in fact being sentient, then the potential for a "race" does exist. It is this possibility that has the worst possible ramifications, regardless of how slim its chances are (and that slimness is in the eyes of Data only, not the Judge). In the absence of a workable definition for sentience and/or race, the avoidance of this possibility becomes the focus of Picard's argument, and I don't think that's out of place for a courtroom.

I don't have a strong opinion on the Guinan issue. It felt a little bit contrived to me to make sure that the black person brought this up on the Enterprise, but it does fit the in-universe characters. Guinan is wise and thinks about the bigger picture, not about singular logic. Picard trusts Guinan and always values what she has to say. They're also the two best actors in the series, and this is meant to be a big 'turning point' scene. If one is willing to allow potential consequences into a courtroom in cases where definitions are unclear (as I am), then Guinan's scene fits in the plot of the episode just fine regardless of the 'message' for today's audience. And that's the way such messages should be handled, I think.

PS: I only just found this website on the day I posted my first response (if there's some way to check, you can see it's my first ever post here). I can't say for sure if I would have brought any of this up before the Guinan comment, but I think I would have if you had ignored that part and just put forward the "trial should focus on Data and only Data" argument.
Yanks
Mon, Aug 18, 2014, 1:37pm (UTC -5)
Robert,

Just hate that being thrown in our faces. Too much like real life "news". But you’re right, probably just me.

I thought Riker aptly made his case; hell, even the Judge said Data was a machine. While I'm not an attorney, when I watched this Riker seemed to lay it all out there, very clearly. I mean, without a clear definition of what sentience is, how would he disprove Data having it? Probably better to not bring it up and let Picard climb that mountain if he chooses.

msw188,

One can click on your name and see all your posts. (great points about 'Yesterday's Enterprise' and 'The Offspring' BTW)

I tend to be brutally honest :-) A failing I'm sure and you are correct, there are politically correct ways to express observations, I just chose to "post from the heart" :-)

When I first watched MoM early in 2002 (I think) it didn't faze me, but now that Whoopi is on The View etc... it changed my "viewing pleasure" I guess.
msw188
Mon, Aug 18, 2014, 2:09pm (UTC -5)
Yanks,
Cool, thanks for your kind words. I'm glad somebody might feel similarly about Yesterday's Enterprise.

Measure of a Man is one of those episodes that I'm pretty certain I saw back in the late 80s or 90s, but I didn't really remember. It was probably a bit over my head then. Looking at it now, and after all of this discussion, I think I'd give it a solid 3.5 stars. The arising of the conflict strains believability, and even though I think Guinan's remarks fit into the story and courtroom well, they still feel just a bit too much on the nose. The episode asks some very worthwhile questions and explores them well enough for me, but it still lacks the purely emotional element that the best episodes do carry.

For comparison, I'd currently give 4 stars to Offspring, Best of Both Worlds, and Darmok. I'm in mid-season 5 at the moment.
Yanks
Mon, Aug 18, 2014, 4:13pm (UTC -5)
msw188,

I've just finished watching DS9 and am trying to catch up on my reviews. When I get to TNG, Offspring will most definitely get 5 of 4 stars. I'm a sap and tear up every time I watch it. :-)
$G
Sat, Dec 13, 2014, 3:49pm (UTC -5)
A lot of lively discussion here. I wish I'd been around to partake!

As for this episode, it's TNG's "Duet" as far as I'm concerned. The "so this show IS worth watching" moment. Even barring that, a fantastic hour. 4 stars easy. Top-tier Trek.
Nic
Sat, Dec 20, 2014, 8:39pm (UTC -5)
I finally got the chance to see the extended version of this episode on Blu-Ray. I must echo ReptilianSamurai's comments. I didn't think it was possible to improve on a classic, but I was spellbound. Of particular interest is a scene where Riker and Troi discuss whether their view of Data's sentience is true or imagined (whether they anthropomorphize him). Even as a telepath, Troi isn't sure. This scene makes Riker's arc much more involving, and I paid much greater attention to Frakes' performance.

All in all, I think this episode matches BSG's "Pegasus" in emotional impact and social relevance.
Trajan
Wed, Jun 17, 2015, 1:08pm (UTC -5)
I've just caught up with this discussion following my original comments from years ago. I still hate the episode but appreciate that I'm in the minority. Incidentally, I liked the suggestion of using the Vulcan neck pinch on Riker. Much kinder than my earlier suggestion of a baseball bat!

I didn't know the writer of the ep. was a lawyer. One day I'd like to have a discussion with her as to the fairest form of judicial system for a post-scarcity society. However, I'll refrain from commenting on Measure of a Man again.

Anyone interested in alternative depictions of future legal systems could do worse than to check out Frank Herbert's 'The Dosadi Experiment' which always appealed to me when I was practising.
Macca
Sat, Jul 18, 2015, 5:17pm (UTC -5)
Disclaimer: I've only scanned through the comments above so sorry if this has already been discussed.

I revisited this episode yesterday after starting to watch the excellent Ch4/AMC series 'Humans', a series about robotic servants who gain self-awareness.

I thought Picard's arguments were excellent but I was interested to note that the judge only ruled that Data has the right to choose, not that he is a sentient life form. I'm scratching my head about a later episode of TNG that deals with this issue in more depth. Maybe someone can help me out?

I was also interested in one of Maddox's arguments that was largely ignored by the episode: that if Data were a box on wheels with largely the same skills and ability to communicate, would we be having this debate?

Are the crew motivated by the fact that Data looks like them, and would they fight with the same zeal if Data looked like the robot from Lost in Space?
Taylor
Sun, Aug 23, 2015, 3:05pm (UTC -5)
OK, I'm also an attorney but I can't say I care that the legal "procedures" may seem unrealistic - it's Star Trek and there are so many things that we could nitpick in any episode, many of them considerable improbabilities. Admittedly this is an individual thing, depends on how some episodes strike us, and sometimes I'm the nitpicker as well ...

The subject matter is compelling. Re-watching this episode I kept thinking of Blade Runner ... which very much taps into the "slavery" issue. That's an emotionally powerful angle, although the most intriguing aspect for me is simply that dividing line between humanity and AI, and how we envision an ultimate future when that line is blurred ... thus why it's been the subject of so much sci-fi over the decades.
Diamond Dave
Mon, Aug 24, 2015, 3:36pm (UTC -5)
A brave episode, tackling as it does some deep philosophical points, and reaching its climax in the static form of a courtroom drama. It grounds itself in a thorough examination of both arguments - a particular highlight is Riker's unwilling but devastating critique. But more so is Picard's absolute certainty of Data's right to choose and his willingness to support him. And perhaps this is one theme that isn't explored - that as humans, we would feel no emotional attachment to a toaster or a ship's computer, but we would when serving with Data. And is that not a measure of Data's sentience?

To my mind the episode does have flaws - the presence of a handy legal officer who, surprise, has a back story with Picard, as well as the flimsy excuse to pit Riker and Picard against each other. But in its intelligent examination of the issues, this is a cut above the average episode. 3.5 stars.
Gabriel
Sat, Sep 12, 2015, 10:32am (UTC -5)
I'm here in 2015 watching it and just wanted to say that I dropped some tears. Really. Enough said.
Chef
Sat, Sep 19, 2015, 5:55pm (UTC -5)
And then a decade later a race of disposable beings were put to work by Starfleet, scrubbing plasma conduits.
Kiamau
Thu, Sep 24, 2015, 9:53pm (UTC -5)
This is TNG's first great episode but it is hardly TNG's "Duet."
grumpy_otter
Mon, Oct 19, 2015, 6:39am (UTC -5)
I love that there are actual lawyers above me discussing the legalities of this trial! I think they raise good points, but I still love this episode.

To me, this episode is much more about friendship than law, or Data's sapience. I teach history, and I actually show this episode to my students to demonstrate how important it is to be able to argue the other side of your thesis. If you do not recognize the strength of the claims on the other side, you cannot effectively argue your own case. If Riker could not put aside his belief to make the argument, his friend might have been lost.

I have one nitpick about this episode, though I recognize it is a common mistake made by many people, and is so common that the OED might as well change the definition. Data has no sentience at all because sentience refers to the ability to feel. What he DOES have is sapience--the ability to think. Self-awareness is a component of sapience. Even many animals have sentience, but do they have sapience?

CircleofLight
Mon, Nov 2, 2015, 3:56pm (UTC -5)
I love the arguments presented in the episode, but you really need to take off your lawyer hat before watching. And Star Trek is notorious for misunderstanding the law, from this episode to "The Drumhead" to "Rules of Engagement".

I'll just make one writing critique before I move on to the good. Forcing Riker to litigate against his friend and fellow officer is terribly forced and unnecessary drama. There's a huge conflict of interest: Riker stands to lose a valuable officer, which is going to make him perform his role badly no matter what threats some JAG officer throws out, so why does she even put him in that position to begin with? And if Maddox was so intent on winning, why didn't he present the case on his own, or hire a professional he could trust over Riker?

That aside, this episode is great because of the morality issues Jammer brought up in his review. Patrick Stewart's speech is well-delivered, and convincing despite his character's admitted misgivings towards law. Finally, this episode brings out a lot of interpersonal relationships Data has among the crew, and shows just how much of an impact the possibly-sentient android has.
K'Elvis
Tue, Mar 22, 2016, 9:49am (UTC -5)
As Data behaves as if he is sentient - he passes the Turing Test with flying colors - and has been accepted as a sentient being by the Federation and Starfleet to this point, the burden of proof lies with Riker to prove Data is not sentient. Riker establishes that Data was built by a human, that Data is physically stronger than a human, and that, unlike a human, Data can be turned off. This demonstrates merely that Data is not human, which is neither in dispute nor relevant. It does not demonstrate that Data is not sentient. I suppose it would have been unsatisfying to simply rule in Data's favor because Riker failed to make the case that Data was not sentient. Data maintains a presumption of sentience, so the slavery analogy remains relevant.
Robert
Tue, Mar 22, 2016, 11:02am (UTC -5)
"Its responses dictated by an elaborate software programme written by a man"

This part at least is relevant. I personally would not consider anything to be sentient if this were true. But Riker is wrong: in "In Theory" Data adds his own subroutines for dating. And it's not the only time.

Computer programs that can improve/learn might be sentient. Computer programs whose responses are "dictated [SOLELY] by an elaborate software programme written by a man" are not, IMHO. I think what Riker was going for was that it was all a really convincing "act".

That said, he has no proof for that. And certainly not removing his arm or turning him off. Doctor Crusher could turn Riker off with a hypospray.
desmirelle
Wed, Jun 22, 2016, 1:09am (UTC -5)
This WOULD HAVE BEEN a great episode if it had been a flashback on Data getting into Starfleet Academy. A non-sentient being (flesh or machine) would not be allowed to go through the Academy. It's an unspoken requirement. Were it not, the Enterprise would be making decisions, Picard would be a seat warmer, Riker would be decorative but not useful, and Geordi relevant only because the ship doesn't have a pair of hands for the little things. As this episode stands, I expect the next episode of the JAG officer's life involved a court martial for trying to make law when she's a judge; but more importantly, for violating the civil rights of a being simply BECAUSE OF HIS RACE!!!!! (And I'm sure that would involve Federation charges, not military; there must be some guarantee of rights for the various species involved with the UFP.)

Chrome
Wed, Jun 22, 2016, 10:12am (UTC -5)
@desmirelle

The implication is that Data is a new type of being and they had no reason to bar him from joining Starfleet. The *legal matter* of his citizenship was never raised.

I could totally buy this happening, actually. There have been recent news stories where students in Texas have become valedictorians of their classes only to reveal that they're undocumented immigrants. The university these students got accepted to never exposed them, and in fact offered to support them if difficulties arose.
desmirelle
Wed, Jun 22, 2016, 9:58pm (UTC -5)
Chrome

Wrong analogy. The question was not "Was Data a citizen"; the question was "Was Data sentient" (which had as the unspoken codicil "and therefore master of his own destiny"). If we're treating the show seriously, as this episode so badly wanted to be thought-provoking (and only ended up provoking me), then he had to be sentient to attend the Academy, his citizenship be cursed for eternity.

My point is not that it was a bad concept (proving Data sentient), but that this particular question HAD to have been settled before he was allowed to attend. The number of students is limited; no parent is going to let a machine have their child's place without said machine proving it's just as "real mentally" as said child. The Command line of study includes judgment calls; hence the episode where Troi keeps failing. Sentience is required. Citizenship was never addressed.
Chrome
Wed, Jun 22, 2016, 11:05pm (UTC -5)
@desmirelle

I think you took the analogy too literally. The point is even lofty institutions like Starfleet will overlook fundamental details we all take for granted. Starfleet might have been happy to let in Data just to boast to other institutions that the galaxy's only functioning android goes to Starfleet for its renowned training.

Also, Judge Louvois wasn't ruling on sentience (stating sentience was a question better left to philosophers). All she cared about was whether Data, even as a possibly sentient machine, was property of Starfleet.

So, yes, you bring up good evidence that Picard also brought up: Data's service record. But that alone didn't settle the matter of Data's rights here.
desmirelle
Thu, Jun 23, 2016, 1:19am (UTC -5)
Chrome

I respectfully disagree. Melinda Snodgrass had a wonderful idea; they just misplaced it in time. I'm not arguing that the question shouldn't have been asked; I'm saying in order for Data to enter Starfleet, it had to be asked BEFORE he entered - otherwise they could've let a trained monkey attend and bragged on how well it did.

The underlying issue (and what she ruled upon) was sentience. The judge saying she wasn't ruling on it is ironic, since she would have done what she obviously wanted: made sure she made her name as a judge. Since Starfleet has no slavery policy, the only way Data could be property was if he was not sentient; ergo, sentience is the primary decision being made. The Enterprise, DS9, and the starbase the case was being tried on have no sentience and therefore are property. And if I'm going to take the premise seriously, I will say again, this was properly a flashback on how Data got into Starfleet Academy. Starfleet is (as they kept hammering away at us with Wesley's attempts to take entrance exams) an elite organization with lines around the block waiting to get in. If we're taking this seriously as a story, it has to get me to suspend my disbelief. I can't if I'm to believe that this question wasn't answered well before this. It isn't logical.

Taking it seriously is the point of all this. (Which makes 'Genesis' so much worse than the review says....)
Chrome
Thu, Jun 23, 2016, 4:39am (UTC -5)
You can disagree all you want, but Data's sentience was never decided at that hearing.

The original draft of the script (Google it) actually has Picard explaining before the hearing that he needs to prove Data is a sentient *life form*. Data hears this and insists that he's only a machine.

And slavery was never an issue if Data was just considered a conscious machine. Conscious machines may be able to attend the academy, but Picard needed to press that Data was a life form for the sake of the hearing.

I suppose they could've kept the original script intact, but it was probably too lengthy. And I don't see how a flashback would fit, especially because TNG never uses them. This isn't Lost.
Desmirelle
Thu, Jun 23, 2016, 12:01pm (UTC -5)
*sigh*

You can keep writing it all you want, but sentience was the UNDERLYING question being decided. In order to be property, Data could not be sentient. If Data is not sentient, he has no business being in command of sentient beings. Now I'm back to why the Enterprise has no rank. What was written in an early draft is rewritten for a reason (like that exchange between Picard, Geordi & Data is illogical).

As a flashback episode, it would have been excellent (and possibly given us more "what happened when" episodes, which would have stood us in good stead as an alternative to "Genesis").

Slavery was exactly the issue, as the Guinan/Picard exchange highlighted. The problem was that their argument was circular and depended upon the sentience of the beings in question. Maddox wanted to treat a fellow Starfleet officer like his own personal Lego set. You can't do that to a sentient being; Crusher can't do exploratory surgery on Worf just to see how Klingons work... It's insulting to expect me to believe that the issue hasn't been decided BEFORE this point. That's my point.

You keep referring to Data as a "conscious" machine. To me that merely states that he's "on" as opposed to "off". If you're using it to indicate he's aware that he's a machine and operating - and since he did not want to be turned off - then you're actually saying he's sentient.
Chrome
Thu, Jun 23, 2016, 12:05pm (UTC -5)
"sentience was the UNDERLYING question being decided. In order to be property, Data could not be sentient."

This was never stated in the episode. It was just the argument Picard used to push that Data was a life form.

"If Data is not sentient, he has no business being in command of sentient beings."

This is your opinion. Please back this up with statements from the series or comments from the writers. Otherwise, you're just stating your personal preferences, without giving us any objective reason why we should agree with you.
William B
Thu, Jun 23, 2016, 12:25pm (UTC -5)
I think the slavery analogy holds, because slaves *were* considered property, despite being sentient. The slavery argument is not actually circular -- it is important because it changes the scope of the conversation from one Data, who Picard says "is a curiosity," to an entire race. If androids are not sentient or sapient, "have no soul" to use Louvois's final point, then it does not matter how they are treated. If it is *possible* that they are sentient or sapient, then what would be a regrettable outcome if *one* android had his rights trampled on would become a horror if an entire race came into being with their rights denied. The argument Picard puts forth is that there will be far-reaching consequences beyond the fate of one single android, and thus that the bar for proving that it is permissible to treat androids as property should be much higher. One can disagree with this, for example by arguing that it is just as much an injustice if *one* Data has his rights trampled on, but that is what Picard is saying.

Along those lines, my admittedly weak understanding is there are times in history where slaves *did* serve in the military (e.g. the American Revolutionary War).

One would expect Starfleet to have higher standards, of course. And certainly I think that the episode would be strengthened by some sort of explanation of how Data's status could be so unsettled at this point in time. But basically I think that there are some ambiguous areas of the law where people more or less follow something like habit until someone specifically challenges them. People who were not considered full persons could serve in the military at different times in history, and I see it as believable, on some level, that the decision of whether or not to admit Data to Starfleet and the decision of whether or not he was considered a person and even a Federation citizen were basically not considered the same one. We know that people who are not Federation citizens can join Starfleet (Nog, e.g.). Given that Data was found by Starfleet officers, it seems possible that they strongly advocated for him, perhaps pushing for some of the red tape to be pushed aside.

I think Maddox' argument would be strengthened if he claimed that Data was Starfleet property not because he's a non-person who joined the service, but because he was salvaged by Starfleet officers.

That said, I do find it a bit hard to believe that Data's status is quite so unsettled. The main reason I believe it as much as I do is that I am willing to accept a fair amount of bureaucratic/legal incompetence and uncertainty in dealing with Data in the years before this. In fact, a recurring theme of the series is that no one is really ready for Data -- they are unprepared for what happens if Data goes rogue (as in Brothers, Clues, etc.), they are unprepared for Data to "procreate," etc. I think it is believable in that Data is so carefully designed to placate people's concerns about him that people go into denial about the thorny issues that he poses; of course, Lore articulates that Data was specifically created to be less threatening to those around him, and while Lore's spin on it is partly because he's an evil narcissist, I don't think he's entirely wrong. I do wish that a bit more background on Data could have been provided, in particular how he spent his time between being found on Omicron Theta and on the Enterprise; he says in Datalore that he spent so many years as an ensign, so many as a lieutenant, etc., but we don't really know where and he is so...new, undeveloped in Encounter at Farpoint that I have seen people suspect that Starfleet kept him in relative isolation for several years. He says that Geordi was his first-ever friend.
William B
Thu, Jun 23, 2016, 12:43pm (UTC -5)
Which is to say, there are some logical holes in the arguments in this episode which one has to get past in order to appreciate it -- I am willing to suspend my issues because I think it is fantastic, and the episode at least *does* acknowledge, to some degree, that Data, Picard, Riker, Maddox, and Louvois are somewhat out of their league in even articulating the issues, let alone fully arguing them. Anyway, one of the issues that is not often brought up is that if you accept that Data is not a person but property, then one has still not established that he is *Starfleet* property, rather than, say, Picard's personal property. I get why this is skirted over, because, as I said, he was found by Starfleet officers and no *non*-Starfleet people have any reasonable claim on him besides himself, with Soong and the rest of the Omicron Theta colonists (apparently) dead.
Chrome
Thu, Jun 23, 2016, 12:45pm (UTC -5)
@William B

Once again, you make some excellent points. This is definitely a case of "ambiguous areas of the law where people more or less follow something like habit until someone specifically challenges them". Viewers may be astonished by Starfleet not answering an old and obvious question, but even in our own laws there remain a lot of legal uncertainties. The right for a person to decide who they can marry was only established in the U.S. last year after thousands of years of marriages.

Also, the "bureaucratic/legal incompetence and uncertainty" seem to be recurring themes not just with Data, but with other legal questions. Surely how the law treats Data in this episode is a travesty to intelligent life forms, but then we get episodes like "Rules of Engagement" where we're shown that Starfleet is very ready to throw away the rights of its own sapient officers to an aggressive power for political reasons.

Some background into Data would've been nice, but I don't think it was necessary for this episode to work. The judge ends with the verdict of Data's nature being an open-ended question. If the episode gave us the answer in a flashback to an earlier time when Data was established as sentient, there'd be absolutely no reason to consider Riker's or Maddox's arguments.
Peter G.
Thu, Jun 23, 2016, 1:07pm (UTC -5)
The funny thing is, the episode really isn't about whether Data in particular is sentient, but about how to define sentience in the first place. And since the writers don't have an answer for that I can see why their resolution was open-ended. What IS the difference between Data and the Enterprise computer? A more sophisticated neural net? Simply the directives each is given? We already know that Data is 100% susceptible to any change in programming, completely undermining the Data we knew before. Then again a person's mind can be messed with as well. However no one programmed that human from scratch, whereas 100% of Data's personality stems from programming that learned and expanded itself.

What if the Enterprise computer was given directives to teach itself too? Would it have a right to decide where it wants to go? It's simply a matter of programming the AI. So to me, the real question is about AI, not about man vs machine. Since Star Trek has a virtually non-existent track record on the issue of AI, this was obviously not going to be addressed, even though it's the only issue to discuss here. Is it possible to create sentient life just by chaining together strings of code and clicking "save file"? If so, the Federation might need to have some strict laws about irresponsible creation of life by programmers. It's hard enough to argue that a string of code is life at all, let alone sentient, since its appeals to having wants are reflections of code inserted.

For instance I can write a 20-line program in BASIC that says "I am alive", and when asked if the program wants to die, it will reply with "Please, I do not want to die." Just seeing that phrase on the screen might pull heartstrings, but I think defining a line of code that says "I want to live" as being sentient scraps any meaningful sense of the word. Is Data inherently different from this 20-line piece of code, really?
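Something like the following sketch (Python here, standing in for the hypothetical 20-line BASIC program; the function name is invented purely for illustration) makes the point concrete: the plea is a canned string, and nothing in the program models an actual desire.

def respond(prompt):
    # The "plea" is a hard-coded string; nothing here models wanting anything.
    if "die" in prompt.lower():
        return "Please, I do not want to die."
    return "I am alive."

print(respond("Do you want to die?"))  # prints the canned plea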

The ending I would have liked would have been for them to say they could not make a determination on Data since his technology was beyond their understanding. The reason to keep him in Starfleet with his own set of rights should have stemmed from a mutual decision by all parties to *choose* to recognize his rights as an act of goodwill towards a potential life form; to err on the side of respect even in the face of the unknown. That is the Federation way, and that's what should have made the final determination.
FlyingSquirrel
Thu, Jun 23, 2016, 1:28pm (UTC -5)
I'm not sure the Enterprise computer would really be considered an AI. My recollection is that it does have certain "canned" responses when asked a question it doesn't understand, suggesting that it is programmed to understand a variety of speech patterns but doesn't actually think on its own. It's perhaps closer to what would be called a virtual intelligence in the Mass Effect universe:

masseffect.wikia.com/wiki/Virtual_Intelligence

As for Data and the question of AI sentience, I'm not sure that's a question that anyone, no matter how far in the future, can answer, simply because consciousness is a subjective experience. You can't prove that Data is actually self-aware and conscious, but you can't prove that about anyone other than yourself either. Yes, he's vulnerable to being reprogrammed, but humans have been known to exhibit personality changes due to brain injuries, and nobody would argue that they're no longer sentient or conscious at that point. My feeling is that any AIs with the same range of behavior as what Data (or the Doctor on Voyager) exhibits should have the same rights as humans out of a principle of erring on the side of caution - I'd rather grant human rights to non-sentient beings than deny them to sentient beings.
Peter g.
Thu, Jun 23, 2016, 1:43pm (UTC -5)
@ FlyingSquirrel.

I guess I shouldn't bring up "Emergence", in which the Enterprise computer (or maybe all its integrated systems along with the computer) develops signs of life. The reason I shouldn't is because the episode is dumb.

Anyhow, I get why it's tempting to say "we'll never know", but at the end of the day a determination has to be made about which kinds of code would or would not count as sentient life. You might want to be agnostic or just say they're all sentient, but then can you arrest and jail someone for writing a program and then deleting it? This is the kind of issue we're talking about. Can someone 'murder' Data in the legal sense, or merely destroy him in an act of vandalism? And what if Data's neural net was contained in a box instead of in a humanoid body? Same answer?
William B
Thu, Jun 23, 2016, 2:06pm (UTC -5)
@Peter G., absolutely it should be made clear (in-universe, and would be good to have been made clearer for the audience) where the lines are drawn between Data and the computer and other technological life forms. That said, I think it is analogous to arguments about biological life forms. Humans have certain rights. Non-human animals, particularly mammals, have some very limited rights. Plants have virtually none, with a few particular exceptions (protected forests, and sometimes individual trees). The difference between Data and a twenty-line length of code might be equivalent to the difference between a human and a virus. That Data's case is the only one settled in this episode strikes me as believable; the Federation should be a more enlightened body, but they are fumbling in the dark here, and the lack of rigour in the human process of defining the legal differences between humans and other animals, and the reason behind it, makes me accept the halting way in which "AI rights" are dealt with on a case-by-case basis in Trek.

I agree that a little more focus on what the difference between Data and the Enterprise computer *is* would be appreciated. That said, I think that the tactic that the episode takes, which is to ask what distinguishes Data from a human, is also pretty valid. The main qualitative difference between Data and a 20-line bit of BASIC code is that Data has an adaptive program which is, as stated in the episode, able to adapt to new information. Data believes that there is an ineffable quality that goes with it, and Data's friends would tend to agree. There is no way for us to guarantee this. The positronic brain is designed to replicate aspects of the human brain, in part (as well as other qualities) in an attempt to not just emulate but also reproduce humanity. All right, so the question is what distinguishes Data's brain from a human brain. There are a few possibilities:

1. Humans are sentient in ways that require some sort of metaphysical element. There is some element of humans that make *any* "constructed" being impossible to program to have human level sentience, perhaps because there is something in humans (and perhaps other living beings) that is not dependent on the physical at all.
2. Humans are sentient in some way that obeys entirely the physical laws of the universe. It may be possible to create a "constructed" sentient being, but Data is not one.
3. Humans are sentient in some way that obeys entirely the physical laws of the universe, and Data also is a sentient being who similarly is able to exist (as an emergent phenomenon). Some "constructed" intelligences do not have this quality.
4. Humans are sentient, as are all "constructed" intelligences, forming some sort of spectrum.
5. Sentience in the way that we tend to describe it does not actually exist; it is an illusion common to humans that they are sentient but it is not true in any particular way. Data is not sentient either, of course, and so what happens to him hardly matters, but the same extends to humans.

5 is mostly eliminated because we *experience* sentience in ourselves, and conclude that other humans are likely to have a sufficiently similar experience to ourselves. However, it can also go the other way. I could certainly imagine Lore, if he were so inclined, arguing that it is impossible for biological beings created by chance with brains running on electron rather than positron signals to be truly sentient.

Anyway, my impression is that the positronic net of Data's is sufficiently similar to the human brain in physical functioning, despite in other ways being very different, that it is reasonable to believe that whatever process that endows humans with sentience endows Data as well. Maddox is, of course, right that some of the reason for concluding this about Data rather than a box with wheels is that Data is designed to look human and to be anthropomorphized. That is rather the subject of The Quality of Life (important though obviously flawed).

That Data is a constructed being does not seem to me to be necessarily all that important. Certainly, it may be that it is impossible for a human to construct something with sufficient physical sophistication to match the complexity of the human brain; essentially humans are competing with millions of years of natural selection. However, if Data has an internal life and sentience, then he has it, and it does not seem to me that it diminishes his internal life that his brain was constructed with conscious intent. In any case, if the argument is not about the experience of sentience and internal life but a matter of free will and ability to break free of programming, I do not think it is a settled matter that humans are able to break free of the physical states in the brain, or of broader biological programming; that people are unpredictable can be a matter of all variables being too complex to account for, or of simply random processes which are similarly not controlled (i.e. quantum indeterminacy does not actually imply that random outcomes are *chosen* by consciousness). I think this is what Louvois is saying when she argues that she does not know if she has a soul. While she presumably does believe that she is conscious, she cannot prove it.

I tend to think that the episode does more or less end with Louvois (and Picard, to a degree) *deciding* to choose to err on the side of granting Data rights. The episode does to some extent frame the decision, at least on Picard's part (in Picard's argument) as being a matter of living up to the Starfleet philosophy of seeking out new life, and of wanting to be clear that they should consider what kind of people they are; whether "they" refers to humans or to the Federation at large is hard to say, because there is still some ambiguity (in both TOS and early TNG, and to some extent extending forward) about whether the subject of the show is *specifically* humanity or the Federation overall.

To me, I think that Data's story, including this episode, has a lot of resonance even if it is at some point, somehow, conclusively proven that no electronic or positronic created devices could ever have something like consciousness. Whatever else we humans may be, we are also physical beings, who obey physical laws, who, like Data, come with hardware and develop changing software from our learning over time, and whose ability to make our own choices is not entirely easy to understand. Even things like the emotion chip have resonance, given how much easier it is to change one's emotional state with certain drugs or other treatments. Is it possible to find meaning while viewing our identities as intrinsically (perhaps even *exclusively*) tied to the physical reality of our brains? Can we define a soul without recourse to metaphysics? This is not even arguing that there *is* no metaphysical explanation for a soul, but with Data a biological or theological appeal to our humanity is eliminated, for one character at least. I think that this is a lot of what this episode is about.
FlyingSquirrel
Thu, Jun 23, 2016, 2:12pm (UTC -5)
@ Peter G.

"Emergence" was goofy, but wasn't there a scene where they discussed what sort of action to take in light of the fact that they might be dealing with a sentient entity that was trying to communicate? Also, I think the idea was that while a sentient mind seemed to have somehow developed from the ship's computer, the computer in its normal state was not sentient or self-aware.

My own view on AIs, incidentally, is that we probably shouldn't create them if we aren't prepared to grant them individual rights, precisely because we'll end up with these potentially unanswerable questions. I don't know enough about computer science to answer your question about writing and deleting a program, just because I'd need to know more about what would go into the potential process of creating an AI and what kind of testing could be done before activation.

If Data were contained in a box instead of an android body, I actually don't have much trouble saying yes, he should have the same rights. Obviously he wouldn't be able to move around, but I'd impose the same prohibitions against turning him off without his consent or otherwise messing with his programming.
William B
Thu, Jun 23, 2016, 2:23pm (UTC -5)
Incidentally, I like the way Data's status remains somewhat undecided throughout the show. The Federation, I think, *should* actually make up its mind and make a harder determination of what his rights are, but the fuzziness strikes me as very believable. I like, too, that even *Picard* swings between being Data's staunchest advocate and using the threat of treating Data as a defective piece of machinery in something like "Clues." And even Data vacillates. In particular, note that Data's deactivation of Lore in "Descent" goes without much fanfare; certainly Lore is dangerous to the extreme, but I suspect that if Data had killed a human adversary in quite the way he takes Lore out, that there would have been more questions asked about whether he did everything he could. I have been wanting for a while to write about Data's choice in "Inheritance," and how I think his decision not to reveal to Julianna that she is an android reflects a great deal about how Data views himself and his status and his quest for humanity at that point in the series and the tragic connotations thereof. Even though everyone on the show more or less takes the leap of faith that Data is, or has the potential to be, a real boy, it's an act of faith that needs to be regularly renewed and it gets called into question, with characters suddenly reversing themselves because no one is really that sure, even though he's their friend.

I do think that there are some significant problems with the show for not going far enough with Data (and later the Doctor) in following through all the arguments about where exactly the boundaries are supposed to be between personhood/non-personhood, and in allowing other characters to maintain a kind of agnosticism when they should really have to make up their minds definitely. I think that it's very reasonable to object and I don't want to try to come across as saying that one *has* to like the show's overall attitude and the contradictions it runs into. That said, it generally works very well for me with Data (and from what I remember, pretty well with the Doctor). I think that the...emotional dynamics, for lack of a better term, generally work, but there's no question for me that some of the wishy-washiness on the character and what distinguishes him from other AI and what distinguishes him from humans and etc. is the result of writers' backing off from some of the challenges posed by the character rather than *purely* leaving the character's status regularly open for re-review for emotional/mythological reasons. I like the result a lot because what *is* done with Data in the show means a lot to me personally and so I'm willing to overlook a fair amount, but I don't expect everyone to.
Peter G.
Thu, Jun 23, 2016, 2:41pm (UTC -5)
@ FlyingSquirrel,

If you afford sentient rights to Data-as-a-box, then the physical housing becomes irrelevant and what you're really doing is granting sentient status to code. That's fine in a sense, but that opens up, as mentioned, a massive quagmire about who can write this code, delete it, alter it, and even maybe about what kinds of attributes it can be given in the first place. Should it be illegal to write a line of code that makes the program "malevolent"? How about merely selfish and prone to kill for gain as Humans now do? It goes beyond the scope of the episode, but my feeling on the subject is that the episode gives a lot of unspoken weight to the fact that Data has been largely anthropomorphized. Maybe Soong did that on purpose, to protect his creation by using others' sympathy toward its shape.

@ William B,

There are certainly gradations of biological life, and although we're hazy on whether there are levels of sentience (or any sentience) there are clear differences in, basically, cognitive capacity among animals which lets us categorize them by importance. For the most part we protect intelligent animals the most, and mammals get heavier weight than non-mammals. But it's easy to see why we can do this: we can either identify outright biases (we sympathize with fellow mammals) or else identify clear distinctions in intelligence and give greater weight to those closest to sentience. That makes sense for the time being.

For AI, however, we have no such easy set of distinctions because, frankly, we don't live in a world full of various AI's to study and compare. We basically have a lack of empirical experience with them, but the difference is that while we couldn't have known what a cow was until we saw one, we certainly can know what certain kinds of AI would look like on a theoretical level. Maybe not advanced code the likes of which hasn't been invented yet, but certainly anything binary and linear such as we have now (and which I suspect Data is as well; he is not a quantum computer).

And IF it's feasible to differentiate between different *types* of code - one is rigid and preset, one learns but its learning algorithm is preset, one can change its programming, etc. - then this would be the determining factor in creating a hierarchy of rights for AI. Again, I see this particular discussion as being the real one to be had about Data. Whether he's 'like a Human' or not is an extremely narcissistic way to approach the topic. The question isn't whether an AI resembles a Human, but how AI contrasts with other AI. Is Data just an extraordinarily complex BASIC program that does exactly what it's told to do, no more or less? Note again that phrases such as "but I want to live" can be written into software of any simplicity, and thus the expression of such a 'desire' shouldn't be confused with actual desire. I can write the same thing on a piece of paper, but the paper isn't sentient. The court case in the episode seemed to take very seriously Data's 'feelings' for Tasha, even though they failed to address whether those were really 'feelings' or just words issued in common usage to suit a situation.
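A rough sketch (Python, with invented names; not drawn from the episode or from any real system) of those three types of code: a rigid preset responder, a learner whose learning rule is fixed, and a program that can replace its own response logic at runtime.

# 1. Rigid and preset: the input/output mapping never changes.
CANNED = {
    "hello": "Hi.",
    "do you want to die?": "Please, I do not want to die.",
}

def rigid_respond(prompt):
    return CANNED.get(prompt.lower(), "I do not understand.")

# 2. Learns, but its learning rule is itself preset and unchangeable.
class FixedLearner:
    def __init__(self):
        self.memory = dict(CANNED)

    def teach(self, prompt, response):
        self.memory[prompt.lower()] = response  # the only way it can ever learn

    def respond(self, prompt):
        return self.memory.get(prompt.lower(), "I do not understand.")

# 3. Can change its own programming: it may replace its response logic outright.
class SelfModifier(FixedLearner):
    def rewrite(self, new_respond):
        self.respond = new_respond  # arbitrary new behaviour, swapped in at runtime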

To be honest, even after having seen the show I'm not quite sure whether Data should have been considered Starfleet property or not. It does seem like an extravagant waste to avoid having androids on every ship seeing as how Data personally saved not only the Enterprise but probably the Federation multiple times. And as for the argument of human lives being saved in favor of risking androids...well...duh? Isn't that a good thing?
Chrome
Thu, Jun 23, 2016, 3:34pm (UTC -5)
@Peter G. and William B

I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to choose.

That's the difference between the Enterprise computer and Data. No matter how sophisticated the Enterprise is programmed, it's still missing those very life-like qualities that Data showed in this episode (intimacy, vanity, friendship, a career, etc.).
Peter G.
Thu, Jun 23, 2016, 3:53pm (UTC -5)
@ Chrome,

"I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to chose."

This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction. It means that entities that emulate being a literal Human Being will receive favorable treatment by the Federation. I'm sure plenty of sentient life-forms in Star Trek don't have 'intimacy' or 'friendship' in the ways Humans know it, so I'm not sure how those should matter (but to a judge out of her depth I can see how she could be unaware enough to think it should). And just a quibble, but Data doesn't have vanity; his having a career is also a circular argument because the argument about whether he *should* have a career relies on him having the rights afforded to sentients.

The Enterprise computer wasn't designed to have personality or look like a person, but it could have been. Would the aesthetic alterations in the programming have made it suddenly sentient because it *seemed* more sentient? If that's all it comes down to then I would confidently state that Data is not sentient. But they did dally with giving the Enterprise computer a personality in TOS, and although it was played as comedy (choosing the personality matrix that calls Kirk "Dear" was probably some engineer trolling him) the takeaway from that silly experiment was to show that some Human captains like their machines to sound like machines and not to pretend (poorly) to be like Humans. To pretend in that way could be felt as an insult to Humans. But what about a machine that acknowledges it is a machine, and acts like one, but wants to be more Human? That's the recipe for ego stroking, and again I wouldn't be surprised if Data's entire existential crisis wasn't a pure magic trick played by Soong to get people to like Data (and thus to protect his work).
William B
Thu, Jun 23, 2016, 4:22pm (UTC -5)
@Peter G.,

I agree that intelligence and cognitive ability is the main way in which we distinguish different types of animals, and as you say also the similarity to humans is what tends to grant mammals special rights. Within this episode, I think we are seeing something similar with Data.

Picard asks Maddox to define sentience, and he supplies intelligence, self-awareness and consciousness.

PICARD: Is Commander Data intelligent?
MADDOX: Yes. It has the ability to learn and understand, and to cope with new situations.

I submit that this is the way the episode argues what is different about Data from a toaster, or a piece of paper; it is also what distinguishes animals granted special rights from ones which are not. It is not stated explicitly, but I believe that this ability to "learn and understand, and to cope with new situations" is what is missing from other code. Eventually Voyager's computer has bio-neural gel packs, but for now there is no indication that other systems encountered have a neural net like Data's, which is designed to emulate the functioning of the human brain. I think that Picard is being a little jokey in his statement that Data's statement of where he is and what this court case means for him proves that he is self-aware, because "awareness" to some degree requires consciousness. Really, I think that consciousness is necessary but not sufficient for full self-awareness, and the part that is not covered by consciousness is covered by Data's statement of his situation. That is indeed the same quality possessed by a piece of paper which has "I am a piece of paper!" written on it; if that piece of paper has consciousness and somehow controls its "I am a piece of paper!" statement, then it would be self-aware. In any case, the episode did not argue that the Enterprise computer is not sufficiently intelligent in the sense of adaptability etc. to meet the criterion for sentience; that the ruling is on Data alone rather than all AI is a function of the narrowness of the case.

The combination of intelligence and "self-awareness," which is really the demonstration of the component of self-awareness that is not covered by consciousness, is what makes Data an edge case where consciousness is the essential final component, and "I don't know" becomes sufficient. Animals which are "conscious" but with no evidence of self-awareness or intelligence do not have rights, and thus AI which are not intelligent on the level of Data (who has human-level adaptability) will never have the question of whether they have feelings or consciousness raised at all.

How do you prove that something is or is not conscious? And that is why the human-centrism is important; basically that is the *only* tool that humans have to demonstrate consciousness or internal life, or lack thereof. I know that I have consciousness or internal life, and therefore beings that demonstrate qualities similar to mine, and have a similar construction to mine, are likely to be conscious. I am not claiming this is great; it is of course narcissistic. But all arguments about consciousness start from human-centricity because the only way we have to identify the existence of an inner life is by our own example, or, at least, I have a hard time imagining any other way. In any case, demonstrating that Data (states that he) values Tasha is a way of demonstrating that Data (states that he) has values, wishes, desires which were not programmed into him directly, which adds weight to Data's stated desire not to be destroyed. It also emphasizes that Data has made his own connections besides those which were specifically and trivially predictable based on his original coding -- again, the ability to learn and adapt etc. To some degree, the idea that animals can be sorted by cognitive ability but that "cognitive ability" and intelligence would not automatically be a sign that computers have some degree of internal life is because of similarities to humans -- animals come from similar building blocks to humans (DNA, etc.) and so it is assumed that their intelligence corresponds to something similar to our own, which we know to value because we experience our own value. Now, obviously by the time of the show, humans have met other species which are sentient...but I think that the sentience is still primarily demonstrated by being similar to humans. How can anyone possibly know if anyone else is sentient? The only possible way is to either take beings at their word, or to build through analogy to one's own experience. The only being I can be sure is sentient is me; everyone else's sentience is accepted based on people being sufficiently close to me. I think that humans should expand outward as much as possible and not rely entirely on chauvinism, but I have no idea how exactly I would determine if a fully alien being which claimed that it was sentient truly experienced sentience or was just able to simulate it.

(Actually, I don't "really" know that my experience of consciousness is real, but I am still experiencing something, so I go with that.)

As to whether his statements that he values his life, or values Tasha, etc., indicate that he actually values them, this is what it means for Picard to ask if Data is conscious. If Data is not conscious, then his statements are just the verbal printouts of an automaton; if he is conscious then they are, on some level, "felt." And here I agree that Picard fails to make much of a case; all he does is ask whether everyone is sure. If I were Picard, I would start arguing that the similarity of Data's brain to a human brain and the complexity of his programming indicate a sufficient similarity in all observables to the human brain for us to conclude that it will likely have other traits in common with the human brain, including consciousness. Even without the comparison to humans, though, it may happen that we can never fully assume that any rock or mountain or collection of atoms is *not* conscious, and we must accept a certain level of intelligence and self-awareness as sufficient benchmarks to declare something sentient. This is, of course, very unsatisfying, but it is unsatisfying, too, to begin with the presumption that only beings which are sufficiently similar to humans in physical nature (i.e. made up of cells and DNA) have the possibility of consciousness. To simply suspect that anything in the universe might have consciousness is a simplistic direction for the episode to go in, granted, which is why I prefer to think that the implication *is* that it is Data's similarity to humanoids in terms of systems of value, cognitive ability, adaptability and even in design (neural net which is designed to reproduce the functioning of a brain) which may not be possible without emergent consciousness. I think that most people would agree that it seems *more likely* that something that demonstrates intelligence would also have consciousness than something which demonstrates no intelligence, and so I do think this is one of the implicit elements in Picard's argument, which would make it much stronger, though it is also not entirely necessary.

One troubling question is what it would mean to program androids like Data, with a similar level of cognitive ability and similarity to humans, *without* any desire for self-preservation whatsoever.

I actually do agree, though, that Data is designed for ego-stroking. Actually that is some of the point -- Lore complicates the story because Lore immediately recognized his superiority to humans in physical and mental capacity, immediately came into conflict with people who hated him, and promptly had to be shut down. Data *is* meant to be entirely user-friendly. I think it's also true that Soong intended Data to be a person with internal life, but Data's desire to be human in a nonthreatening way, and the reality that it is basically impossible for him to achieve that, are pretty baked into him, which is tragic if you believe that Data *does* have some sort of internal life, as I tend to.
William B
Thu, Jun 23, 2016, 4:38pm (UTC -5)
@Chrome, Peter G. response:

The Turing Test is not namechecked in the episode, but it does sort of remain here: if there is no airtight argument that Data is less a person than Picard, why should Picard have more rights? This is the essence of Picard asking Maddox to prove that he is sentient; the main arguments that Maddox could supply that Picard is conscious and Data isn't are:

1. Picard is more similar to Maddox (in being a biological life form), and, implicitly, to Louvois;
2. Data was created deliberately, rather than by a random physical process;
3. (MAYBE) These are the things we don't know about the human brain, whereas this is what we know about the android net/software.

With respect to 3, obviously there are aspects of Data's programming and design which are unknown, hence the need to disassemble him. With respect to 2, Picard punctures this by suggesting that parents create their children, though it is an incomplete argument. With respect to 1, well, that is part of the reason I think Picard brings up Data's intimacy etc. One could argue that he is appealing to human biases, but he is perhaps working the opposite direction -- by showing the similarities of Data's behaviour to humans, he is countering the natural bias that he is probably not conscious because he is different in "construction" to humans. I'm not really saying I've knocked down all of these (or other potential ones). But rather than starting with why-is-Data-different-from-a-toaster, if you start with why-is-Data-different-from-a-human then Louvois' ruling makes sense. In that case, it is a significant kind of chauvinism that Picard (and Data, for that matter) do not start heading forth and trying to figure out whether they should liberate the Enterprise computer, or weather patterns, or rocks which no one would even think to wonder about, but the "AI rights" arc is not done; and yes, I do think that there is more evidence for Data's cognitive powers than the computer's, though it also has some degree of adaptability.
William B
Thu, Jun 23, 2016, 7:41pm (UTC -5)
Though, of course, the Enterprise computer can run and possibly create holodeck characters of great sophistication, so, there is a strong case of the Enterprise computer being intelligent, which certainly complicates things. That said, I don't think this means Louvois' ruling (etc.) are wrong; rather, the ruling on Data should ideally open up discussion on other sophisticated AI which have "sentient life form"-level intelligence.
Peter G.
Thu, Jun 23, 2016, 8:00pm (UTC -5)
My main issue with issuing 'sentient status' to any advanced intelligence is that at bottom intelligence is just processing power. When constructing an AI I find it troublesome to consider that the sole factor separating one AI from another might be a superior CPU, and that makes it 'sentient' and thus affords it rights. Does that mean I'm better off with a slower computer than with a more advanced one, because the latter has the right to tell me what it wants to do?

Some people theorize that consciousness is emergent in a sufficiently advanced recursive processing system. Others say something in the biology matters, perhaps as a resonator with some unseen force. Either way, giving rights to technology is a big deal.
William B
Thu, Jun 23, 2016, 8:14pm (UTC -5)
I will say though that I do think the human-centrism of the episode is a problem insofar as one would expect that there *is* by this point in the Federation some sort of procedure for talking about sentience in non-human terms. Since the vast majority of species encountered in Trek, and especially TOS and TNG, who are accepted as sentient and as having rights are humanoid and very similar to human beings, this is not exactly a problem for the episode, so much as revealing of one of the major limitations of imagination of Trek. Alternatively, this works to some degree because this *is* a myth which is fundamentally about humanity more so than it is actually about anything to do with aliens, and so it makes sense that arguments about machines end up being human-centred.

It is actually pretty disturbing, thinking about it. I do pretty strongly think that the tactic Picard eventually settled on is correct, which is to argue that it is not possible to conclusively demonstrate that Data is sufficiently fundamentally different from Picard to be classified as a different being. However, the point remains of what happens to entities which are sentient but do not *have* a survival urge. This is potentially the case for the Enterprise computer. Certainly, Soong decided to program Data to "want to live"; it seems from various statements made over the years, including by Soong himself, that he intended to create Data as having consciousness and being something like an actual human, with some adjustments made so as to avoid the mistakes of Lore and perhaps to improve on humans. Assuming for the moment that he succeeded in creating a being which has consciousness (and thus sentience), the possibility remains that he could have programmed Data with similar skills but no "desire" for self-preservation or self-actualization.

However, this is not simply a matter of AI. Eventually genetic engineering on a broader scale should be possible, and what happens then? Could beings of human intelligence with no desire for self-preservation beyond what is convenient for their masters be created through a combination of genetic and behavioural work?

It makes a lot of sense that Soong, who really did see his androids as his children and wanted them to be humanlike, would program them to survive and thrive, and, after the catastrophe of Lore, made Data to survive, thrive, and also be personable enough that he would not have to be shut down. Some of this is obviously Soong's own vanity, but some of it is the same sort of vanity that shows up in many parents' desire for their children to carry on their legacy. I like that Voyager complicates some of the Data material by having the Doctor be quite unpleasant, much of the time; whereas Data is designed for prime likability, the Doctor is abrasive and difficult.
William B
Thu, Jun 23, 2016, 8:52pm (UTC -5)
@Peter G., Fair enough.
Chrome
Fri, Jun 24, 2016, 10:06am (UTC -5)
@Peter G.

"This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction."

True, and this goes back to what William B mentioned about the writers being limited in describing Starfleet generally because they only have the human experience to draw from. Incidentally, that vanity thing I mentioned is actually a line from this episode, when Data curiously decides to pack his medals as he leaves Starfleet.

But you're right, the episode doesn't really describe what criteria Data has which qualifies him as sentient and the computer as non-sentient. I suppose Data seems more self-aware than a computer, but it's hard to tell if he's acting on incredibly complex programming or something greater.
William B
Fri, Jun 24, 2016, 11:08am (UTC -5)
One thing I will still add is that the comparison to animal life still holds in some ways, especially given that certain animals were selectively bred (over millennia) for both intelligence and ability to interact with humans. Putting human intervention aside, if you need a more intelligent animal, e.g. a service animal for the blind, you have to have a dog rather than a spider and you have to treat it better. If you want a pet you can pull the legs off with relative impunity, get a spider, not a dog. It may end up being that a scale for defining intelligence on computers will be introduced in terms of adaptability etc., and that it will be necessary to have less adaptable computers to be able to treat them ethically. Since intelligence (and, really, intelligence as defined by ability to do human-like tasks) is the main measurement for animal life value, I expect it is likely to be one for AI if a sufficiently rigorous theory of consciousness is not forthcoming.

I am troubled, in the end, by the human-centricity of the arguments about Data and the lack of extension to other computers. That said, there are still two directions: if Data is mostly indistinguishable from a humanoid except in the kind of machine he is, Picard's case stands and it is chauvinism to assume that only biology could produce consciousness; if Data is mostly indistinguishable from other machines except in his similarity to humans, then Peter G.'s point stands and it is chauvinism to only grant rights to the most cuddly and human of machines. Both can be true, in which case the failure of imagination on the part of the characters and likely writers is failing to use Data as a launching point to all AI. Even the exocomps, the emergent thing in Emergence, and various holodeck characters are still identified as independent beings whereas the computer itself is not, which reveals a significant bias toward things which resemble human or animal life.

For what it's worth, I continue to have no doubt Data was programmed to value things, have existential crises etc., in conjunction with inputs from his environment, but I continue to believe that this does not necessarily distinguish him from humans, who are created with a DNA blueprint which creates a brain which interacts with said blueprint and the environment. Soong programmed Data in a way to make him likely to continue existing, and humans' observable behaviours are generally consistent with what will lead to the survival of the individual and species. To tie into the first scene in the episode, Data may be an elaborate bluff, but so might we be. Of course that still leaves open the possibility that things very far from human, whether biological, technological, or something else entirely, can also possess this trait. And again it seems like cognitive ability and distance from humans are the things we use now; probably given the similarity of humanoids, cognitive ability and distance from humanoids will be the norm. I would like to believe there is something else that could make the application fairer and less egocentric. But it seems even identifying the root of human consciousness more precisely (in the physical world) would just move the problem one step back, identifying "this particular trait we have" as the thing of value, rather than these *other* traits we have.
Andy's Friend
Sat, Jun 25, 2016, 9:04pm (UTC -5)
@All

You have to go much further. You have to stop talking about artificial intelligence, which is irrelevant, and begin discussing artificial consciousness.

Allow me to copy-paste a couple of my older posts on "Heroes and Demons" (VOY). I recommend the whole discussion there, even Elliott's usual attempts to contradict me (and everyone else; he was rather a contrarian fellow). Do note that "body & brain," as I later explain on that thread, is a stylistic device: it is of course Data's positronic brain that matters.


Fri, Oct 31, 2014, 1:29pm (UTC -5)

"@Elliott, Peremensoe, Robert, Skeptikal, William, and Yanks

Interesting debate, as usual, between some of the most able debaters in here. It would seem that I mostly tend to agree with Robert on this one. I’m not sure, though; my reading may be myopic.

For what it’s worth, here’s my opinion on this most interesting question of "sentience". For the record: Data and the EMH are of course some of my favourite characters of Trek, although I consider Data to be a considerably more interesting and complex one; the EMH has many good episodes and is wonderfully entertaining ― Picardo does a great job ―, but doesn’t come close to Data otherwise.

I consider Data, but not the EMH, to be sentient.

This has to do with the physical aspect of what is an individual, and sentience. Data has a body. More importantly, Data has a brain. It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.

Peremensoe wrote: ”This is a physiological difference between them, but not a philosophical one, as far as I can see.”

I cannot agree. I’m sure that someday we’ll see machines that can simulate intelligence ― general *artificial intelligence*, or strong AI. But I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by ”sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:

”The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers.”

This is the main difference between Data and the EMH, and why this physiological difference is so important. Data possesses an artificial brain ― artificial neural networks of sorts ―, the EMH does not.

Data’s positronic brain should thus allow him thought processes somehow similar to those of humans that are beyond the EMH’s capabilities. The EMH simply executes Haikonen’s ”programmed strings of commands”.

I don’t claim to be an expert on Soong’s positronic brain (is anyone?), and I have no idea about the intricate differences and similarities between it and the human brain (again: does anyone?). But I believe that his artificial brain must somehow allow for some of the same, or similar, thought processes that cause *self-awareness* in humans. Data’s positronic brain is no mere CPU. In spite of his very slow learning curve in some aspects, Data consists of more than his programming.

This again is at the core of the debate. ”Sentience”, as in self-awareness, or *artificial consciousness*, must necessarily imply some sort of non-linear, cognitive processes. Simple *artificial intelligence* ― such as decision-making, adapting and improving, and even the simulation of human behaviour ― need not.

The EMH is a sophisticated program, especially regarding prioritizing and decision-making functions, and even possessing autoprogramming functions allowing him to alter his programming. As far as I remember (correct me if I’m wrong), he doesn’t possess the same self-monitoring and self-maintenance functions that Data ― and any sentient being ― does. Even those, however, might be programmed and simulated. The true matter is the awareness of self. One thing is to simulate autonomous thought; something quite different is actually possessing it. Does the fact that the EMH wonders what to call himself prove that he is sentient?

Data is essentially a child in his understanding of humanity. But he is, in all aspects, a sentient individual. He has a physical body, and a physical brain that processes his thoughts, and he lives with the awareness of being a unique being. Data cannot exist outside his body, or without his positronic brain. If there’s one thing that we learned from the film ”Nemesis”, it’s that it’s his brain, much superior to B-4’s, that makes him what he is. Thanks to his body, and his brain, Data is, in every aspect, an independent individual.

The EMH is not. He has no body, and no brain, but depends ― mainly, but not necessarily ― on the Voyager computer to process his program. But more fundamentally, he depends entirely on that program ― on strings of commands. Unlike Data, he consists of nothing more than the sum of his programming.

The EMH can be rewritten at will, in a manner that Data cannot. He can be relocated at will to any computer system with enough capacity to store and process his program. Data cannot ― when Data transfers his memories to B-4, the latter doesn’t become Data. He can be shaped and modelled and thrown about like a piece of clay. Data cannot. The EMH has, in fact, no true personality or existence.

Because he relies *entirely* on a string of commands, he is, in truth, nothing but that simple execution of commands. Even if his program compels him to mimic human behaviour with extreme precision, that precision merely depends on computational power and lines of programming, not thought process.

Of course, one could argue that the Voyager’s computer *is* the EMH’s brain, and that it is irrelevant that his memories, and his program, can be transferred to any other computer ― even as far as the Alpha Quadrant, as in ”Message in a Bottle” and ”Life Line”.

But that merely further annihilates his individuality. The EMH can, in theory, if the given hardware and power requirements are met, be duplicated at will at any given time, creating several others which might then develop in different ways. However ― unlike say, Will and Thomas Riker, or a copy of Data, or the clone of any true individual ―, these several other EMHs might even be merged again at a later time.

It is even perfectly possible to imagine that several EMHs could be merged, with perhaps the necessary adjustments to the program (deleting certain subroutines any of them might have added independently in the meanwhile, for example), but allowing for multiple memories for certain time periods to be retained. Such is the magic of software.

The EMH is thus not even a true individual, much less sentient. He’s software. Nothing more.

Furthermore, something else and rather important must also be mentioned. Unless our scope is the infinite, that is, God, or the Power Cosmic, to be sentient also means that you can lose that sentience. Humans, for a variety of reasons, can, all by themselves and to various degrees, become demented, or insane, or even vegetative. A computer program cannot.

I’m betting that Data, given his positronic brain, could, given enough time, devolve to something such as B-4 when his brain began to fail. Given enough time (as he clearly evolves much slower than humans, and his positronic brain would presumably last centuries or even millennia before suffering degradation), Data could actually risk losing his sanity, and perhaps his sentience, just like any human.

The EMH cannot. The various attempts in VOY to depict a somewhat deranged EMH, such as ”Darkling”, are all unconvincing, even if interesting or amusing: there should and would always be a set of primary directives and protocols that would override all other programming in cases of internal conflict. Call it the Three Laws, or what you will: such is the very nature of programming. ”Darkling”, like other such instances, is a fraud. It is not a reflection of sentience; it is, at best, the result of inept programming.

So is ”Latent Image”. But symptomatically, what do we see in that episode? Janeway conveniently rewrites the EMH, erasing part of his memory. This is consistent with what we see suggested several times, such as concerning his speech and musical subroutines in ”Virtuoso”. Again, symptomatically, what does Torres tell the EMH in ”Virtuoso”?

― TORRES: “Look, Doc, I don't know anything about this woman or why she doesn't appreciate you, and I may not be an expert on music, but I'm a pretty good engineer. I can expand your musical subroutines all you like. I can even reprogramme you to be a whistling teapot. But, if I do that, it won't be you anymore.”

This is at the core of the nature of the EMH. What is he? A computer program, the sum of lines of programming.

Compare again to Data. Our yellow-eyed android is also the product of incredibly advanced programming. He also is able to write subroutines to add to his nature and his experience; and he can delete those subroutines again. The important difference, however, is that only Soong and Lore can seriously manipulate his behaviour, and then only by triggering Soong’s purpose-made devices: the homing device in ”Brothers”, and the emotion chip in ”Descent”. There’s a reason, after all, why Maddox would like to study Data further in ”Measure of a Man”. And this is the difference: Soong is Soong, and Data is Data. But any apt computer programmer could rewrite the EMH as he or she pleased.

(Of course, one could claim that any apt surgeon might be able to lobotomise any human, but that would be equivalent to saying that anyone with a baseball bat might alter the personality of a human. I trust you can see the difference.)

I believe that the EMH, because of this lack of a brain, is incapable of brain activity and complex thought, and thus artificial consciousness. The EMH is by design able to operate from any computer system that meets the minimum requirements, but the program can never be more than the sum of his string of commands. Sentience may be simulated ― it may even be perfectly simulated. But simulated sentience is still a simulation.

I thus believe that the EMH is nothing but an incredibly sophisticated piece of software that mimics sentience, and pretends to wish to grow, and pretends to... and pretends to.... He is, in a way, The Great Pretender. He has no real body, and he has no real mind. As his programming evolves, and the subroutines become ever more complex, the illusion seems increasingly real. But does it ever become more than a simulacrum of sentience?

All this is of course theory; in practical terms, I have no problem admitting that a sufficiently advanced program would be virtually indistinguishable, for most practical purposes, from actual sentience. And therefore, *for most practical purposes*, I would treat the impressive Voyager EMH as an individual. But as much as I am fond of the Doctor, I have a very hard time seeing him as anything but a piece of software, no matter how sophisticated.

So, as you can gather by now, I am not a fan of such thoughts on artificial consciousness that imply that it is all simply a matter of which computations the AI is capable of. A string of commands, however complex, is still nothing but a string of commands. So to conclude: even in a sci-fi context, I side with the ones who believe that artificial consciousness requires some sort of non-linear thought process and brain activity. It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever (I am prepared to discuss non-corporeal entities, but elsewhere).

Finally, I would say that the bio gel idea, as mentioned by Robert, could have been interesting in making the EMH somehow more unique. That could have the further implication that he could not be transferred to a computer without bio gel circuitry, thus further emphasizing some sort of uniqueness, and perhaps providing a plausible explanation for the proverbial ”spark” of consciousness ― which of course would then, as in Data’s case, have been present from the beginning. This would transform the EMH from a piece of software into... perhaps something more, that was interwoven with the ship itself somehow. It could have been interesting ― but then again, it would also have limited the writing for the EMH very severely. Could it have provided enough alternate possibilities to make it worthwhile? I don’t know; but I can understand why the writers chose otherwise"
Andy's Friend
Sat, Jun 25, 2016, 9:11pm (UTC -5)
And:

Sat, Nov 1, 2014, 1:43pm (UTC -5)

"@William B, thanks for your reply, and especially for making me see things in my argumentation I hadn’t thought of myself! :D

@Robert, thanks for the emulator theory. I’m not quite sure that I agree with you: I believe you fail to see an important difference. But we’ll get there :)

This is of course one huge question to try and begin to consider. It is also a very obvious one; there’s a reason ”The Measure of a Man” was written as early as Season 2.

First of all, a note on the Turing test several of you have mentioned: I agree with William, and would be more categorical than him: it is utterly irrelevant for our purposes, most importantly because simulation really is just that. We must let Turing alone with the answers to the questions he asked, and search deeper for answers to our own questions.

Second, a clarification: I’m discussing this mostly as sci-fi, and not as hard science. But it is impossible for me to ignore at least some hard science. The problem with this is that while any Trek writer can simply write that the Doctor is sentient, and explain it with a minimum of ludicrous technobabble, it is quite simply inconsistent with what the majority of experts on artificial consciousness today believes. But...

...on the other hand, the positronic brain I use to argue Data’s artificial consciousness is, in itself, in a way also a piece of that same technobabble. None of us knows what it does; nobody does. However, it is not as implausible a piece of technobabble as say, warp speed, or transporter technology. It may very well be possible one day to create an artificial brain of sorts. And in fact, it is a fundamental piece in what most believe to be necessary to answer our question. I therefore would like to state these fundamental First and Second Sentences:

1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.

2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.

This is of the highest order of importance, because ― unlike what I now see Robert seems to believe ― I think the question of ”sentience”, or artificial consciousness, has little to do with hardware vs software as he puts it, as we shall see.

Now, I’d like to clarify nomenclature and definitions. Feel free to disagree or elaborate:

― By *brain* I mean any actual (human) or fictional (say, the Great Link) living species’ brain, or thought process mechanism(s) that perform functions analogous to those of the human brain, and allow for *non-linear*, cognitive processes. I’m perfectly prepared to accept intelligent, sentient, extra-terrestrial life that is non-humanoid; in fact, I would be very surprised if most were humanoid, and in that respect I am inclined to agree with Stanisław Lem in “Solaris”. I am perfectly ready to accept radially symmetric lifeforms, or asymmetric ones, with all the implications for their nervous systems, or even more bizarre and exotic lifeforms, such as the Great Link or Solaris’ ocean. I believe, though, that all self-conscious lifeforms must have some sort of brain, nervous system ― not necessarily a central nervous system ―, or analogues (some highly sophisticated nerve net, for instance) that in some manner or other allow for non-linear cognitive processes. Because non-linearity is what thought, and consciousness ― sentience as we talk about it ― is about.

― By *artificial brain* I don’t mean a brain that faithfully reproduces human neuroanatomy, or human thought processes. I merely mean any artificially created brain of sorts or brain analogue which somehow (insert your favourite Treknobabble here ― although serious, actual research is being conducted in this field) can produce *non-linear* cognitive processes.

― By *non-linear* cognitive process I mean not the strict sense of non-linear computational mechanics, but rather, that ineffable quality of abstract human thought process which is the opposite of *linear* computational process ― which in turn is the simple execution of strings of command, which necessarily must follow as specified by any specific program or subroutine. Non-linear processes are both the amazing strength and the weakness of the human mind. Unlike linear, slavish processes of computers and programs, the incredible wonder of the brain as defined is its capacity to perform that proverbial “quantum leap”, the inexplicable abstractions, non-linear processes that result in our thoughts, both conscious and subconscious ― and in fact, in us having a mind at all, unlike computers and computer programs. Sadly, it is also that non-linear, erratic and unpredictable nature of brain processes that can cause serious psychological disturbances, madness, or even loss of consciousness of self.

These differences are at the core of the issue, and here I would perhaps seem to agree with William, when he writes: ”I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way.”

I agree and again, I would go a bit further: I am actually willing to go so far as to admit the possibility of us one day being able to create an *artificial brain* which can reproduce, to a certain degree, some or many of those processes ― and perhaps even others our own human brains are incapable of. Likewise, I am prepared to admit the possibility of sentient life in other forms than carbon-based humanoid. It is as reflections of those possibilities that I see the Founders, and any number of other such outlandish species in Star Trek. And it is as such that I view Data’s positronic brain ― something that somehow allows him many of the same possibilities of conscious thought that we have, and perhaps even others, as yet undiscovered by him. Again, I would even go so far as not only to admit, but to suppose the very real possibility of two identical artificial brains ― say, two copies of Data’s positronic brain ― *not* behaving exactly alike in spite of being exact copies of each other, in a manner similar to (but of course not identical to) how identical twins’ brains will function differently. This analogy is far from perfect, but it is perhaps the easiest one to understand: thoughts and consciousness are more than the sum of the physical, biological brain and DNA. Artificial consciousness must also be more than the sum of an artificial brain and the programming. As such, I, like the researchers whose views I am merely reflecting, not only expect, but require an artificial brain that in this aspect truly equals the fundamental behaviour of sentient biological brains.

It is here, I believe, that Robert’s last thoughts and mine seem to diverge. Robert seems to believe that Data’s positronic brain is merely a highly advanced computer. If this is the case, I wholly agree with his final assessment.

If not, however, if Data’s brain is a true *artificial brain* as defined, what Robert proposes is wholly unacceptable.

IT IS STAR TREK’S FAULT THAT THE QUALITY OF DATA’S BRAIN IS NEVER FULLY ESTABLISHED.

Data’s brain is never established as a true artificial brain. But it is never established as merely a highly advanced computer, either. It is once stated, for instance, that his brain is “rated at...” But this means nothing. This is a mere attempt at assessing certain of his faculties, while wholly ignoring others that may as yet be underdeveloped or unexplored. It is in a way similar to saying of a chess player that he is rated at 2450 ELO: it tells you precious little about the man’s capacities outside the realm of chess.

We must therefore clearly understand that brains, including artificial brains, and computers are not the same and don’t work the same way. It is not a matter of orders of magnitude. It is not a matter of speed, or capacity. It is not even a matter of apples and oranges.

I therefore would like to state my Third, Fourth, Fifth and Sixth Sentences:

3. ― A BRAIN IS NOT A COMPUTER, and vice-versa.

4. ― AN ARTIFICIAL BRAIN IS NOT A COMPUTER, and vice versa.

5. ― A COMPUTER IS INCAPABLE OF THOUGHT PROCESSES. It merely executes programs.

6. ― A PROGRAM IS INCAPABLE OF THOUGHT PROCESSES. It merely consists of linear strings of commands.

Here is finally the matter explained: a computer is merely a toaster, a vacuum-cleaner, a dish-washer: it always performs the same routine function. That function is to run various computer programs. And the computer programs ― any program ― will always be incapable of exceeding themselves. And the combination computer+program is incapable of non-linear, abstract thought process.

To simplify: a computer program must *always* obey its programming, EVEN IN SUCH CASES WHEN THE PROGRAMMING FORCES RANDOMIZATION. In such cases, random events ― actions and decisions, for instance ― are still merely a part of that program, within the chosen parameters. They are therefore only apparently random, and only within the specifications of the program or subroutine. An extremely simplified example:

Imagine that in a given situation involving Subroutine 47 and a A/B Action choice, the programming requires that the EMH must:

― 35% of the cases: wait 3-6 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 10-15 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 10% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose RANDOMLY.
― 5% of the cases: wait 60-90 seconds as if considering Actions A and B, then choose RANDOMLY.
― 6% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 10-15 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 3-6 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47

In a situation such as this simple one, any casual long term observer would conclude that the faster the subject/EMH took a decision, the more likely it would be the right one ― something observed in most good professionals. Every now and then, however, even a quick decision might prove to be wrong. Inversely, sometimes the subject might exhibit extreme indecision, considering his options for up to a minute and a half, and then having even chances of success.

A professional observer with the proper means at his disposal, however, and enough time to run a few hundred tests, would notice that this subject never, ever spent 7-9 seconds, or 16-19 seconds before reaching a decision. A careful analysis of the response times given here would show results that could not possibly be random coincidences. If it were “Blade Runner”, Deckard would have no trouble whatsoever in identifying this subject as a Replicant.

We may of course modify the random permutations of sequences, and adjust probabilities and the response times as we wish, in order to give the most accurate impression of realism compared to the specific subroutine: for a doctor, one would expect medical subroutines to be much faster and much more successful than poker and chess subroutines, for example. Someone with no experience in cooking might injure himself in the kitchen; but even professional chefs cut themselves rather often. And of course, no one is an expert at everything. A sufficiently sophisticated program would reflect all such variables, and perfectly mimic the chosen human behaviour. But again, the Turing test is irrelevant:

All this is varying degrees of randomization. None of this is conscious thought: it is merely strings of command to give the impression of doubt, hesitation, failure and success ― in short, to give the impression of humanity.

But it’s all fake. It’s all programmed responses to stimuli.

Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer incapable of any thought processes.
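For the sake of concreteness, here is a minimal sketch, in present-day code, of the kind of decision table described above. Everything in it ― the weights, the delay windows, the best/worst/random policies, and names like DECISION_TABLE and decide ― is purely hypothetical, taken from the made-up figures in the example, not from anything established on screen:

import random

# Sketch of the hypothetical "Subroutine 47" decision table described above.
# The weights, delay windows and choice policies are the illustrative figures
# from the example, nothing more.
DECISION_TABLE = [
    # (weight in %, (min delay, max delay) in seconds, policy)
    (35, (3, 6),   "best"),
    (20, (10, 15), "best"),
    (20, (20, 60), "best"),
    (10, (20, 60), "random"),
    (5,  (60, 90), "random"),
    (6,  (20, 60), "worst"),
    (2,  (10, 15), "worst"),
    (2,  (3, 6),   "worst"),
]

def decide(prob_a, prob_b):
    """Pick action 'A' or 'B', plus a simulated 'thinking' delay in seconds."""
    weights = [row[0] for row in DECISION_TABLE]
    _, (lo, hi), policy = random.choices(DECISION_TABLE, weights=weights)[0]
    delay = random.uniform(lo, hi)
    if policy == "random":
        choice = random.choice(["A", "B"])
    elif policy == "best":
        choice = "A" if prob_a >= prob_b else "B"
    else:  # "worst": deliberately pick the weaker option
        choice = "A" if prob_a < prob_b else "B"
    return delay, choice

# A casual observer sees plausible hesitation and occasional mistakes; an
# observer logging hundreds of trials notices that delays of 7-9 or 16-19
# seconds never occur, because no row of the table allows them.
for _ in range(5):
    print(decide(0.8, 0.4))

However convincing the output looks, every response is fully determined by that table and the random number generator feeding it.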

The only way to partially bypass this problem is to introduce FORCED CHAOS: TO RANDOMIZE RANDOMIZATION altogether.

It is highly unlikely, however, that any computer program could long survive operating a true forced chaos generator at the macro-level, as opposed to forced chaos limited to certain, very specific subroutines. One could have forced chaos make the subject hesitate for forty minutes, or two hours, or forever and forfeit the game in a simple position in a game of chess, for example; but a forced chaos decision prompting the doctor to kill his patient with a scalpel would have more serious consequences. And many, many simpler forced chaos outcomes might also have very serious consequences. And what if the forced chaos generator had power over the autoprogramming function? How long would it take before catastrophic, cascading systems failure occurred?

And finally, but also importantly: even if the program could somehow survive operating a true forced chaos generator, thus operating extremely erratically ― which is to say, extremely dangerously, to itself and any systems and people that might depend on it ―, it would still merely be obeying its forced chaos generator ― that is, another piece of strings of command.
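To illustrate: "randomizing the randomization" just means a higher-level routine rewriting the decision table itself. A sketch of such a forced chaos generator ― again entirely hypothetical, with regenerate_table being a made-up name ― might be:

import random

# Sketch of "forced chaos": periodically rewrite the decision table itself
# with random weights, delay windows and policies. Note that this rewriting
# rule is itself nothing but another fixed string of commands.
def regenerate_table(rows=8):
    table = []
    for _ in range(rows):
        weight = random.random()
        lo = random.uniform(1, 60)
        hi = lo + random.uniform(1, 60)
        policy = random.choice(["best", "random", "worst"])
        table.append((weight, (lo, hi), policy))
    return table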

So we’re back where we started.

So, to repeat one of my first phrases from a previous comment: “It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.” And the matter is, that the EMH simply *does not think*. The program simulates realistic responses, based on programmed responses to stimuli. That’s all. This is not thought process. This is not having a mind.

So it follows that I don’t agree when Peremensoe writes what Yanks also previously has commented on: "So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."

The point is that “Doc” doesn’t have a “mind”. There is therefore a deep philosophical divide here. The kind of “mind” the EMH has is one you can simply print on paper ― line by line of programming. That’s all it is. You could, quite literally, print every single line of the EMH programming, and thus literally read everything that it is, and learn and be able to calculate its exact probabilities of response in any given, imaginable situation. You can, quite literally, read the EMH like a book.

Not so with any human. And not so, I argue, with Data. And this is where I see that Robert, in my opinion, misunderstands the question. Robert writes: “Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program”. This only makes sense if you disregard his artificial brain, and the relationship between his original programming and the way it has interacted with, and continues to interact with that brain, ever expanding what Data is ― albeit rather slowly, perhaps as a result of his positronic brain requiring much longer timeframes, but also being able to last much longer than biological brains.

So I’ll say it again: I believe that Data is more than his programming, and his brain. His brain is not just some very advanced computer. Somehow, his data ― sensations and memories ― must be stored and processed in ways we don’t fully understand in that positronic brain of his ― much like the Great Link’s thoughts and memories are stored and processed in ways unknown to us, in that gelatinous state of theirs.

I therefore doubt that Data’s program and brain as such can be extracted and emulated with any satisfactory results, any more than any human’s can. Robert would like to convert Data’s positronic brain into software. But who knows if that is any more possible than converting a human brain into software? Who knows whether Data’s brain, much like our own, can generate thought processes that are inscrutable and inexplicable, and that surpass its construction?

So while the EMH *program* runs on some *computer*, Data’s *thoughts* somehow flow in his *artificial brain*. This is thus not a matter of location: it’s a matter of essence. We are discussing wholly different things: a program in a computer, and thoughts in a brain. It just doesn’t get much more different. In my opinion, we are qualitatively worlds apart. "
William B
Sat, Jun 25, 2016, 10:15pm (UTC -5)
@Andy's Friend:

First of all, I was thinking about your post at times when I was writing the above.

I do think that the meaning of Data's positronic net has some of the connotations you indicate. I do think that there is a very important distinction between Data and the Doctor on this level. And I agree quite strongly that it is important that it is *only* members of Data's "family" who can manipulate him with impunity. In fact it is not just Soong and Lore, but also Dr. Graves from "The Schizoid Man," but even there he is clearly identified as Data's "grandfather." Within this very episode, it is emphasized several times, as you point out, that it has been impossible for Maddox to recreate the feat of Data, and that he hopes that he will recreate it by disassembling Data...which may in fact do nothing to further his abilities along. Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone). Data can, of course, be manipulated by random events of the week, but we are talking about beings with unknown power, like Q giving him laughter, or the civilization in "Masks" taking him over, or events which affect Data the same as biological beings (like the polywater intoxication in "The Naked Now" or the memory alteration thing in "Conundrum"). I think that a lot of the reason it is important that only members of Data's "family" can affect him to this degree (beyond beings who are shown to have the power to do broader manipulations on all sentient beings, like the Q) is that, as you say, in some respects Data is a child, and the series has (very cleverly) managed to split apart different aspects of Data's growth so that they take place in the series; by holding the keys to Data's growth, Graves, Soong, Tainer and Lore literalize the awesome power that family has to shape us in fundamental ways, which is important metaphorically.

And yet --

I think that the difference here between hardware and software is important, but it does not necessarily mean everything. I do not understand the claim, and the certainty behind it, that sentience requires a body to be "real" rather than a simulation. I presume that you do not require the body to be humanoid, but certainly you seem to indicate that the EMH cannot be sentient because he is simulated by the computer, rather than because he is located inside a body the way Data is. Your claim also that anyone can modify the EMH by changing codes indicates that he is not sentient, again, does not convince me. Certainly it means that the EMH is easier to change, is more dependent on external stimulus. However to some extent this represents simply a different perspective on what it means to be self-aware. The ease with which external pressures can change the EMH matches up with his mercurial, "borderline" personality, constantly being reshaped by environment rather than having a more solid core. Data, despite his ability to modify himself, has a more solid core of identity, but in the time between Data and the EMH's creation (both in-universe, and in our world) the ability of medicine to alter personality, through electrical stimulation or drugs, has increased. You are fond of quoting that we mostly need mirrors, and I think that in most respects, Data and the EMH are there to hold mirrors up to humanity; in both cases, we are looking at aspects of what it means to be human in a technological age, and without recourse to theology to define the "soul," but Data seems to me to reflect the question of what it means to be a person who exists in the physical world, whose selfhood is housed in and dependent on a physical organ, whereas the EMH is something more of the creature of the internet age, where people's sense of self is a little less consciously tied to their *bodies* so much as their online avatars, and some people have become aware of how easily they can be shaped (manipulated?) by information.

What that means, in universe, for the relative sentience of the two is a complicated question. But it does not seem to me that sentience need necessarily be a matter of physical location. I believe that at the time of Data, within the universe, the best computers in Starfleet were just starting to catch up to the ability to simulate human-level intelligences; Minuet is still far ahead of the Enterprise computer *without* the Bynar enhancement, for example, and Moriarty appears sentient but remains something of a curiosity. The EMH is at the vanguard of a new movement, and, particularly when his code is allowed to grow in complexity, he comes to be indistinguishable in complexity from a human. If consciousness is an emergent property -- something which necessarily follows from a sufficiently complex system -- then what would make the EMH not conscious?

The relevance of "artificial intelligence" for Data is that, in this episode, intelligence is one of the three qualities that Maddox gives. Self-awareness is the other. Consciousness is the third, and this is what cannot be identified by an outside party. Perhaps some theory of consciousness could be finalized by the real 24th century which would aid in this, but within Trek it seems as if there is nothing to do but speculate. And so I believe that Picard would make the same argument for the EMH that he makes for Data, and, indeed, for other artificial entities which display the intelligence necessary to be able to more or less function at a humanoid level as well as self-awareness of being able to accurately communicate one's situation. That does not, of course, mean Picard would be correct. But while Picard absolutely argues that Data should have rights because he is conscious (or, rather, *might* be conscious, and the consequences are grave if he does not), he does not attempt to prove that Data is conscious, at all, but rather implicitly demonstrates, using himself as an example, the impossibility of proving consciousness. (This is not *quite* what he does; rather, he demonstrates that Maddox cannot easily prove Picard sentient, and Maddox manages to get out that Picard is self-aware, and we can presume that Maddox would believe Picard to be intelligent, so that still leaves consciousness.)

The question is then whether, according to your model, the episode fails to argue its case -- because no one does strenuously argue that Data's positronic brain is the true distinction between him and the Enterprise computer. This is one of the points that Peter G. argued earlier, that this is the only thing really at issue in this episode, and by extension the episode failed by not properly addressing it. I would argue that this is still not really true, because it is still valuable to come at the problem from the side Picard eventually takes: if Data meets all the requirements for sentience that Maddox can prove apply to Picard, it would be discriminatory and dangerous to deny him the same rights. The arguments presented still hold -- Riker's case that Data remains a machine, created by a man, designed to resemble a man, a collection of heuristic algorithms, etc., remains true, and Picard's case that humans are also machines, that Data has formed connections which could not have been directly anticipated in his original programming (though Soong could have made some sort of "keep a memento from the person you're intimate with" code, even still), and that Data's sentience matches his own remain true. Unless the form of Data's intelligence and heuristic algorithms really do exclude the EMH in some way, it still just seems to me that Picard has not really excluded the EMH in his argument. Since I don't think that Data's artificial brain vs. the complex code which runs the EMH is necessarily a fundamental difference, I don't think this gap in the episode is a problem. But even there I am not certain that the Enterprise computer would not fit some of Picard's argument, except that it is perhaps not as able to learn and adapt and thus would not meet the intelligence requirement.

It is possible that I simply like this episode enough that I'm more interested in defending it than in getting at the truth -- part of the problem with this sort of, ahem, "adversarial process." But the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments. Some of the reason that Picard makes the argument he does is that Data is similar enough to a human that analogies to incidents in human history can be, and are, made, in which differences which seem to be crucial but are later decided were actually superficial are used as a pretext for discrimination and slavery. If artificial consciousness (or artificial intelligence) never gets to the point where an android can be created of the level of sophistication of Data, then the episode still remains relevant as a metaphor for intra-human social issues, which in the end is mostly its primary purpose anyway.

It is worth noting that most forms of intelligence in TNG end up taking on physical form rather than existing in code. The emergent life form in "Emergence" actually gets formed in the holodeck as a physical entity, and that physical entity is then reproduced in the ship and then flown out. The Exocomps which Data goes to save are a little like little brains on wheels, and the connections that they form are, again, *physical* in nature -- they actually replicate the pathways. These along with the positronic brains of Data, Lore, Lal and Julianna suggest that TNG's take does largely match up with yours. However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show, Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery, but even if she is, it is specifically because of the Bynars' extremely advanced and mysterious computer tech, which is gone at the end of the episode. The holographic Leah actually is made to be self-aware, and here we might have to rely, following Picard's argument in the episode, on her not being sufficiently intelligent -- while she can carry on a conversation, she is not actually able to solve the problem (Geordi comes up with the solution), though this is hardly conclusive. Moriarty is the bizarre, exceptional case which "Elementary Dear Data" regards with optimism and "Ship in a Bottle" a more jaded pragmatism, living out his life in a simulation which can be as real as he is, whatever that is. Moriarty really is the most miraculous of these, and the one that most prefigures the Doctor, and really "E,DD" and "Ship in the Bottle" do not so much rule out that Moriarty could genuinely be conscious as supply a one-off solution to a one-off freak occurrence, giving him a happy life, if not exactly the full life he (apparently) wanted in exchange for him not killing them.
William B
Sat, Jun 25, 2016, 10:48pm (UTC -5)
The second "Heroes and Demons" comment you quoted was much more focused on nonlinearity as opposed to the physical enclosure of the brain. So I didn't address that in my comment, which I had started writing before you posted your second comment :) I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight. If it was not, then there is less of a problem; AI which would give the appearance of consciousness would not come up. If it was, then Picard's argument would still largely stand, and then either nonlinearity would need to be eliminated as a requirement for probable sentience or a different argument than the one Picard offers would be required -- unless there is something else in the episode that you think would preclude a more linear computer from gaining some rights in this episode if it displayed the same external traits as Data.
Peter G.
Sun, Jun 26, 2016, 10:36pm (UTC -5)
@ Andy's Friend

I'm just not sure how you definitively come to the conclusion that Data's 'consciousness' has emergent properties just like a Human consciousness does, and that this is due to his physical brain. There are several objections to stating this as a fact, although I grant it is certainly plausible. I'll just number the objections, not having a better method.

1) I don't see how you can define Data's consciousness as having similar properties to that of Humans since you at no point state what the qualities are of Human consciousness. What does it even mean to say they are conscious, in your paradigm, other than to say they say they feel they are conscious? Is there a specific and definitive physical characteristic that can be pinpointed? Because if, as you say, that quality is some "ineffable" something then we're left in the dust in terms of demonstrating what does or does not possess this quality. We could discuss whether a simulation creates the appearance of it, but that aesthetic comparison is the best we can do. As far as I can tell this is William B's main argument in favor of the episode having contributed something significant to the discussion.

2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference? If as you, yourself, say we don't know anything about what a positronic net is, then how can you say it's fundamentally different from the computer's design? At best we can refer to Asimov when discussing this, and the only thing we know from him is that for some reason a computer that works with positrons instead of electrons is more efficient. It is, however, still a basic electrical system, using a positive charge instead of a negative one, and requiring magnetic fields to prevent the positrons from annihilating. Maybe the field causes the circuits to act as superconductors? Who knows. Asimov never said anything about a positronic brain using fundamentally unique engineering mechanisms as far as I recall from his books. This leads to objection 3, which is a corollary of 2.

3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean? I could guess what it means, but you seem to state it as a fact, which makes me wonder what facts you're basing your statement on. It's certainly possible you're right, but how can you *know* you're right? Frank Herbert himself was convinced that Humans employ non-linear processing capabilities not replicable by machine binary processing. Then again, he couldn't have known there was such a thing as quantum computing. And I'm not even convinced that quantum computing is what you might mean when you (and he) discuss "non-linear" processing, which I guess we can also call non-binary computing. Even quantum computing, to the extent that I understand it, seems to be linear processing in multiple parallel streams so as to exponentially increase processing power. I don't know that the processing is necessarily done in a fundamentally different manner, however. It's not binary, but still appears to be linear. If there is a different kind of processing even than this (that maybe we possess) I don't see that we can even imagine what this is yet, much less ascribe it specifically to Data's circuitry.

4) Since it's never revealed that Data's programming and memory can't be transferred to another identical body, I don't see how you can be so sure that the EMH's easy movement from system to system makes for a basic difference between him and Data. If the Doctor is contained in more primitive computers than Data is then obviously he'd be easier to transfer around, but the distinction then would only be that Data's technology isn't well understood during TNG and thus can't be recreated yet. But this engineering obstacle is only temporary and once Datas could be constructed at will I don't see how you could be sure his personality couldn't be transferred just as easily as that of the EMH. Once Starships have positronic main computers there would seemingly be no difference between them at all, and likewise once android technology is more advanced there seems to be no good reason why the EMH couldn't be transferred into a positronic android body (if someone were to have the bad taste to want to do this :p ). The differences you state in this sense seem to me to be superficial and not really related to any fundamental difference in consciousness between the two of them. They're each contained in a computer system, one being more advanced than the other, but otherwise they are both programs run inside a mechanical housing. Data, like the Doctor, is, quite literally, Data. His naming is hardly a mere reference to the fact that he processes data quickly, I think, since the ship's computer does that too. Now that I think of it, his name somewhat reminds me of Odo's, where each is meant to describe what humanoids thought of their nature when they found them; one as an unknown specimen, and one as data.

5) If anything Data is even more constrained than the Doctor in terms of mannerism and behavior. He cannot use contractions, cannot emulate even basic Human behaviors and mannerisms, and cannot make errors intentionally. By contrast, the Doctor seems to learn much more quickly and is more adaptable to the needs of the crew in their situation. Both he and Data adopt hobbies, but while Data's are merely imitative in their implementation, the Doctor appears to somehow come up with his own schtick and tastes that are not obvious references to famous singers (as Data merely copies a violinist of choice each concert) or to specific instances in his data core. He really does, at the very least, composite data to produce a unique result, as compared to Data, who can't determine a way to do this that is not completely arbitrary. I'm not trying to make a case for the Doctor's sentience, but if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data, however what's done is done and we have to accept what was presented as a given. I would say that, at the least, if Data is sentient then so is the Doctor, although my preference would be to suggest that neither is. But William B's point is valid, that a hunch that this is so shouldn't be confused with the certainty needed to withhold basic rights from Data, which is all the court case was deciding. Then again, I see another weakness with the episode as being that the same argument could be turned on its head, with the position being that by default no 'structure' is automatically assigned 'sentient rights' until proven it is sentient. The Federation doesn't, after all, give rights to rocks and shuttlecraft *just in case* they're sentient. In fact, however, their policy (at least as enacted by Picard), appears to be closer to granting rights to any entity demonstrating intelligence at all, whether that's Exocomps, energy beings, tritium-consuming entities, the Calamarain, or any other species that demonstrates the use of logic and perhaps the ability to communicate. Picard's criterion never seems to be sentience, but rather intelligence, and therefore we are never offered an explanation of how this applies to artificial life forms in particular, since some of them (like Exocomps) are treated as having rights based on having "wants", while others, like Moriarty, are not discussed as having innate rights, despite Picard's generous attempts anyhow to preserve his existence. Basically we have no consistent Star Trek position on artificial life forms/programs. Heck, we are even given an inkling Wesley's nanites could develop an intelligence of some kind, even though they clearly have no central processing net such as Data's, and even if they do it wouldn't be as sophisticated. So why is an Exocomp to be afforded rights, but not the doctor, when the Voyager's computer is likely vastly more advanced than the Exocomps were? My main point here is that there is no conclusive evidence given by Star Trek that broadly points to Data as having some unique characteristic that these other technological/artificial beings didn't have, or contrariwise, that if they have it the Doctor specifically doesn't. We just don't have enough information to make a determination about this.

I guess that's enough for now, since I'm even beginning to forget if there are other points to respond to and I haven't the energy right now to reread the thread.
William B
Sun, Jun 26, 2016, 11:33pm (UTC -5)
I agree with Peter G.'s last comment. I would tend to say that I'd view Data and the EMH as probably sentient, rather than probably not sentient, because I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process. Either way though the evidence mostly suggests to me that if one is sentient, they probably both are. If Data's brain is different from the code which runs the EMH, this is mostly not emphasized by TNG/Voyager. (I also agree that the Doctor's seeming to be less limited than Data is maybe an artifact of the Voyager writers not putting as much effort in. It does to some degree support the idea put forward by Lore that Data was deliberately built with limitations so as to prevent him from upsetting the locals too much -- the Doctor veers more quickly and readily toward narcissism than Data does, perhaps because Data is acutely aware of his limitations.)

If I had to describe an overall arc in TNG and Voyager, it would be that TNG introduces the possibility, via Moriarty and "Emergence," that sentience can be developed within the computer, but generally Moriarty is treated as a fluke which is too difficult to deal with. I actually think that they don't exactly conclude Moriarty isn't a life form so much as try to respect his one apparent wish -- to be able to get off the holodeck -- and then put that on the backburner presumably handing the problem off to the Federation's best experts; why the holo-emitter takes until the 29th century to be built I do not know, but that's the way it is. In "Ship in the Bottle," they let Moriarty live out his life in a simulation as a somewhat generous solution to the fact that he was holding their ship hostage to achieving his, again, apparently impossible request (to leave the holodeck). In any case, in Voyager the EMH is slowly granted rights within the crew, and finally the crew mostly seem to view him as a person, and by season seven the question arises of whether the rights granted to the EMH within the specifics of Voyager's isolated system should be expanded outward. The bias toward hardware over software -- Data and the Exocomps vs. holographic beings -- seems to me to be something that the TNG-Voyager narrative implies is not really based on a fundamental difference between Data and the EMH, but a difference in the biases of the physical life forms, who can more readily accept another *corporeal* conscious being, even if mechanical, as being sentient, rather than a projection whose "consciousness" is located in the computer. I think that narrative implies that the mighty Federation still has a long way to go before coming to a fair ethics of artificial beings, which to me seems fine -- in human history, rights were expanded in a scattershot manner in many cases.

I do think too that the relative ease of Data's being granted rights has a lot to do with what this episode is partly about -- precedent. By indicating the dangers of denying Data rights if he is actually sentient, Picard avoids the Federation becoming complicit in the exploitation of an entire sentient race of Datas, or, if Data is not sentient, the Federation loses out on an army of high-quality androids. Either way, once the decision is made, while it may be tempting to overturn the decision if Data becomes dangerous (and it is implied in The Offspring and Clues that the narrowness of Louvois' ruling means that Data might be denied right to "procreate" or might be disassembled if he refuses orders), if he doesn't it is in the Federation's interests to maintain their own ethical narrative. Because a whole slew of EMHs and other presumably similarly advanced holographic programs were developed by the Federation, granting that they are sentient "now" (i.e., by the time Voyager is on) would mean admitting to complicity in mass exploitation which cannot be undone, only stopped. In miniature, we see this in Latent Image, where Janeway et al.'s complicity in wiping the Doctor's memory is part of what makes it especially difficult for them to change their minds about the Doctor's rights.

I do think that while Voyager seems to be pushing that sentience is possible in holograms, it still does not generally discuss whether the ship's computer *itself* could be sentient life, which is a big question. I guess Alice suggests that a ship's computer could be alive if it houses a demon ghost thing. It might also be that a ship's computer is so far from any life form that is classified as a life form that it is unknown how to evaluate what its "wants" would be, or what an ethical treatment of it would even look like. The same would probably actually apply to some forms of "new life" discovered which would not have "wants and needs" in a way that would be recognizable to humanoids, though I can't think of such examples in Trek.
Yanks
Mon, Jun 27, 2016, 7:14am (UTC -5)
Good lord guys!!

:-)

the discussion is afoot!! lol
Andy's Friend
Mon, Jun 27, 2016, 10:15am (UTC -5)
@Peter G. & William B

Like you, I also think very highly of this episode. It has a good script with some very memorable lines, memorable acting (especially by Patrick Stewart), and it is thought-provoking ― perhaps even more so when originally aired ― as it asks questions that we will undoubtedly have to ask ourselves one day, questions that touch the core of our own existence: what does it mean to exist?

But as I wrote above, and precisely because I consider the matter important, I feel that we must necessarily consider not only the in-universe data available, but also, real, hard science.

This means that while I may agree with you, in-universe, on a number of points, all that is trumped, in my opinion, by real science. Maddox, Moriarty, and Ira Graves are important: but they are so especially as glorious vehicles to ask important questions. And the answers, I find, must usually be sought outside the Trek lore.

This is not a criticism, quite the contrary. It is precisely why this is Star Trek at its very finest: as inspiration for further thought outside itself.

As such, consider Peter G. now:

PETER G.― "5) [...] if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data [...]"

Very, very good point, Peter. But notice one word you wrote: "RESEMBLES". "Resembles" matters not, Peter: see the last three sentences of this comment. And also: why, but why, after such a good observation, do you immediately afterwards write

PETER G.― "5) [...] however what's done is done and we have to accept what was presented as a given."

I don’t think so, Peter. I love Star Trek, and especially TNG. But we must be able to love the forest, and cut down a few trees every now and then to improve the view.

Moving on:

PETER G.― "2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference?"

It is that never, ever, are we given the impression that the Enterprise computer is the equivalent of an *artificial brain* in the scientific sense, whereas it is extremely obvious from the outset that Data’s is one such creation.

PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"

And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)

As for William, you are absolutely right when you write that "the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments." I wish in no way to detract from this wonderful episode, and I greatly appreciate what Snodgrass tried to do here, and indeed, mostly accomplished. And I could not possibly expect the writer back in 1988 to be an expert on artificial consciousness.

This is thus merely to say that I find this particular talk of ours a little difficult, because you both tend to use in-universe arguments much more than I do. You just wrote, for instance:

WILLIAM B―"However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show... "

That is of course completely legitimate: to consider what Star Trek says, and not science―to judge Star Trek on its own terms. And I am frequently impressed by the amount of detail you seem to remember. All right, then: what you then must do is investigate the coherence of the in-universe cases. Allow me three examples:

You first give an outstanding example of what I mean with an in-universe case:

WILLIAM B―"Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone)."

Precisely. But then, you write:

WILLIAM B―"Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery..."

No: it is only a mystery in the moment. The later example you give above retroactively affects "11001001," as it proves, even in-universe, that she cannot be conscious. Minuet is a program running on the Enterprise computer, just an even better program. But in your excellent wording: there is no spark.

And then you write:

WILLIAM B―"The holographic Leah actually is made to be self-aware..."

Maybe it is, and maybe it isn't: we must distinguish between various levels of self-awareness and consciousness. Many robotic devices on Earth today are beginning to exhibit the simplest traits of what to an outsider might appear as rudimentary self-awareness. But we must distinguish between self-awareness and mere artificial intelligence, or basic programming. If a robot vacuum-cleaner drives around a chair, you don't consider it sentient, do you? And if a robotic lawn-mower were programmed to say: "I'm with you every day, William. Every time you look at this engine, you're looking at me. Every time you touch it, it's me," you wouldn't call it self-aware, would you?

An example: toys are being programmed to react to stimuli, and can both say “Ouch!” and cry, etc. Questions:

1―Is the robotic doll that identifies a chair on its path, and walks around it, or even sits on it, self-aware?
2―Does it hurt the doll that says “Ouch!” if you drop it―even if you provide it with sensors able to measure specific force, and adjust the “Ouch!” to the force of impact?
3―Is the doll that cries if you don’t hug it for hours truly sad―even if it is programmed to cry louder the longer it isn’t hugged?
4―If the doll is allowed self-programming abilities, and alters its crying to sobbing, does that alter anything?
5―If the doll is programmed to say that it is a doll, manufactured at such and such a place, on such and such a date, and that its name is now whichever you have given it; and to warn that kicking it will damage its internal circuitry, to beg you not to kick it because it will hurt, and to begin to cry and sob, does that constitute any degree of self-awareness, or consciousness?
6―If you multiply the level of programming complexity a zillion times, does that change anything at all?

Would HAL dream? When that film was made, a considerable number of scientists would have answered yes. Not so today. The number of scientists who adhere to the thought that any sufficiently advanced computer program will result in artificial consciousness―a major group some fifty or sixty years ago, when Computational Speed was a deity to be worshipped and Man would have flying cars by the year 2000―has dwindled considerably since this episode was written. Paradigms have changed. Today, most say: computational speed, and global volume of operations, matter not. An infant child asleep has considerably less brain activity than a chess champion during a tournament match. That does not make it any less sentient.

Maybe you remember those days when this episode was written. The ordinary public, among whom I suspect most Star Trek writers must be counted in this context, marvelled then―or were terrified―at Carnegie Mellon's Deep Thought (I live in Copenhagen, remember? I’ll never forget when it beat Bent Larsen in 1988), and at the notion that a machine would, some day soon, beat the best chess players alive―regularly. And as I have written elsewhere, today any smart phone with the right app can beat the living daylights out of any international grandmaster any day of the week. But it isn't an inch closer to having gained consciousness, is it?

This divide, of intelligence vs consciousness, is extremely important. Today, we have researchers in artificial intelligence, and we have researchers in artificial consciousness. The divide promises―if it hasn’t already―to become as great as that between archaeologists and historians, or anthropologists and psychologists: slightly related fields, and yet fundamentally different. The problem is that most people aren't aware of this. Most people, unknowingly, are still in 1988. They conflate the terms, and still speak of irrelevant AI (see this thread!). They still, unknowingly, speak of Deep Thought only.

So my entire point is, this episode ends up being about Deep Thought. While the underlying, philosophical questions it asks, which science-fiction writers have asked for nearly a century by now, are sound, and elevate it, it is of course a child of its time: at the concrete level, it misses the point. It wishes to discuss the right, abstract questions, but doesn't know how to do it at the concrete level: it essentially reduces Data to Deep Thought.

But... I believe this was on purpose! As William points out, we had recently had the Ira Graves episode. In Manning & Beimler's story, the nature of Data's positronic brain was key. But to Snodgrass's, it was detrimental. I am convinced that she was fully aware of the shortcomings of her story: I believe that she doesn't use Data's positronic brain as an argument because it is a devastating one: it shreds Maddox apart. There would be no episode if she made use of it. And even worse: many viewers, in 1989 and even today, wouldn’t understand why. So she wisely ignores it: she refers to Data’s positronic brain only en passant, but does not use the logical consequence of *a frakking artificial brain!* during the trial itself. Instead, she uses arguments of the Deep Thought kind viewers might be expected to understand in 1989―and today. And so we get this compelling drama. It’s on a much lower level of abstraction, but it can be enjoyed by many more.

And for once, I can accept that choice. Normally I call this manipulative writing: it's like 'forgetting' Superman has super-strength and allowing common thugs to kidnap him, because we have to get the story started. But in this case, in the name of the higher purpose, I gladly give it a pass.

The problem in all this, of course, is that the various episodes wish to captivate the audience. Therefore, they have to allow for the possibility of the impossible, and they have to leave things as mysteries: is Minuet, or Moriarty, sentient? Of course not. But just having Geordi laugh and say "Captain, that's just a program!" and kill the magic right then and there would kill the episodes. Still, we must not let good story-writing cloud our judgement. We must be able to enjoy a good story, while saying, "Wonderful! In real life, however... Captain, that's just a program!" Ironically, B'Elanna actually bluntly says just that of the EMH. But apparently, few of the fans take her seriously.

On another matter, William made an astute observation:

WILLIAM B―"It is worth noting that most forms of intelligence in TNG end up taking on physical form... "

Allow me to improve on that with my favourite theme: they end up taking *humanoid* form. The only reason people take Leah, Minuet, the EMH, Moriarty, and perhaps even Data seriously, is very simple: they look human. Make them a teddy-bear, a doll, or a non-humanoid robot, and these conversations wouldn't be happening.

I'll give you an example: ping-pong robots and violin-playing robots. Industrial robots have extraordinary motion control and path accuracy these days. But if playing a violin is no different than assembling a Toyota, playing against a human requires a lot more than your standard cycle-pattern deviation. Yet pretty soon, robots will crush the best human ping-pong players as easily as chess engines today crush chess players. And already today, ping-pong robots show good AI. Now, combine an advanced ping-pong cum violin-playing robot in a Data-body, and let it entertain the audience with some Vivaldi before destroying the entire Chinese Olympic team one by one. How many would start wondering: how long until it becomes alive? How long before it will dream? But show a standard ping-pong robot do the same, and they'll simply say: cool machine.

Now imagine a non-humanoid lifeform, leaving humans no possibility of judging whether it was sentient or not. So you’re right, William: if you'll pardon the pun, form is of the essence.

...and by the way: notice that you did it again. "Intelligence" is irrelevant, William. Intelligence can be programmed, already today. Consciousness cannot: we can barely understand it.

Finally, a couple of last notes:

1. We may wish to simply turn off Leah, or Minuet―*and just do so.* The EMH, due to the importance of its function, cannot simply be ignored. But being forced to treat a program as self-aware does not make it self-aware.

2. WILLIAM B―"I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight"

Exactly: it wouldn't. But notice, just like Peter G. above, your choice of words: "SIMULATE." Who on Earth cares about simulations, Turing tests, and mere artificial intelligence, William? Do. Or do not. There is no simulate.
Peter G.
Mon, Jun 27, 2016, 1:11pm (UTC -5)
I'll just quote the relevant section, which includes quotes from both of us:

"PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"

And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)"

I must ask you what your credential is to speak so authoritatively on this subject. I would like to know if you have professional expertise in any of the following fields: computer engineering, neuroscience, mathematics, linguistics, information theory, fluid dynamics, quantum field theory, grand unified/string theory, or any other field somewhat close to these that gives you an understanding that a well-read layman would lack. If so, I'd like to know so I can understand your comments within that context and learn what you mean and where you're coming from. If not, I don't know how you can claim to know so much about "science" that you can reduce my comments to be merely references to Star Trek and not to real life. In point of fact I tend to try to discuss Star Trek logic on its own terms because the show is *fictional* and thus isn't actual reality. If you wanted to strictly discuss reality you'd have to tear the whole show apart. I choose instead to see how the show deals with its own premises, and I think William does too.

However, in terms of real world science and logic, I asked you a direct question about what you mean when you speak of "non-linear" data processing, which you claim Data and humanoids have and which mere machines don't. Your reply was to curtly state that we're having two different conversations. I would like to begin with at least one, since so far I'm not even sure what it is you're arguing. All of my objections above were meant to elucidate that I wasn't sure you were articulating a real argument about Data as opposed to making some conjecture that cannot really be discussed. However, maybe you have something specific in mind when you speak of non-linear processing and how Data's 'brain' is constructed, and if so I'd like to hear it.

And by the way, in modern computational theory it is by no means a discarded notion that a sufficiently complex series of recursively communicating circuits might form what we call consciousness.
William B
Mon, Jun 27, 2016, 3:07pm (UTC -5)
@Andy's Friend:

In addition to what Peter G. said, I want to clarify my position a bit. I appreciate your comments very much and I think you may be onto something with regards to what Data is and represents. However:

1) I think it's still by no means made absolutely clear that Data has an "artificial brain" which meets the specifications that you state it does. Don't get me wrong. I am happy to believe that Data does. And with Graves, as we discussed, there is an indication that Data's brain can support a human identity better than the ship's computer can. However, we are still left with the possibility that Data's positronic brain is simply better at processing and reproducing human-like behaviours. We do not know for sure that Data actually is housing Graves, rather than simulating him in a way that is indistinguishable from the real thing. Nor do we know that Data is not generally simulating consciousness rather than actually doing so. Similarly, that Data's positronic brain is very hard to reproduce, and causes cascade failures in the case of Lal, etc., is no guarantee.

The references to Data's positronic brain are many and this supports your contention that there is something about Data's brain that is capable of consciousness in a way that traditional computers lack. However, there are other explanations. They may simply call Data's positronic brain a brain because, well, he is an android, designed in the shape of a human. The control centre of the automaton "body" made to resemble a human is located in the part which is made to resemble a head, and performs a function which is at least superficially similar to a humanoid brain, and thus it can be called a brain.

You said in an earlier comment that it is the fault of the show that it fails to establish what Data's artificial brain does. This still assumes that it is a settled issue that Data *has* an artificial brain, rather than something which people, for convenience, call an artificial brain. It may not be. Even if we accept your premises as definitely true -- which, perhaps you have more expertise in this area than we do -- it is still a leap that Data fits this definition you are stating. Maybe he doesn't *really* have a "nonlinear" brain, and according to the proposal you have put forward, Data only simulates consciousness.

And even then, even if he definitely has an artificial brain, I don't think it's absolutely true that if Data does have something which was *designed to be an artificial brain*, this would tank Maddox's argument. What you are stating, essentially, is that it will at some point be possible to distinguish between what is actually conscious and what isn't, not based on behaviour or anything, but based on the physical makeup of the object itself. This may turn out to be true, and maybe several experts in the field believe it to be true. But I don't think that it is that settled. I mean, what if a "nonlinear brain" is simply much better at producing external signs indicative of consciousness, and, in fact, some of those external signs include the *physical makeup* itself, which is more similar to (humanoid) brains than traditional computers?

And you know, I basically agree that Data probably has an "artificial brain," as you say it, but I don't know how you make this claim with certainty. Soong could also have simply called Data's brain an artificial brain for PR purposes. This is why the issue of simulation is important. It is my contention that one has to look at the outcomes produced by the "brain" or "computer" rather than the form that the "brain" or "computer" takes.

2) And even if Data definitely has an artificial brain which meets these criteria, I think it's still, as Peter G. says, not at all a settled issue that a sufficiently complex program (recursive, he emphasizes, but I will be a little more general) would not be conscious. The reason I emphasize whether or not it is important that a "nonlinear computer" could *simulate* consciousness is that I am going under the assumption that it is impossible to conclusively prove consciousness, UNLESS one is the conscious entity.

Line up a human, Data, the Doctor, and, say, Odo; some sufficiently non-humanoid, non-"artificial" life form which displays the *external* traits of sentience -- an ability to learn and adapt, an ability to change and exhibit new behaviours, and perhaps the ability to state that it is, indeed, alive. Which is conscious? You would argue that the human, Data and Odo are and the EMH is not. I would argue that it is impossible to be sure; EVEN THE HUMAN can only be verified to be conscious because he is sufficiently similar to me. I am not making this egocentric claim idly; what I mean is that it is impossible for me to be sure that the human is not a sufficiently advanced automaton, perhaps created by nature, perhaps otherwise. I don't actually know that I have free will, even; I know with certainty that I experience the thing which I define as "consciousness," and because of the extreme level of similarity of other human beings to me, I must reasonably assume that they have the same trait. This assumption then can reasonably be carried out to other humanoid life forms, which in terms of modern biological classification would even be the same *species* (since interbreeding between humans and Klingons, Romulans, Vulcans, Betazoids, Ocampa etc. is possible and in most of those cases we also see that their offspring can reproduce, as is interbreeding between Cardassians and Bajorans, though I can't think of Cardassian-human or Bajoran-human offspring offhand). But then with Data, the EMH and Odo we are left with beings which are completely different in construction and origin. How would we conclude that they are conscious? Or *not* conscious?

Maybe we could identify some physical system which "explains" our consciousness. But even then we would be left with uncertainty about whether other systems which are physically similar are actually conscious as well, or are simply reproducing the machinery while missing some unknown spark. We could, I suppose, claim that we know that Odo is (probably) conscious because there is no evidence that any conscious beings set out to create him, and argue that it is unlikely for a being which displays traits consistent with consciousness to develop "by accident" without consciousness being there as well. However, Odo and other changelings are of course imitative; it is baked into their very nature that they imitate and recreate other beings. The changelings might simply be some sort of inorganic matter which for whatever reason imitates other, actually-alive beings, and then displays external signs of being alive.

The reason I bring this all up is that my claim is that it is still anthropocentric to insist that the "nonlinear" versus "linear" distinction is all that matters, because it just moves the trait that defines consciousness from external signs of consciousness to the sort of physical, observable mechanisms that produce it. It is still making an argument based on what looks human, but instead of arguing about actions it is arguing about hardware.

So my claim is that it may be that consciousness develops when there is a sufficiently advanced system to reproduce all the external signs of consciousness; that the act of simulation is itself an act of synthesis. Understand me: I am not saying that, e.g., someone writing "I am alive" on a piece of paper is sufficient to reproduce life. But to be able to reproduce the full breadth of human behaviours (or, indeed, those of a sufficiently complex animal, perhaps) may not be possible without producing consciousness along the way. This is perhaps an idiotic notion. It gets rid of some of the problems of anthropocentrism -- decentering away from the physical form and makeup of the thing which produces "apparent consciousness" -- but introduces another, in that the only way to define consciousness is to define something which acts sufficiently *human* to appear conscious. I have little to say to that charge except that I'm thinking about it.

3) Just as a small point, the Exocomps and the "Emergence" life form did not take humanoid form. The Exocomps are very close to the hypothetical "box on wheels" Maddox insisted would not be granted any rights, which is why I think that episode is (despite some significant flaws) an important follow-up to TMoaM. The "Emergence" life form does use human forms on the holodeck, but its eventual form is a funky-looking replicated series of tubes and connections.

4) As far as the suggestion that Peter and I were talking about Trek and you were talking about the real world, there is a lot to say, but I don't think it's so clear-cut. You have decided that what is being portrayed, in-universe, is that Data has an artificial brain and that Voyager's computer is definitely not an artificial brain (and thus that the EMH, run on this and other similar computers, cannot be conscious). This still relies on evidence provided in-universe and, *where evidence is lacking*, filling in the gaps with your own impressions of the intent. If it is not utterly conclusive that Data's brain is different from the ship's computer, you rely on the idea that the computer must be equivalent to modern "linear" computers, which is still an assumption about intent. There is nothing wrong with this, but I think that it just means that you are also "down in the muck" with the rest of us, trying to interpret what Trek is actually saying, rather than purely talking about the real world. ;)
William B
Mon, Jun 27, 2016, 3:55pm (UTC -5)
Here is a somewhat off-topic comment expanding on my point (4). Notably, I do not expect this to be answered within the thread, necessarily, but I think it's important for me to say a bit more of what I think this episode is about, and why it is important, as well as what I think Data (and the EMH) are about, as characters, and why they are important, in addition to being about artificial consciousness/intelligence/life etc. issues.

As far as the charge of us talking Trek speak rather than real life speak: well, certainly real life issues are the most important. However, I think it's fair to say that all Trek is commenting on the real world, it is just that some of it is commenting more directly than others. Data, the EMH etc. are obviously representations of real-world ideas, to some degree or another, but it's a question of how we interpret them. That Data is definitely a representation of a potential being with an "artificial brain" is by no means certain. It is also very possible that the (limited, contradictory) take on artificial consciousness within Trek is primarily there to talk about other aspects of human life -- how humans treat each other, how we treat other (biological) life forms on the planet, etc. -- and so for those purposes, the differences between Data and the EMH might not be important at all -- they might just be different views on the human condition.

There are autistic and Asperger's communities which have used Data as a sort of mascot. Data's experience of difficulty understanding "human," i.e. normative, emotions, his alienation, and other traits like that have been taken on as representative of some humans who find this "artificial life form" a good representation of their experience. This was not exactly the intent of the character, but I think that part of the mythical basis for artificial beings is to talk about difficult aspects of our plight as humans -- of being physical, material beings whose worth is often decided collectively by our ability or inability to fit in with larger conceptions of humanity. Again, Louvois' ruling in this episode includes: "Does Data have a soul? I don't know that he has. I don't know that I have." There is no reason that the EMH is necessarily precluded from being a particular kind of representation of a person, in which case I think it may be missing the point to declare him non-conscious. You can say that this should *not* be the point of Voyager, that in its portrayal of computer coding it should hew more closely to what experts believe are the ultimate signifiers of consciousness, and you may be correct, but I think that in order to talk about what Trek means we have to suss out what it is trying to say and how that relates to our world.

As I see it, part of the problem with indicating that an "artificial brain" is the key difference between Data and the EMH, and one which would simply destroy Maddox, is that it is to some degree divorced from human experience up to this point. That does not mean that it won't be proven in the future, and that moment might fundamentally change human existence. But the majority of human existence has been a matter of taking blind stabs in the dark, trying to reason outward from ourselves to beings sufficiently similar to ourselves and to extend to them the things we would like extended to us. And that carries with it a level of uncertainty about ourselves. *I do not know that I have a soul.* The fact that the "code" that runs in our brains is sufficiently different from that in a computer does not guarantee that we are not simply an advanced computer or that we are not a mere set of physical processes designed for self-replication, with our experience of consciousness being some sort of nearly irrelevant by-product of a self-sustaining system, ultimately no more intrinsically meaningful than a process like fire. One of the key things that TNG does is introduce, from the very first episode, the idea that it is also not merely a matter of humanity deciding which other entities should have rights, but that we have a responsibility to prove that we as a species demonstrate qualities that make us more than children groping about. The Q could easily, and do, look at us and declare: how could a lump of matter with a bunch of electrical processes controlling a bit of organic machinery be conscious in any meaningful sense? That there is no absolute certainty that Data is the same as humans is important, because this is partly a way of looking, anew, at things we take for granted about *humans*, of stripping away which traits are ultimately irrelevant in defining our place in the universe. Granting that Data has value is a way of granting that we have value, through an act of faith. Reducing the whole process to a binary switch wherein some given object either is or is not a brain, *and insisting that this can be verified with certainty*, robs the story of much of its power and also removes the uncertainty that is near the heart of the human condition.

As Picard asks Maddox, I would say: "Are you sure?" Are you sure that the artificial brain correctly distills the essence of what is important about humanity, consciousness, and rights-having beings? Now, of course, I may be misinterpreting your claims. But I think that it is not merely a matter of Snodgrass hedging her bets for the sake of drama that an episode entitled "The Measure of a Man" avoids some sort of ultimate determiner of worth in consciousness. I think that the uncertainty is a fundamental part of human existence, and important to every person who has ever wondered, in real life, if they do not matter, and had to take a leap of faith to believe that they did, as individuals and as a species. If there is some sort of "magic bullet" to the consciousness debate wherein the, or a, physical mechanism of *all* consciousness is identified, then probably the debate over which beings qualify as life forms will shift away from "consciousness" and into another trait which is, once again, mysterious. Once the physical mechanisms of consciousness are sufficiently identified, after all, then we might well understand it enough to be able to do away with discussing human behaviour in terms of a "spark" instead of the result of eminently comprehensible physical processes, albeit very complex ("nonlinear") ones, which may again require us to take a leap of faith to believe ourselves more than just the physical process that governs us.
Andy's Friend
Mon, Jun 27, 2016, 6:05pm (UTC -5)
@William B & Peter G.

I was writing to Peter, but I'll answer William's last comment first because it can be done very quickly: I basically agree with everything you wrote.

I think you're quite right about the episode being robbed of its power without uncertainty. It dares to ask great questions. It follows that it should not provide certain answers. And you are right: knowingly believing in something uncertain is a very powerful thing. It is what makes faith, true faith, indestructible.

I also think you're very, very right regarding Data's multiple roles, such as being a mascot of the autistic & Asperger's communities. What makes Data so fantastic is that he is so many people in one: the Child, the Good Brother, the Autist... The Android is actually pretty far down the list in importance. This is undoubtedly why he is so beloved: most of us can find a part of ourselves in him. Mirrors, was it, William?

I have a little more difficulty in seeing the Doctor in quite the same multi-faceted fashion. Every Star Trek fan I know likes the Doctor a lot, but for very different reasons than they like Data: the two do not receive the same kind of love.

I particularly like your reference to Q, because, as you'll remember, that is my recurring theme: the humanoid & the truly alien. And you're of course right: any truly alien being might question our human consciousness; and, if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seems to have one, and I'm pretty sure it's quite different from Data's brain.

Also, and this is answering both of you now, it is true that we cannot know with absolute certainty that Data's "positronic" brain is an artificial brain. There are strong indications that it is, but we cannot know for sure; and it is true that Data, too, could simply be another Great Pretender.

This leads me to that most interesting aspect: faith. I was going to answer William earlier:

WILLIAM B―"I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process."

...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind. There are still highly intelligent scientists who share that belief, in spite of all the advances we've made in the past decades in both neuroscience and computer science. It is, quite simply, a belief, akin to a spiritual one. Some people *want to believe* that strings of code, like lead, can turn into gold.

But that of course is a bit like my belief that Data's positronic brain is an artificial brain, i.e., some sort of cognitive architecture affording him consciousness. I, too, *want to believe* that he has that artificial brain. Because to me, Data would lose his magic, and all his beauty, were it not so. As I wrote, there are very strong indications that this interpretation is a correct one; but as in religion, I have no proof, and I must admit that it is, ultimately, also an act of faith of sorts. I want Data to be alive. To me, Data wouldn't make much sense otherwise. And I know full well that this is, deep down, a religious feeling.


I'm sorry, guys, it's getting late here in Europe... Until next time :)
Peter G.
Mon, Jun 27, 2016, 7:20pm (UTC -5)
@ Andy's Friend, I still don't know why you're hung up on whether or not Data has an artificial "brain". You have yet to define what that means. Are you quite sure you're really talking about something, as opposed to issuing phrases that sound like something but have no content? If there's content then why not just say what it is instead of using placeholders? You haven't even provided an explanation for why the Human brain isn't just a sophisticated computer. Until you can answer my very clear point-blank question in any way (about what non-linear processing is) I'll assume you're not really interested in talking about this. I also assume from your lack of confirmation that you are not an expert in the field of robotics, information theory, etc etc.

Incidentally, I find this particular line somewhat accursed:

"...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind."

If positing a theory about robotics makes someone a 'religious believer' that you can't communicate with, then I find it hard to believe you are making such absolute declarative statements with a straight face. If you know something special about this then own it and lay it down for us. The condescension needs real creds to back it up, my man.

Andy's Friend
Tue, Jun 28, 2016, 5:18pm (UTC -5)
@Peter G. & William B

Peter, I don’t know what it is you don’t understand. Everything you ask me to clarify I already have in my two posts from 2014.

Try reading what William writes. He has understood perfectly what I mean:

WILLIAM B―”What you are stating, essentially, is that it will at some point be possible to distinguish between what is actually conscious and what isn't, not based on behaviour or anything, but based on the physical makeup of the object itself.”

Which is partly (but only partly: see below) correct, and what I wrote to begin with in 2014 specifically about Data & the EMH:

“It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.”

In very simplified terms, not WHAT, but HOW. But I further clarified yesterday:

“if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seems to have one, and I'm pretty sure it's quite different from Data's brain.”

Another thing: you seem to have misunderstood my point about the “religious” aspect. What I mean is that we all, deep down, are predisposed not merely to accept, but to actively prefer, and choose, one specific possibility, one theory, as true. Einstein famously did it, and it took him many years to recognize his fault. It’s just the way we humans are. In this, our opinions are akin to religious beliefs: William B’s, yours, and mine. Some of us are better at listening to reason than others. But as long as matters remain highly speculative, no reason is more true than any other. And all we really have are our humours, our moods, our feelings (because even intellectual choices are based on emotions) to guide us.

So your comment:

“If positing a theory about robotics makes someone a 'religious believer' that you can't communicate with...”

...was completely uncalled-for.


Now William:

Having said that, I do believe that you have a point, and the Great Link is a good example. What I mean is, to use your words above, it is necessary for us to be able to *recognize* and *understand* the nature and abilities of the "physical object" itself.

In other words, while we may be able to recognize Data’s “positronic brain” as an artificial brain capable of consciousness, simply because it resembles and emulates what we know, we may not be able to recognize anything as alien as the Great Link as another kind of physical object capable of consciousness. And in such cases, at least at first, we will depend on behavioural analysis. And who knows if we will ever be able to understand the Great Link?

So in a way, both sides are right. And you are very right: we will probably always remain somewhat anthropocentric. It is difficult not to be, when that is what we know and understand best. And if we indeed ever gain warp capability, who knows what new life we will encounter?

Finally, just to correct a slight oversight of yours, you wrote:

“You said in an earlier comment that it is the fault of the show that it fails to establish what Data's artificial brain does.”

No, that’s not what I said: I agreed with you. Try reading it again ;)
Peter G.
Tue, Jun 28, 2016, 6:06pm (UTC -5)
@ Andy's Friend,

I could not have been any more specific with my request for clarification, and yet for multiple posts you have dodged entirely. I thought we were having a friendly discussion but it begins to feel more like you're hiding behind words. If you think my question is "already answered" in your previous posts then you must think my reading comprehension is pretty low. I am now 99% convinced you have no idea what "non-linear processing" means, and likewise what the mechanical difference is between a "computer" and an "artificial brain." And yet you base your entire argument on these terms, claiming definitively that this is the case and that I need to go do research to catch up to your level.

And by the way, mentioning that each of us sticks to his "one" belief out of bias as religious people (or all people) do makes two fatal mistakes: 1) It assumes that each of the three of us is making comparable unprovable claims. 2) Implying our ideas are no better than faith-based convictions puts all ideas on an equal irrational playing field, which is both insulting to reason itself and also insulting on a personal level.

1) We are not all making strong claims. William and I were tossing around ideas and wondering what to make of the episode. You are making a bold and definitive claim, and stating that it amounts to "real science" as opposed to Trek-speak. The burden is on you to demonstrate any validity to what you're saying, as you are the only one making a strong claim here. I said your idea is plausible; you say it's true. William and I both agree that based on what you've said so far you cannot know this is so.

2) I don't go in for this passive-aggressive argumentation style, where when called out on BS you go ahead and say that my opinion (or William's) is just some faith-based hunch that can't be reasoned with. It's just as rude as calling us morons as far as I'm concerned. I know you included yourself in that description, but calling all three of us idiots still means calling me an idiot, which I don't accept.

PS - I'll bet $100 cash right now that you can't explain in detail why Einstein was "at fault" for pushing for one theory to be true. What is this so-called fault he was wrong about for so long? And bonus points if you can show that it was because he was naturally inclined to "want to believe" just "one theory". Spoiler: what you said about Einstein wasn't true. The only 'fault' to date his theories have admitted is with the cosmological constant, and that was only because he didn't have access yet to data showing an expanding universe. And that "mistake" has been replaced with dark energy anyhow, which is the same accounting trick in reverse, so even his idea of how to deal with the problem is still considered to be correct. Nothing about relativity has, to date, been called into serious question in the mainstream, nor has even his comment about god playing dice, about which the jury is still out. The Copenhagen interpretation of QM is not a "fact".

