Star Trek: Voyager

"Heroes and Demons"


Air date: 4/24/1995
Written by Naren Shankar
Directed by Les Landau

Review by Jamahl Epsicokhan

"It brings on the spirit of the bear and gives us strength to swing our swords."
"It's more likely to bring on profuse sweating, convulsions and acute delirium. This is a fungus common to sub-arctic climates and, let me assure you, quite poisonous."
"Yes, but those it does not kill it makes strong. A most hearty plant."

— Freya and the Doctor

When Ensign Kim runs a holodeck program of the medieval tale "Beowulf," a mysterious presence seizes control of the holodeck, causing crew members who enter to vanish without a trace. Only the Doctor may have the ability to successfully investigate and retrieve them.

When Kim disappears inside his program, Janeway sends Chakotay and Tuvok to investigate. Upon entering the holodeck, they discover the shutdown routine and safety devices do not work (surprise). They come across a warrior named Freya (Marjorie Monaghan)—the king's daughter—who has information that may help them. Kim, who was playing the part of the hero Beowulf, was killed by the evil monster Grendel, she explains. Chakotay tells her that he is Beowulf's kinsman, here to avenge his death, and that he wishes to face Grendel. Tuvok and Chakotay hope to find answers to the holodeck takeover from Grendel. They find evidence that Kim apparently went through the process of matter-energy conversion (uh-oh) due to some unthinkable malfunction. When they encounter Grendel, however, they become victims of matter-energy conversion.

Sporting the ultimate in anti-climactic teasers and a completely by-the-numbers first act, "Heroes and Demons" initially looks like a failed concept—Voyager's first entry in the long line of cliched Next Generation holodeck plots. Fortunately, the story takes a perfectly appropriate and creative twist in the second act by putting the holographic Doctor at the center of the action.

Using information Chakotay and Tuvok retrieved before vanishing, Janeway concludes that Grendel is the key to the mystery and that sending more people into the holodeck would likely result in their vanishing as well. This leads to the idea of transferring the Doctor's program into the holodeck to investigate. Since he's a hologram with no matter, he conceivably would be unaffected by the problems in the holodeck.

This is where the episode starts to pick up. One limitation Picardo's character has faced up to this point is that he's a doctor—and only a doctor. Not only is he often limited to spouting medical technobabble, he's been faced with the prospect of doing all his acting on the sickbay set. This plot gives Picardo an opportunity to escape the confines of sickbay and play the hero while also offering a startling amount of depth to his character. Several scenes reveal that this character has feelings and desires beyond the limitations of his programming, and, hopefully, future episodes will put the Doctor's new characterizations to use.

The Doctor even chooses a name before leaving for his mission—Dr. Schweitzer. (However, since he decides to abandon the name by episode's end, I will continue to call him "the Doctor" to avoid confusion.) In the holodeck program, the Doctor meets Freya, who takes him back to the beleaguered kingdom's hall, which sits in defenseless terror of another attack by Grendel. The setting benefits from a number of lively holodeck characters. Freya's strength and compassion balance each other to make a respectable heroine, while the menacing Unferth (Christopher Neame), a warrior who doubts the Doctor's abilities, proves to be a believable roadblock to our hero's progress. King Hrothgar (Michael Keenan) is a pitiable, helpless man who has to watch his kingdom fall in front of him.

The Doctor's ability to let matter pass through him convinces the kingdom that weapons can't hurt him. Perhaps they have hope of repelling Grendel after all. The kingdom hall cheers and labels him their savior. Here, Picardo's looks of bewilderment are priceless.

In fact, this is a good episode for Picardo all around. It features his genial presence at its best, working well for the episode's humorous moments and dose of mild goofiness, as well as the serious character-driven scenes. A scene between the Doctor and Freya provides some genuinely effective soul-searching dialog. The moment when these two holographic characters kiss works surprisingly well. (The idea may seem slightly inane, but it displays one of the best qualities about Star Trek stories—their ability to take the audience anywhere.)

The plot, alas, turns into another lame-brained exercise in the obvious when Torres and Paris discover that the mysterious entity in the holodeck is yet another misunderstood lifeform, which the crew beamed aboard the ship after mistaking it for an energy source. The "Grendel" alien converted the crew members into energy in retaliation for the crew's inadvertent "kidnapping" of two others of its race. Give me a break. You would think the crew would've learned their lesson after "The Cloud." The revelation can be predicted a mile away, and I wanted to slug the characters for taking so long to figure it out.

But I don't care too much that the lifeform plot isn't particularly inventive. The Doctor's adventures in the holodeck are much more important, and they work. In order to negotiate a peace treaty with the aliens, the Doctor must hand-deliver one of the captured lifeforms to the "Grendel" alien as a gesture of good faith. On the way he is confronted by Unferth, who attacks him after mistaking the lifeform for a talisman meant to destroy the kingdom. Freya shows up in time to save the Doctor's life but pays with her own. Her death scene also works quite well, with a touch of theatrical flair.

Naturally, the Doctor is successful in his task, and all three crew members are released unharmed. Janeway logs a commendation in the Doctor's file for his successful first away mission. "Sounds like you had quite an adventure in that holodeck, Doctor," she says. Yes, he did.

Considering how many facets of the Doctor's personality and emotions this hologram program ends up tapping into, it's quite a substantial episode for the Doctor. Some viewers may find themselves saying, "It was just a program. Freya wasn't a real person, and she didn't really die." I say don't overanalyze the situation. Remember, the Doctor is only a hologram himself. (I guess, in a sense, these holo-characters are his own people.) Restarting the holodeck program to bring Freya back may sound like an easy and obvious solution but would also constitute poor drama.

Aside from the recycled bit with the lifeform, I'd like to see more like this from Voyager. An ambitious score by Dennis McCarthy and convincing production design supply added bonuses. But please give the Doctor a name already.

Previous episode: State of Flux
Next episode: Cathexis


62 comments on this review

Tue, Oct 20, 2009, 1:29am (UTC -6)
Since we know the Doctor doesn't ever really get a name, remind me again: do we ever at least get a "Doctor... Who?" joke out of the show :-)

The Doctor was definitely a highlight of VOY. This episode is only one of the first where he gets to, ummm, shine!
Wed, Feb 2, 2011, 10:02pm (UTC -6)
I don't know if anyone noticed this problem... but didn't we just see an episode like this with "The Cloud"?

Sure, it's totally different... but this is 2 times that Voyager mistakenly did something bad to an actual life-form that they thought was an energy source.

I guess they need energy in the delta quadrant, but these kinds of episodes seem a little out of place as a whole. It reeks of the "anomaly of the week" problem, which has been overdone so many times on Star Trek.

As a whole, this series didn't really give us anything *new*. It was just more of the same, for 7 straight seasons.

People love and hate DS9 - but you can't deny that it was new and different from its predecessor. Voyager, and later Enterprise, didn't offer anything new. And so after 11 seasons of mostly crap, we no longer have Star Trek on television, and might not for a very long time to come.
Fri, Apr 8, 2011, 9:50pm (UTC -6)
I can't get over how dumb it is that they are desperately searching for a new power source, and yet are still using the holodecks. Incompatible power sources my ass! Power is power, the whole thing is just ridiculous.

Holodeck episodes are the worst. In my opinion, they're worse than Ferengi episodes. There have been a small handful of interesting holodeck episodes, but usually they are just goofy and annoying. The only truly great holodeck episode I can think of was "It's Only a Paper Moon" on DS9, and this episode is no Paper Moon.

The holodeck program on this one was far too reminiscent of TNG's Qpid for me which, apart from a few hilarious one liners from Worf, I never want to be reminded of ever again. The Alien of the Week was totally lame and forgettable, and I could definitely have done without ever having to watch Tuvok and Chakotay trudging through the forest discussing poetic ways to describe Harry Kim.

All that said though, the doctor is wonderful. He somehow managed to make this show watchable for me. His sense of timing and humor is really dead on, and he is always a joy to have on screen.

The doctor gets four stars for making something likable out of this turd of an episode, and the plot gets zero stars for being another cliche, cornball, derivative holodeck adventure.

So, all in all, this one gets two stars from me!
Fri, Aug 12, 2011, 5:36am (UTC -6)
For the love of god is there any non-sentient form of energy anywhere in this entire freaking quadrant!?

Starfleet folk are always so respectful of the holodeck's 'rules' even when there are lives at stake and the safeties are malfunctioning and there's a seemingly malevolent entity running around disappearing their crewmen. What exactly is stopping them from simply stomping through shooting everything on sight? How come the Mysterious Entity of the week apparently agrees to be bound by the grendel_event program flag? The doctor isn't even solid, there's no reason for him to waste time interacting with anything at all here (though unlike Tuvok and Chakotay he at least has a motivation to want to take it all in.)

Ah I know, these are petty complaints, but I ran clean out of suspension of disbelief about 20 minutes in due to sheer boredom (having seen the semi-recent and entirely-horrible Beowulf movie certainly didn't help).
Fri, Aug 12, 2011, 6:14am (UTC -6)
Yeah, it's not hard to find so many logical problems and faults with the plotting in these episodes. It's almost like the writers were just pumping stories out of a manufacturing plant... putting all the basic genres into a pot, stirring it up, and seeing what shit came out.

Having said that, if you take the episode for what it is, it's not bad. But the premise of the episode - as with most episodes in this series - is awful.
Thu, Sep 8, 2011, 11:15am (UTC -6)
This is one of the standouts of Voyager's first season! I, for one, put a far greater emphasis on the EXECUTION of an episode than whether or not some element of its plot may be derivative of another episode that has come before. "Heroes and Demons" has terrific production value, a stand out performance from Bob Picardo, memorable guest characters, and typically first rate direction from Les Landau. I've come to realize that episodes directed by Les Landau are wonders of light. Some which spring to mind: Sins of the Father, Family, Time's Arrow Part I & II, Chain of Command Part II, and The Chute.
Thu, Nov 24, 2011, 8:53pm (UTC -6)
Along with the above commentators, I expect a high standard of realism out of my 24th-century space-exploration sci-fi television shows.
Kazon Hornblower
Wed, Oct 10, 2012, 6:09pm (UTC -6)
Ugh, watching this episode is like realizing too late that you didn't wipe well enough. And now you have an hour's drive ahead of you on a hot summer day.
Mon, Sep 2, 2013, 4:54pm (UTC -6)
Tuvok: I would point out there are no demons in Vulcan literature.

Chakotay: That might account for its popularity.

Perfect Vulcan-human interaction... what was missing from T'Pol in Enterprise, and definitely reminiscent of Spock... excellent line
Tue, Mar 4, 2014, 2:55am (UTC -6)
Once more, a very stupid premise for the malfunction of a holodeck. Really, really silly. Not to mention, as others have already pointed out above, how it hurts to watch the holodeck being used when the ship is short on energy supply. The excuse of incompatible types of energy is unbearable.

Even so, in the end it was an interesting episode when it focused on the existential questions of the Doctor. Just as with Spock in TOS, Data in TNG, and Odo in DS9, the Doctor has a lot of potential for development, and it's nice when it starts to be delivered. I hope it keeps coming, since he elevates the show quite a bit. If only the rest of the episode were not so ridiculous, with a plot so embarrassing...
Sat, Jun 14, 2014, 1:19am (UTC -6)
I liked this episode, it was a fun holodeck romp that's executed well. The plot is a mishmash of things we'd seen before but the doctor stuff makes it unique.
Tue, Aug 19, 2014, 11:24am (UTC -6)
The holodeck scenario was pleasant. The aliens were very Star Trek. The Doctor was fantastic. My only real complaint is that I wish we'd learned more about these aliens. All in all, not bad. Quite entertaining, in fact.

I don't complain too much about holodeck usage on Voyager (excepting that holodeck episodes will become overused). I realize they are an independent subsystem with their own reactors. I just wish for some better explanation on how it all works together.

3 stars.
Sun, Oct 5, 2014, 12:13pm (UTC -6)
I hate "theme episodes" (often holodeck episodes). ST is already a genre TV series: science fiction. If I wanted to watch another genre (cowboys, medieval, vampire...), I would watch another show!

That episode is the first in, unfortunately, a long list of theme episodes on Voyager (although they also appear on TNG and DS9...). Bleh.
Dave in NC
Sun, Oct 5, 2014, 4:25pm (UTC -6)
@ Charles

They are still humans, right?

If we actually had a holodeck, where do you think history buffs would like to spend their time? I'm sure plenty of people would like to try their hand as an Old West Sheriff, or maybe Caesar of Rome.

At least I would. ;)

Sun, Oct 26, 2014, 9:59pm (UTC -6)
Ugh, I hated this one. But before I get into my rant, just a few comments on the episode itself. It was awful. I mean, it's not just that the show was yet another holodeck malfunction episode. Or even that it was yet another random energy being episode (seriously, that's the third one this season!). It wasn't just the rotten awful science, which just sounds like stringing a bunch of random words together (a trend I'm starting to get real sick of). I mean, unusual photonic energy? Really? How are photons unusual?

But it was a dumb plot too. So Grendel just sits in the barn or whatever instead of going out to find his little comrades? Why did the Doc hang around talking to everyone when he was immaterial, and so could have waltzed right to the barn? Meanwhile, the whole point of sending the Doctor was that everyone thought Grendel couldn't kill him, but then Grendel chopped his arm off (now that's ironic...). So Janeway says it's a delicate first contact situation, but then sends the Doc back. Which is ridiculous, since he's not in any less danger than anyone else! Not that it matters, since the "delicate negotiation" was quite simplistic. Quite lucky that all it required was releasing the caged being in front of Grendel.

But no, that's not the real problem of the episode. The real problem is that there's no use pretending anymore. The Doctor is clearly sentient.

And that's terrible.

See, up to this point, we could believe that he wasn't sentient. He certainly didn't act like it in the pilot, freaking out at the thought of being the only Doctor. Sure, it kinda went back and forth, but we could still claim he wasn't sentient. That all of his odd personality quirks were just programmed into him. He wasn't singing yet, and he wasn't living his own life yet. But now he chose a name, built a friendship with some holodeck character, and felt heartbreak. He's sentient.

Doesn't anyone remember Measure of a Man? People claim it's one of their favorite TNG episodes, but it is destroyed by the Doctor. See, Picard and Guinan didn't actually prove Data's sentience. They only pointed out the devastating consequences if they were wrong. If Data was sentient, and Starfleet judged against him, then Starfleet would be allowing for the creation of a slave race. It was that reason why the magistrate refused to judge against him. This was a message partially reinforced in The Offspring and fully reinforced in Quality of Life.

And yet the message is blatantly, brutally eliminated here. Starfleet has essentially created a slave race. That's what this episode means. If the Doctor is sentient here, then it presumably means that he was always sentient. It's hard to imagine him becoming sentient in only a month or two, after all. Which means there are hundreds of sentient beings, locked away in computers, summoned only when needed, ignored the rest of the time. Hundreds killed without a choice in the Dominion War. Crusher sacrificed one to the Borg. All sentient, all without a choice.

And yet he's such a popular character. Just because he's sarcastic and snarky and Picardo does a great job. But it's hard to enjoy his character when it goes against everything the show claims. Now, I'm not one to say the show should always agree with me. I find Starfleet ethics to be poorly thought out, juvenile, or downright evil at times. But the problem is that this goes against everything in Trek ethos. It's a joke of an idea. And no one seems to care. This little problem is never, ever brought up. Well, I vaguely recall it being brought up a lot near the end of the show, but they had to turn the doctors into miners (the universal symbol of creating evil slave owners) in order to show it. They didn't want to admit that they made a huge mistake here.

It didn't have to happen that way. Instead we could have had the Doctor denying his sentience for a season or two. Why not? Why do we need a repeat of Data? Why not have a character that doesn't want to be human? All they had to do was keep him wanting to be nothing more than a Doctor for a little while longer. Kes would need to keep working at it, or maybe even give up. Perhaps he would eventually gain sentience over time. But then that would make him unique among the holographic doctors. Then it would be ok for Starfleet to use them like that. But alas, the damage is done.

Nice to know the producers no longer care about Data.
Mon, Oct 27, 2014, 8:29am (UTC -6)
Just to throw a bit of a monkey wrench into your argument... I don't think the Doctor was sentient at this point, in the sense of being more than the sum of his programming inputs/commands.

He's specifically programmed to have compassion because he's a doctor. I think he might experience some stray feelings for her, but it's not until Lifesigns that I personally consider him sentient.

"EMH: I've been experiencing periodic lapses in concentration and difficulty handing objects. There may be a malfunction in my tactile acuity subroutine."

"EMH: You said before you knew me that you were just a disease. Well, before you, I was just a projection of photons held together by force fields. A computerised physician doing a job, doing it exceptionally well, of course, but still it was just a profession, not a life. But now that you are here and my programming has adapted, I'm not just working anymore. I'm living, learning what it means to be with someone, to love someone. I don't think I can go back to the way things were, either."

Before he figured out he was falling in love he thought the symptoms were a malfunction. Which, to me, means this was the first time he really experienced events that triggered "feelings" that weren't part of his original program.

So to recap

1) In S2s Lifesigns he starts experiencing feelings that were not part of his initial programming. Kes suggests that it was his adaptive program adapting.

2) In S3s Darkling and Real Life he starts projects to improve himself... tweaking his personality and imagining having a family. These are things that teenagers go through (trying on new personality aspects/imagining their life in the future). So now we have him experiencing emotions that weren't intended and life goals beyond being a doctor.

3) In S4 he finds a kindred spirit in Seven, another outsider and takes her under his wing as they explore the human condition together.

4) It all comes to a head in S5's Latent Image when he uses those emotions to affect a decision. In the situation in this episode I imagine his program flips a coin as he himself suggests "Two patients, for example, both injured, for example, both in imminent danger of dying. Calculate the variables. My programme needs to ascertain which patient has the greater chance of survival, and that's the one I treat. Simple. But, what if they have an equal chance of survival? What then? Hmm? Flip a coin? Pick a card?"

It was here when he first made a decision based on his own emotions, friendships and life. If you don't consider him sentient before now he certainly is by this point. And this happens 18 months earlier, right before Seven joins... so I'd peg his sentience as occurring somewhere between late S2 (Lifesigns is episode 19 that season) and late S3 with him fully realizing/dealing with the implications of it in early S5.

To me, saying he is sentient as early as this episode grants sentience to Vic (he has emotions/friendships as well), Minuet, Janeway's Michael (who experiences heartbreak) and other assorted holodeck characters. I'm just not willing to concede the Doctor is sentient here.
Mon, Oct 27, 2014, 8:37am (UTC -6)
I actually think that despite Voyager's flaws the Doctor has one of the top 5 or higher personal arcs in all of Star Trek.

He starts off little more than a tool, begins to consider having a life, makes friends, aspires to be humanoid (because being such an outsider is isolation), falls in love, develops his personality and hobbies, has goals, deals with the consequences of sentience (Latent Image), takes on a pupil, eventually stops wanting to be human and embraces what he is, dabbles with wondering where he belongs and eventually even gains the ability to side against his friends in a hologram civil war and even nearly murder someone in Critical Care.

The only serious misstep in his arc (in my opinion) is the way that in Equinox deleting his ethical subroutines turns him into a mindless slave for the Equinox crew. I feel like that did him a disservice considering how far he'd come. I'd sort of have preferred for them to delete the ethical subroutines and then have him turn around and murder Ransom. Without ethics I'd still be me; I'd just not care how I went about accomplishing my goals anymore. So it sucked that they deleted that and he wasn't him anymore.

But really that's one of very, very few problems with his arc over 7 years and it's the only glaring one to me.
Mon, Oct 27, 2014, 12:07pm (UTC -6)
Skeptical & Robert,

The EMH is one of my all-time favorite characters.

Robert expertly listed and discussed his path throughout the series so I won't regurgitate.

The issue of "sentient" was brought up by Skeptical.

I don't think the way Doc was treated/revealed throughout the series in any way counters or minimizes the "Data is not a toaster" decision in MoM (the most over-rated episode in Star Trek history). After all the hubbub, Data wasn't proven sentient; he was granted permission to choose.

That specific term wasn't even brought up in "Author, Author" (VOY's MoM episode).

Here is the ruling:

"ARBITRATOR: We're exploring new territory today, so it is fitting that this hearing is being held at Pathfinder. The Doctor exhibits many of the traits we associate with a person. Intelligence, creativity, ambition, even fallibility. But are these traits real, or is the Doctor merely programmed to simulate them? To be honest, I don't know. Eventually we will have to decide, because the issue of holographic rights isn't going to go away. But at this time, I am not prepared to rule that the Doctor is a person under the law. However, it is obvious he is no ordinary hologram and while I can't say with certainty that he is a person, I am willing to extend the legal definition of artist to include the Doctor. I therefore rule that he has the right to control his work. I'm ordering all copies of his holo-novels to be recalled immediately."

So I think ST has aptly dodged the "sentient" issue with Data and The EMH.
Mon, Oct 27, 2014, 6:48pm (UTC -6)
Thanks for the comments. I freely admit to not being as well versed in the Doc's story as others; this is my first time watching Voyager since it was on the air. So perhaps the Lifesigns episode does make it clear that he wasn't sentient at this point.

But I've been specifically watching for this since Caretaker, and he passed my Turing Test in this episode. There were some arguable points beforehand, namely his encouraging nature towards Kes' studies and his bruised ego when Lt. Extra refused to talk to him. But the first one could be seen as an outgrowth of his nature as the EMH (Kes' studies would improve his efficiency), and the second can be somewhat argued in that sense. But here? He chooses a name and, more importantly, chooses to avoid that name because it was associated with an emotional loss. There's no way to reconcile something that personal with programming for an EMH to me. That was very personal on his part, very emotional. How was it anything but sentience?

As an aside, Minuet never seemed to be sentient to me; everything she did was in accordance with the goal of keeping Riker in the holodeck. Vic is a bit trickier, but he was specifically programmed to be genial and to appear sentient (as in, specifically programmed to realize he was a holodeck character). The most questionable part of Vic was his sneaky way of getting Odo and Kira together; that seemed above and beyond what a program might do. But only maybe, so one can still assume Vic was not sentient.

By the way, one point I forgot to mention. In the episode where the Doctor gets transferred to the Alpha Quadrant and meets the EMH Mk 2, the new EMH seems awfully jealous of the Doctor's experiences, particularly regarding sex if I remember correctly. And this EMH was brand new and had no life experiences. Again, this is sounding like a character that is sentient right away, rather than one that can become sentient. Of course, this is the Mk 2 version, so maybe it's just more advanced programming. But then my complaint about the Federation creating a slave race would still apply to the new set of EMHs, even if it doesn't apply to this set.

In any case, if Lifesigns walks this episode back some, I will be a bit happier, although I still think the producers were too cavalier with this character. And I'm glad to see, as Yanks pointed out, that even the Federation seemed to view this particular EMH as unique, which suggests he did gain sentience rather than always had it (or gained it way too easily). While there may be some ethical concerns with dealing with mass producing a group of non-sentient programs that have the capability of becoming sentient, it's a different question than mass producing those that are sentient, so I'll let it slide. At least Beverly wasn't throwing a newly born sentient being at the Borg in First Contact.

I agree that Picardo's Doc is an absolute joy to watch, but I fear sometimes that the characterization itself is not that great. I enjoy watching him, even if I don't agree with the way the writers handle him. Personally, I always liked the Data episodes that help to highlight his IN-humanity, because that's what makes him different or unique. My impression at least is that there weren't too many of those shows for the Doctor (Latent Image being one I did like). Personally, I would have been happy watching an EMH that was completely against learning anything new or thinking like a person for a season or two. It'd be a refreshing change of pace to see a computer not want to become human for once.
Tue, Oct 28, 2014, 8:27am (UTC -6)
"But here? He chooses a name and, more importantly, chooses to avoid that name because it was associated with an emotional loss. There's no way to reconcile something that personal with programming for an EMH to me. That was very personal on his part, very emotional. How was it anything but sentience?"

A dog can mourn their owner but I'm not sure a dog is sentient.

I think that in this episode the Doctor became more than a toaster (it's definitely the first real step on the journey I mentioned), but I'm not sure I'd grant you sentience.
Tue, Oct 28, 2014, 8:31am (UTC -6)
And I do get your point about a slave race and that the EMH Mk2 having... "desire for improvements" out of the box may have been problematic.

I will also throw out there that it is possible (although never mentioned) that Voyager's EMH can only become sentient because of the bio gel packs. Some of his circuitry is biological. I'm actually kind of sad they never went there with that.
Tue, Oct 28, 2014, 8:36am (UTC -6)
I've always thought Data was sentient. Data was a commissioned officer in Starfleet and should have had all the rights along with the responsibilities that come with that. One of the reasons I'm not a MoM super-fan like most. The trial should never have happened. You can read my review here on the MoM page if you want further elaboration.

But the question of "Sentience" is quite the discussion.

From MoM [TNG]:
"PICARD: Commander, would you enlighten us? What is required for sentience?
MADDOX: Intelligence, self awareness, consciousness."

From Webster:
"1: responsive to or conscious of sense impressions
2: aware
3: finely sensitive in perception or feeling"

I would say Data has demonstrated throughout the series that he has met that criteria. With #3 and "consciousness" being debatable as Data has stated many times he has no feelings.

So I guess the big question mark when talking about the EMH is do these definitions reveal him to be sentient?

I'm not so sure. He is a computer program, while Data has a positronic brain. I think I see a difference there.

Topic for another day I guess :-)

But I find it a little surprising that Trek never really took a stance on this subject.
Tue, Oct 28, 2014, 10:38am (UTC -6)
@Robert, re: Gel packs :

It's a good thing they didn't go down that road, because it would weaken the precedent set in MoM, Quality of Life and the Doc's arc (especially "Flesh and Blood") that AI is just as valid as organic life. I think it's a stronger argument to say that what makes a life truly worthwhile is not whatever endowments are bestowed by a lifeform's creator (be it parent, programmer or divinity), but how those endowments are put to use. That idea is fully embraced in "Latent Image," vis-à-vis "La Vita Nuova."
Tue, Oct 28, 2014, 10:44am (UTC -6)
I didn't mean that he should have been brought to life via the gel packs because life can only be organic, I just meant that if, in THIS case the Voyager Doctor was special because the organic circuitry components had bestowed a uniqueness unto him in the vein of Data's positronic brain... it would have made the slavery issue with the other EMHs less disturbing.

And our EMH is not always in contact with the organic gelpacks (like when he's in the mobile emitter) but maybe they could have been the thing that gave him the so called "spark of life".

It would of course weaken some S7 storylines (like Flesh and Blood and Author Author) but I think S1-S6 would hold up just fine under those conditions (and then S7 would have had to be a bit different).
William B
Tue, Oct 28, 2014, 11:14am (UTC -6)
I'm not sure why it's a storytelling problem if the EMHs are sentient, or are capable of becoming sentient in a really short period of time. It is certainly true that it paints the Federation in a negative light -- but that makes quite a powerful point, which is that through ignorance it is possible for people to be complicit in horrible acts. The title, "The Measure of a Man," has a double meaning (at least); it is not just about whether Data is a person, but about whether the Federation, as represented by Picard, can recognize the dangerous patterns they can fall into, and can do the right thing regardless of how hard it is. The measure of a man, in other words, is partly his ability to correct himself when he finds that he has done wrong through ignorance or insensitivity (or, indeed, malice or deliberate wrong action, though these are not the case here); and such is the measure of societies, as well.

I think it's clear that no one *thought* they were making a sentient holoprogram with the EMHs, and I think that the EMHs' personality etc. were primarily just so that people could interact with them as if they were humans. However, as was pointed out in The Quality of Life, the fact that it was not the *intention* of the creators to create a sentient life form does not mean that this life form is not sentient. The Doctor doesn't immediately recognize his own potential, either, but comes to recognize it over time.

If the problem here is that the Federation should have corrected itself more extensively, and sooner, and its failure to do so is evidence that it is an evil organization and thus trashes the Roddenberry ideal, well, that's something to consider -- but I think it makes sense that it's really, *really* hard for the Federation (and indeed, for the Voyager crew) to properly identify the line between artificial creation with no internal life and sentient, self-determining being.
Tue, Oct 28, 2014, 1:06pm (UTC -6)
It's an ethical problem, not a storytelling one. And in some ways it's dealt with in Author, Author with the plight of the EMH MkIs.
Thu, Oct 30, 2014, 10:23pm (UTC -6)
Yanks: "[The Doctor] is a computer program, while Data has the positronic brain. I think I see a difference there."

So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience.
Thu, Oct 30, 2014, 10:36pm (UTC -6)
Well Data's positronic net was supposedly so hard to replicate that nobody other than Soong was successful.

That being said, the Enterprise's computer accidentally made a sentient hologram in Moriarty... so the fact that life could spring out of an EMH is far from difficult to imagine, given that Voyager is a more advanced ship and the pre-VOY canon supports such an accident anyway.

I've always liked the EMH as it connected to Data because of a few lines mentioned here and there ("The Offspring" and "Eye of the Beholder") about how hard it was for Data to transition into sentience. I actually feel like they paid a lot of that story off in Voyager. In a lot of ways it's an ongoing story that began with "Measure of a Man" and ran all the way to "Author, Author".
Fri, Oct 31, 2014, 9:03am (UTC -6)

"So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."

I think it's a little more than that. Doc can be rewritten at a whim. Data cannot. When "Data" was downloaded into B-4, he reverted back to essentially a child. Doc, on the other hand, just pops himself into whatever computer or 29th century mobile emitter he can find.
Fri, Oct 31, 2014, 9:17am (UTC -6)
@Yanks - Well Data is more of a hardware program and Doc is more software. That said, you can probably make Mario entirely on a circuit board or a software exe and have it play exactly the same.
Fri, Oct 31, 2014, 12:05pm (UTC -6)
Not sure what you're saying Robert.
Andy's Friend
Fri, Oct 31, 2014, 1:29pm (UTC -6)
@Elliott, Peremensoe, Robert, Skeptikal, William, and Yanks

Interesting debate, as usual, between some of the most able debaters in here. It would seem that I mostly tend to agree with Robert on this one. I’m not sure, though; my reading may be myopic.

For what it’s worth, here’s my opinion on this most interesting question of "sentience". For the record: Data and the EMH are of course some of my favourite characters of Trek, although I consider Data to be a considerably more interesting and complex one; the EMH has many good episodes and is wonderfully entertaining ― Picardo does a great job ―, but doesn’t come close to Data otherwise.

I consider Data, but not the EMH, to be sentient.

This has to do with the physical aspect of what is an individual, and sentience. Data has a body. More importantly, Data has a brain. It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.

Peremensoe wrote: ”This is a physiological difference between them, but not a philosophical one, as far as I can see.”

I cannot agree. I’m sure that someday we’ll see machines that can simulate intelligence ― general *artificial intelligence*, or strong AI. But I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by ”sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:

”The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers.”

This is the main difference between Data and the EMH, and why this physiological difference is so important. Data possesses an artificial brain ― artificial neural networks of sorts ―; the EMH does not.

Data’s positronic brain should thus allow him thought processes somehow similar to those of humans that are beyond the EMH’s capabilities. The EMH simply executes Haikonen’s ”programmed strings of commands”.

I don’t claim to be an expert on Soong’s positronic brain (is anyone?), and I have no idea about the intricate differences and similarities between it and the human brain (again: does anyone?). But I believe that his artificial brain must somehow allow for some of the same, or similar, thought processes that cause *self-awareness* in humans. Data’s positronic brain is no mere CPU. In spite of his very slow learning curve in some aspects, Data consists of more than his programming.

This again is at the core of the debate. ”Sentience”, as in self-awareness, or *artificial consciousness*, must necessarily imply some sort of non-linear, cognitive processes. Simple *artificial intelligence* ― such as decision-making, adapting and improving, and even the simulation of human behaviour ― need not.

The EMH is a sophisticated program, especially regarding prioritizing and decision-making functions, and even possessing autoprogramming functions allowing him to alter his programming. As far as I remember (correct me if I’m wrong), he doesn’t possess the same self-monitoring and self-maintenance functions that Data ― and any sentient being ― does. Even those, however, might be programmed and simulated. The true matter is the awareness of self. One thing is to simulate autonomous thought; something quite different is actually possessing it. Does the fact that the EMH wonders what to call himself prove that he is sentient?

Data is essentially a child in his understanding of humanity. But he is, in all aspects, a sentient individual. He has a physical body, and a physical brain that processes his thoughts, and he lives with the awareness of being a unique being. Data cannot exist outside his body, or without his positronic brain. If there’s one thing that we learned from the film ”Nemesis”, it’s that it’s his brain, much superior to B-4’s, that makes him what he is. Thanks to his body, and his brain, Data is, in every aspect, an independent individual.

The EMH is not. He has no body, and no brain, but depends ― mainly, but not necessarily ― on the Voyager computer to process his program. But more fundamentally, he depends entirely on that program ― on strings of commands. Unlike Data, he consists of nothing more than the sum of his programming.

The EMH can be rewritten at will, in a manner that Data cannot. He can be relocated at will to any computer system with enough capacity to store and process his program. Data cannot ― when Data transfers his memories to B-4, the latter doesn’t become Data. The EMH can be shaped and modelled and thrown about like a piece of clay. Data cannot. The EMH has, in fact, no true personality or existence.

Because he relies *entirely* on a string of commands, he is, in truth, nothing but that simple execution of commands. Even if his program compels him to mimic human behaviour with extreme precision, that precision merely depends on computational power and lines of programming, not thought process.

Of course, one could argue that the Voyager’s computer *is* the EMH’s brain, and that it is irrelevant that his memories, and his program, can be transferred to any other computer ― even as far as the Alpha Quadrant, as in ”Message in a Bottle” and ”Life Line”.

But that merely further annihilates his individuality. The EMH can, in theory, if the given hardware and power requirements are met, be duplicated at will at any given time, creating several others which might then develop in different ways. However ― unlike say, Will and Thomas Riker, or a copy of Data, or the clone of any true individual ―, these several other EMHs might even be merged again at a later time.

It is even perfectly possible to imagine that several EMHs could be merged, with perhaps the necessary adjustments to the program (deleting certain subroutines any of them might have added independently in the meanwhile, for example), but allowing for multiple memories for certain time periods to be retained. Such is the magic of software.

The EMH is thus not even a true individual, much less sentient. He’s software. Nothing more.

Furthermore, something else and rather important must also be mentioned. Unless our scope is the infinite, that is, God, or the Power Cosmic, to be sentient also means that you can lose that sentience. Humans, for a variety of reasons, can, all by themselves and to various degrees, become demented, or insane, or even vegetative. A computer program cannot.

I’m betting that Data, given his positronic brain, could eventually devolve to something such as B-4 when his brain began to fail. Given enough time (as he clearly evolves much slower than humans, and his positronic brain would presumably last centuries or even millennia before suffering degradation), Data could actually risk losing his sanity, and perhaps his sentience, just like any human.

The EMH cannot. The various attempts in VOY to depict a somewhat deranged EMH, such as ”Darkling”, are all unconvincing, even if interesting or amusing: there should and would always be a set of primary directives and protocols that would override all other programming in cases of internal conflict. Call it the Three Laws, or what you will: such is the very nature of programming. ”Darkling”, and other such instances, is a fraud. It is not the reflection of sentience; it is, at best, the result of inept programming.

So is ”Latent Image”. But symptomatically, what do we see in that episode? Janeway conveniently rewrites the EMH, erasing part of his memory. This is consistent with what we see suggested several times, such as concerning his speech and musical subroutines in ”Virtuoso”. Again, symptomatically, what does Torres tell the EMH in ”Virtuoso”?

― TORRES: “Look, Doc, I don't know anything about this woman or why she doesn't appreciate you, and I may not be an expert on music, but I'm a pretty good engineer. I can expand your musical subroutines all you like. I can even reprogramme you to be a whistling teapot. But, if I do that, it won't be you anymore.”

This is at the core of the nature of the EMH. What is he? A computer program, the sum of lines of programming.

Compare again to Data. Our yellow-eyed android is also the product of incredibly advanced programming. He also is able to write subroutines to add to his nature and his experience; and he can delete those subroutines again. The important difference, however, is that only Soong and Lore can seriously manipulate his behaviour, and then only by triggering Soong’s purpose-made devices: the homing device in ”Brothers”, and the emotion chip in ”Descent”. There’s a reason, after all, why Maddox would like to study Data further in ”Measure of a Man”. And this is the difference: Soong is Soong, and Data is Data. But any apt computer programmer could rewrite the EMH as he or she pleased.

(Of course, one could claim that any apt surgeon might be able to lobotomise any human, but that would be equivalent to saying that anyone with a baseball bat might alter the personality of a human. I trust you can see the difference.)

I believe that the EMH, because of this lack of a brain, is incapable of brain activity and complex thought, and thus artificial consciousness. The EMH is by design able to operate from any computer system that meets the minimum requirements, but the program can never be more than the sum of his string of commands. Sentience may be simulated ― it may even be perfectly simulated. But simulated sentience is still a simulation.

I thus believe that the EMH is nothing but an incredibly sophisticated piece of software that mimics sentience, and pretends to wish to grow, and pretends to... and pretends to.... He is, in a way, The Great Pretender. He has no real body, and he has no real mind. As his programming evolves, and the subroutines become ever more complex, the illusion seems increasingly real. But does it ever become more than a simulacrum of sentience?

All this is of course theory; in practical terms, I have no problem admitting that a sufficiently advanced program would be virtually indistinguishable, for most practical purposes, from actual sentience. And therefore, *for most practical purposes*, I would treat the impressive Voyager EMH as an individual. But as much as I am fond of the Doctor, I have a very hard time seeing him as anything but a piece of software, no matter how sophisticated.

So, as you can gather by now, I am not a fan of such thoughts on artificial consciousness that imply that it is all simply a matter of which computations the AI is capable of. A string of commands, however complex, is still nothing but a string of commands. So to conclude: even in a sci-fi context, I side with the ones who believe that artificial consciousness requires some sort of non-linear thought process and brain activity. It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever (I am prepared to discuss non-corporeal entities, but elsewhere).

Finally, I would say that the bio gel idea, as mentioned by Robert, could have been interesting in making the EMH somehow more unique. That could have the further implication that he could not be transferred to a computer without bio gel circuitry, thus further emphasizing some sort of uniqueness, and perhaps providing a plausible explanation for the proverbial ”spark” of consciousness ― which of course would then, as in Data’s case, have been present from the beginning. This would transform the EMH from a piece of software into... perhaps something more, that was interwoven with the ship itself somehow. It could have been interesting ― but then again, it would also have limited the writing for the EMH very severely. Could it have provided enough alternate possibilities to make it worthwhile? I don’t know; but I can understand why the writers chose otherwise.
Fri, Oct 31, 2014, 4:22pm (UTC -6)
Andy's Friend, good stuff. I think I can address much of what you say, in support of the idea that the Doctor (as I call him, distinct from his original EMH programming) is a sentient being. (I'm not deeply attached to the argument, though, and have not seen all of Voyager, so I'm interested in counterarguments.)

For now I will just take one point, that of Data's BODY. Is this really a key aspect of his personhood, helping establish his sentience versus the Doctor? I say No:

Remember when Data's head was detached in "Time's Arrow." Suppose his body had been lost or destroyed. Then the head is found. Nobody in Starfleet has the ability to make another Soong-quality android body for him, at least not right away. So Geordi wires the head up to ship power, to whatever minimum support systems it needs to 'wake up' and talk to his Enterprise friends. If you like, we can even suppose that the head is mechanically damaged--his eyes, ears, and mouth don't work properly--so he needs external input/output devices hooked in too. But the positronic brain is intact.

Is Data no longer Data? Is he *no longer* sentient? Of course not! He's still alive, and everybody is glad... he's just suffered a kind of severely disabling injury, that he may eventually 'recover' from if a new body is built.

The PERSON = the MIND. If a mind fulfills the measure of a "man," it does not matter what kind of physical infrastructure is carrying it... if any.
William B
Fri, Oct 31, 2014, 4:31pm (UTC -6)
Ooh, okay. Very interesting post, Andy's Friend, continuing the discussion in an interesting way.

I think that this is a tough topic, and so rather than respond directly, I'm going to expand a bit on what my assumptions are for the story.

I think there are two ways to consider the sci-fi elements in Trekdom (and in most SF works generally). They aren't mutually exclusive entirely, and there is some overlap, but they are still somewhat distinct. One is to use the future setting to consider aspects of our human present and past, of what it means to be human, animal, alive. The other is to consider the implications of the rapid technological progress and/or the potential implications of what we might find in space. *Most* alien species are basically about humans, full stop; a very select few aliens in Trek are something like an attempt to imagine what a truly alien creature might be like.

With the artificially intelligent characters, who may or may not be capable of sentience as we see them, we can read them a few ways. I think that the story does mostly come down on the side that Data is sentient. We cannot guarantee that he is, for the same reason that we cannot guarantee that other humans are sentient; about as far as we can get with absolute certainty is Descartes' "I think, therefore I am," because what I experience as my own consciousness and existence may not exist in other humans. Fortunately for everyone, the vast majority of humans have little difficulty in taking that extra leap in recognizing that other people, who are so similar to us in biology, behaviour, etc., probably have inner lives somewhat similar to our own.

With non-human animals, things become trickier. Elephants can recognize marks on themselves in mirrors, and so that's viewed as something of a mark of sentience. But we can easily program simple mechanical or electrical systems to self-regulate without any need to call them sentient. For the most part, with the animal kingdom, we are willing to extend certain aspects of consciousness to them insofar as they reflect human behaviour, brain activity, and the like.

There are some AI in TOS, with the various sentient computers, and V'Ger is a big deal in TMP, but Data is the first long-term examination of this question. In a sense, even Moriarty just starts as a reflection of Data (appearing in a Data-centric episode, and whose second appearance in "Ship in the Bottle" leads to him being foiled by Data) and Data's "family" of Lore, Lal, and Juliana are iterations of him. The Exocomps are the other major figure.

Now if we take this as a simple story about humans, Data's presumptive sentience becomes this: if we view the universe in purely deterministic terms, and believe that the soul is not some kind of divine, God-granted spark, how do we know it exists at all? Data is one way of looking at the mechanical nature of man, head-on; he is like a human who recognizes that everything that goes on in his brain is a series of electrical impulses, and *knows* it to a degree where he can do self-diagnostics, and the like. He is self-aware in a frightening way. (There is some similarity to Watchmen's Dr. Manhattan, here.) The faith required to view Data as sentient, even though he might "just be" a mechanical replica of a person, programmed to act as if he is human, who lacks some of the traits that we closely associate with humanity (like emotion), is a way of coming to terms with our own partly mechanical nature, and recognizing that even if we are physical beings we still have something like, as Captain Louvois says, a soul. I'm not speaking about the divine spirit or whatever; I'm trying to find a word for sentience, consciousness, an inner life, etc.

On a technological level, this also has the second implication: if humans have sentience which is not the result of a divine spark or even necessarily something special about our biology, why *couldn't* a purely artificial, manmade life form, exist, which has just as much inner life as we humans do? The two points here -- the way in which humans are like Data, and the possibility that a mechanical man like Data could be produced who is like humans -- are interrelated.

Now, I don't know what it is that produces this inner life in humans; I think that animals, especially ones close to humans biologically, have something similar, though maybe to a different degree, but they are not really able to communicate with us, so it's not exactly easy. It's actually quite easy to recognize Data as being like a human, because ultimately, while he's got a funky positronic net which functions in ways we can't understand (because, obviously, the writers don't know how it works because they're writers, not 24th century scientists), he still has several traits we do associate with humanness. He looks human, acts kind of human, can communicate with humans, has a discrete body with a brain, located in his head, and which is based, in certain respects, on the human brain. Andy's Friend points out that it's only members of the Soong Family who can exert real control over Data's programming -- that's Soong and Lore. The Borg Queen manages to hack into it in First Contact, but the Borg exert similar control over human bodies because of their scary ability to manipulate biology and technology. I agree that this is important for the story, and I think that's partly because, as we know from "Datalore" onward, Data's very emotionlessness is at least partly because Soong's attempt to make an android as close as possible to a real human led to creating a murderous, vindictive monster (Lore), and Data is hamstrung a little by limitations built into him, which Soong is still working on fixing. Well, there's more to say about that some other time, but yeah, I agree that it's an interesting exception in Data's case rather than the rule; Soong (and Lore, who acts in his stead) can manipulate fundamental aspects of Data in the way that only families/formative influences can.

So if consciousness is not actually dependent on having a carbon-based, DNA-based, brain-with-axons-and-neurons-etc. body, as is suggested by Data, what other requirements can be dropped? The Exocomps in "The Quality of Life" are designed (by the writers) to be as weird, robotic, NON-human looking as possible, beings that humans have no particular investment in empathizing with the way we do automatically, I think, empathize with Data, if for no other reason than that he's played by a human actor. I think there are implications for real biological life-form issues of the day, too, in that episode, where Data takes an extreme position that the fact that we can't fully communicate with these beings doesn't mean that they aren't alive, and possibly even intelligent life, possibly deserving of the same rights as anyone. This is a real issue when it comes to animal life today -- it's an uncommon position, to be sure, that animals are just as sentient and just as deserving of the right to live as humans, and it's a position a bit too extreme for me at the moment. Still, I think the episode is partly about how humans naturally empathize with people who set off alarms as being Like Us, and who can communicate with us, and that's not the sole metric that we should use to determine the rights of another. From a tech perspective, the episode is asking whether our willingness to accept Data is just because Data was *designed* to be sentient; what if sentience really just...happens, without it being intended? Or what if it happens in a creature, created by humans (or humanoids generally), that looks and behaves in a way totally different from humans, but nevertheless has something of consciousness?

The holograms, starting with Moriarty and then going through the EMH and then the holograms in season seven, don't have the same issue as the Exocomps, but they take a similar position in terms of expanding the parameters under which sentience, inner life, etc., can be achieved. The Exocomps look totally inhuman, but are still discrete objects -- which is part of the trick, is that they can only be observed from the outside, especially since they are not, it seems, designed to communicate directly. The holograms are definitely designed to communicate directly, and are indeed human-looking. Yay. But they have no discrete body (their body is just projections of light), and no "brain" -- their processor is distributed throughout the ship's computer. And that's weird and funky and does that really make sense?

I guess if I'm willing to accept Data as sentient, I'm not sure why the EMH would be different. In his case, the "human" elements of the story are something similar to Data's story, though the perspective is slightly different. There are some pieces of info about slavery, human rights, and the difficult process of coming to terms with what, generations hence, will seem obvious: that people who *seem* remarkably different are actually quite similar, internally. In tech, it's the movement from hardware to software. And really, I'm not sure why the difference between hardware and software would be what determines that Data is sentient and the Doctor isn't. That the Doctor's program can be more easily altered is partly because thinking about the human brain more as software than hardware enables us to think of the human brain as more malleable without having to do physical changes -- the modifications to the Doctor in some ways are closer to behavioural conditioning, whereas the modifications to Data are more like radically altering his thought processes with drugs. But it is also because the Doctor starts off as presumptively Starfleet/Voyager *property*, in a way that Louvois ruled that Data wasn't in "The Measure of a Man." Data was Soong's creation, and it's only Soong, Juliana and Lore who feel like Data is actually their *family*, their child/little brother; Soong and Lore manipulate Data, and Soong and Data deactivate Lore. The Doctor is actually the property of Voyager, on some level, and that makes Janeway (and briefly Ransom, when he's hijacked) more directly his "parent" than Picard is of Data. 
Some of the stories for both Data and the Doctor are about a coming of age, but though Data's initial *personality* is in some ways more naive than the Doctor's, since the Doctor's personality is based off Zimmerman, the Doctor is the one who effectively grows from infancy to adulthood over the course of the series, with Data entering the stage as something like an adolescent-orphan whose mostly-offscreen family still hold greater sway over him than one would expect. I think the difference between the Doctor being modifiable by the crew and Data's being modifiable mostly only by the Soong family is because the Voyager crew *are* in a sense the Doctor's family, if his adoptive family (with absentee father Zimmerman only making a few appearances, and only one in Voyager as himself rather than a hologram of himself).

All that said, the fact that the Doctor (and Data) state that they are sentient or alive is no proof that they are. Any C++ "Hello world" program that adds an extra line, "I am a sentient program!" *claims* to be sentient. Data and the Doctor, I think, pass the Turing test, but that is more a guarantee that their ability to mimic human (or sentient) behaviour is really exceptionally good than any guarantee that they are actually alive, or have internal life. The fact that Data and the Doc are compelling characters is no more of an argument, either. I mean, the other big function of holodeck stories is to talk about fiction and what fiction does, in the shows; I think the Fair Haven storyarc, such as it is, is at least partly about the way fictional characters FIGURATIVELY come alive and become meaningful to us, RATHER THAN actually saying that fictional characters REALLY come alive. In our real-world frame, Data and the Doctor are fictional creations, and a smart enough, complex enough set of code could create characters that are more complex, and behave in a number of different *fully determined* ways than fictional characters in games or choose-your-own-adventure stories or whatever can today, and it doesn't necessarily mean they're anything other than fictional creations.
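To make that point concrete, here is a toy sketch (in Python rather than C++, purely my own illustration) of a program whose "claim" of sentience is just one more line of output:

```python
# Toy illustration: a program's *claim* of sentience is just output.
# Nothing about returning or printing this string implies inner life.

def self_report() -> str:
    # The "claim" is a hard-coded string, fully determined by its author.
    return "I am a sentient program!"

print("Hello, world!")
print(self_report())
```

The program asserts its own sentience, but both the claim and the behaviour producing it are fully determined by whoever wrote those two lines; it establishes nothing about an inner life.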

So how do we determine if a being is really real, and has internal life, or is just really good at imitating it? Well, um...I don't know. I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way. Then we'll have an answer. For now, I think these stories have a certain amount of appeal both as a warning to take the rights of created life forms which may potentially have this inner life/sentience seriously, and to help us think critically about what it is about us that makes us special, which is probably not ultimately a matter of the exact type of hardware that we are running, though it may turn out that it's not entirely separate from it.
Fri, Oct 31, 2014, 6:30pm (UTC -6)
Holy crap. I'll come back when I have about an hour to digest :-)
Fri, Oct 31, 2014, 10:39pm (UTC -6)
@Yanks - What I was saying in regards to hardware/software is that there is no hardware program that cannot be recreated in software. If Voyager's computer were powerful enough, Data's neural net could be run on an emulator in the same way that you can run NES ROMs without the original chip set. Whether Voyager's computer is powerful enough to run a positronic net emulator is for each of you to decide, but if Data's brain can exist as hardware it can exist as software. This is not a debatable point in computing (to my knowledge).

You can feel free to debate if it can run on Voyager's hardware of course (and therefore if the Doctor could never be as complex as Data). But the Enterprise S will most likely run a Data emulator. This sounds like it has some synergy with what some of you are saying but like Yanks I'll need to come back later and digest, except for me it'll be after bedtime!
Fri, Oct 31, 2014, 10:42pm (UTC -6)
And I was stating this in response to "I think it's a little more than that. Doc can be rewritten at a whim. Data can not. When "Data" was downloaded into B4, he reverted back to essentially a child. Doc on the other hand just pops himself into whatever computer or 29th century mobile emitter he can find."

The EMH can only be transferred more easily because he's a software program and Data is a hardware program. That's not as big a difference as you might think. Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program.
Fri, Oct 31, 2014, 10:53pm (UTC -6)
Since I'm still reading and not sleeping I will just say that reading Andy's post he also seems to be giving preferential treatment to Data based on the fact that he is a hardware program.

If I look at an http:// in Data's head and compare it to a line of C++ that says A&&B, I personally find no difference except for the fact that I can change the line of code more easily than I could rewire the chip. But I don't think we should judge a program's possible sentience on how easy it is to rewrite the program, should we? William mentions the Turing test and I think it's apt. If I could make a Data emulator, it'd be able to pass or fail any sentience test that Data could pass or fail. But it would be easier to rewrite it.

I agree with William.
Andy's Friend
Sat, Nov 1, 2014, 1:14pm (UTC -6)

You're absolutely right about the body, of course, but I'm afraid you slightly misunderstood. It’s my fault, though: reading what I wrote again, I see that there are two instances where one can easily misunderstand me.

So I’ll explain: I wasn't talking about the body as the body, but rather as some physical, tangible medium necessary for the thought processes that could lead to (artificial) consciousness. I was thus, when referring to the body and the brain, using a formula of redundancy, or pleonasm, saying essentially the same thing twice with two different words: ”body and brain”. In Data’s case, we know it to be his positronic brain. But what of the Great Link, for example? Is there any difference between body and brain here?

But you must forgive me; this is a literary technique much used in old documents, and something that I have researched extensively for late 15th and early 16th century sources: "Your rights and privileges; your heirs and successors; your fortune and goods", and so on and so forth. All this might mean the different things mentioned ― there is a difference between say, heirs and successors ―, but quite often such pairs would be mere rhetorical devices to say the same thing twice and thus aggrandize the person in question.

(Off-topic: To give a famous example, Columbus was made ”Viceroy and Governor-General” by royal decree in 1492. But at this time this was the exact same thing, with a mere difference of semantics: the terms emphasize different qualities of the same: "Viceroy" emphasizes the representative role ― he’s a representative of the Monarch ―, while "Governor-General" emphasizes the active, administrative role: he’s a general governor. But at this time in history in Spain, there was no difference: the Viceroy also governed, and the Governor-General also represented the Monarch. It's a bit like "Minister" and "Secretary of State", or ”Marshal” and ”Constable”, at various times in history and in various countries. In the Crown of Aragon, for example, at this time the word "Vicerex" or viceroy is used in documents in Latin, but in documents in Catalan the word most often used was "Lloctinent" or lieutenant ― for the exact same individual, in the same capacity, as governor of say, Sardinia. The idea of exact nomenclatures is a fairly modern one, a result of the more advanced bureaucracies and standardization emerging especially in the 18th century. So, my dear and beloved friend, that was the explanation of my "body and brain" ;)

Now to answer: as I indicated, "It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever."

In the example I gave, I don't know where the Great Link's body ends and the brain begins, or even if there is such a differentiation (I'm almost certain there isn't, when you're dealing with beings that can exist as fog). But there is something... ― something else, beyond bits and bytes and strings of command that clearly must mean something for the possibility of thought process and consciousness.

Or, to respond directly in your words: the Doctor has no MIND. Please read my next answer also, to William B.
Andy's Friend
Sat, Nov 1, 2014, 1:43pm (UTC -6)
@William B, thanks for your reply, and especially for making me see things in my argumentation I hadn’t thought of myself! :D

@Robert, thanks for the emulator theory. I’m not quite sure that I agree with you: I believe you fail to see an important difference. But we’ll get there :)

This is of course one huge question to try and begin to consider. It is also a very obvious one; there’s a reason ”The Measure of a Man” was written as early as Season 2.

First of all, a note on the Turing test several of you have mentioned: I agree with William, and would be more categorical than him: it is utterly irrelevant for our purposes, most importantly because simulation really is just that. We must let Turing alone with the answers to the questions he asked, and search deeper for answers to our own questions.

Second, a clarification: I’m discussing this mostly as sci-fi, and not as hard science. But it is impossible for me to ignore at least some hard science. The problem with this is that while any Trek writer can simply write that the Doctor is sentient, and explain it with a minimum of ludicrous technobabble, it is quite simply inconsistent with what the majority of experts on artificial consciousness today believe. But...

...on the other hand, the positronic brain I use to argue Data’s artificial consciousness is, in itself, in a way also a piece of that same technobabble. None of us knows what it does; nobody does. However, it is not as implausible a piece of technobabble as say, warp speed, or transporter technology. It may very well be possible one day to create an artificial brain of sorts. And in fact, it is a fundamental piece in what most believe to be necessary to answer our question. I therefore would like to state these fundamental First and Second Sentences:

1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.

2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.

This is of the highest order of importance, because ― unlike what I now see Robert seems to believe ― I think the question of ”sentience”, or artificial consciousness, has little to do with hardware vs software as he puts it, as we shall see.

Now, I’d like to clarify nomenclature and definitions. Feel free to disagree or elaborate:

― By *brain* I mean any actual (human) or fictional (say, the Great Link) living species’ brain, or thought process mechanism(s) that perform functions analogous to those of the human brain, and allow for *non-linear*, cognitive processes. I’m perfectly prepared to accept intelligent, sentient, extra-terrestrial life that is non-humanoid; in fact, I would be very surprised if most were humanoid, and in that respect I am inclined to agree with Stanisław Lem in “Solaris”. I am perfectly ready to accept radially symmetric lifeforms, or asymmetric ones, with all the implications for their nervous systems, or even more bizarre and exotic lifeforms, such as the Great Link or Solaris’ ocean. I believe, though, that all self-conscious lifeforms must have some sort of brain, nervous system ― not necessarily a central nervous system ―, or analogues (some highly sophisticated nerve net, for instance) that in some manner or other allows for non-linear cognitive processes. Because non-linearity is what thought, and consciousness ― sentience as we talk about it ― is about.

― By *artificial brain* I don’t mean a brain that faithfully reproduces human neuroanatomy, or human thought processes. I merely mean any artificially created brain of sorts or brain analogue which somehow (insert your favourite Treknobabble here ― although serious, actual research is being conducted in this field) can produce *non-linear* cognitive processes.

― By *non-linear* cognitive process I mean not the strict sense of non-linear computational mechanics, but rather, that ineffable quality of abstract human thought process which is the opposite of *linear* computational process ― which in turn is the simple execution of strings of command, which necessarily must follow as specified by any specific program or subroutine. Non-linear processes are both the amazing strength and the weakness of the human mind. Unlike linear, slavish processes of computers and programs, the incredible wonder of the brain as defined is its capacity to perform that proverbial “quantum leap”, the inexplicable abstractions, non-linear processes that result in our thoughts, both conscious and subconscious ― and in fact, in us having a mind at all, unlike computers and computer programs. Sadly, it is also that non-linear, erratic and unpredictable nature of brain processes that can cause serious psychological disturbances, madness, or even loss of consciousness of self.

These differences are at the core of the issue, and here I would perhaps seem to agree with William, when he writes: ”I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way.”

I agree and again, I would go a bit further: I am actually willing to go so far as to admit the possibility of us one day being able to create an *artificial brain* which can reproduce, to a certain degree, some or many of those processes ― and perhaps even others our own human brains are incapable of. Likewise, I am prepared to admit the possibility of sentient life in other forms than carbon-based humanoid. It is as reflections of those possibilities that I see the Founders, and any number of other such outlandish species in Star Trek. And it is as such that I view Data’s positronic brain ― something that somehow allows him many of the same possibilities of conscious thought that we have, and perhaps even others, as yet undiscovered by him. Again, I would even go so far as not only to admit, but to suppose the very real possibility of two identical artificial brains ― say, two copies of Data’s positronic brain ― *not* behaving exactly alike in spite of being exact copies of each other, in a manner similar to (but of course not identical to) how identical twins’ brains will function differently. This analogy is far from perfect, but it is perhaps the easiest one to understand: thoughts and consciousness are more than the sum of the physical, biological brain and DNA. Artificial consciousness must also be more than the sum of an artificial brain and the programming. As such, I, like the researchers whose views I am merely reflecting, not only expect, but require an artificial brain that in this aspect truly equals the fundamental behaviour of sentient biological brains.

It is here, I believe, that Robert’s last thoughts and mine seem to diverge. Robert seems to believe that Data’s positronic brain is merely a highly advanced computer. If this is the case, I wholly agree with his final assessment.

If not, however, if Data’s brain is a true *artificial brain* as defined, what Robert proposes is wholly unacceptable.


Data’s brain is never established as a true artificial brain. But it is never established as merely a highly advanced computer, either. It is once stated, for instance, that his brain is “rated at...” But this means nothing. This is a mere attempt at assessing certain faculties of his capacities, while wholly ignoring others that may as yet be underdeveloped or unexplored. It is in a way similar to saying of a chess player that he is rated at 2450 Elo: it tells you precious little about the man’s capacities outside the realm of chess.

We must therefore clearly understand that brains, including artificial brains, and computers are not the same and don’t work the same way. It is not a matter of orders of magnitude. It is not a matter of speed, or capacity. It is not even a matter of apples and oranges.

I therefore would like to state my Third, Fourth, Fifth and Sixth Sentences:

3. ― A BRAIN IS NOT A COMPUTER, and vice-versa.


5. ― A COMPUTER IS INCAPABLE OF THOUGHT PROCESSES. It merely executes programs.

6. ― A PROGRAM IS INCAPABLE OF THOUGHT PROCESSES. It merely consists of linear strings of commands.

Here is finally the matter explained: a computer is merely a toaster, a vacuum-cleaner, a dish-washer: it always performs the same routine function. That function is to run various computer programs. And the computer programs ― any program ― will always be incapable of exceeding themselves. And the combination computer+program is incapable of non-linear, abstract thought process.

To simplify: a computer program must *always* obey its programming, EVEN IN CASES WHERE THE PROGRAMMING FORCES RANDOMIZATION. In such cases, random events ― actions and decisions, for instance ― are still merely a part of that program, within the chosen parameters. They are therefore only apparently random, and only within the specifications of the program or subroutine. An extremely simplified example:

Imagine that in a given situation involving Subroutine 47 and an A/B Action choice, the programming requires that the EMH must:

― 35% of the cases: wait 3-6 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 10-15 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 10% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose RANDOMLY.
― 5% of the cases: wait 60-90 seconds as if considering Actions A and B, then choose RANDOMLY.
― 6% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 10-15 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 3-6 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47
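This decision table can be sketched as a weighted random choice. All the weights, delay ranges, and "Subroutine 47" strategies below are taken straight from the commenter's invented example; nothing here is canonical.

```python
import random

# The hypothetical "Subroutine 47" decision table from the comment above:
# (probability, (min_delay_s, max_delay_s), strategy)
DECISION_TABLE = [
    (0.35, (3, 6),   "best"),    # quick, correct choice
    (0.20, (10, 15), "best"),
    (0.20, (20, 60), "best"),
    (0.10, (20, 60), "random"),  # apparent indecision
    (0.05, (60, 90), "random"),
    (0.06, (20, 60), "worst"),   # occasional mistakes
    (0.02, (10, 15), "worst"),
    (0.02, (3, 6),   "worst"),
]

def simulate_decision(rng=random):
    """Pick a row by its weight, then a delay inside that row's range."""
    weights = [row[0] for row in DECISION_TABLE]
    _, (lo, hi), strategy = rng.choices(DECISION_TABLE, weights=weights)[0]
    return rng.uniform(lo, hi), strategy

# The tell-tale gaps: across any number of runs, a delay of 7-9 s or
# 16-19 s can never occur, which is exactly how the "professional
# observer" in the comment unmasks the program.
delays = [simulate_decision()[0] for _ in range(10_000)]
assert not any(7 < d < 9 or 16 < d < 19 for d in delays)
```

The gaps exist because the table's delay ranges simply never cover those intervals, so no amount of surface randomness hides the underlying structure — which is the commenter's point about programmed versus genuine behaviour.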

In a situation such as this simple one, any casual long term observer would conclude that the faster the subject/EMH took a decision, the more likely it would be the right one ― something observed in most good professionals. Every now and then, however, even a quick decision might prove to be wrong. Inversely, sometimes the subject might exhibit extreme indecision, considering his options for up to a minute and a half, and then having even chances of success.

A professional observer with the proper means at his disposal, however, and enough time to run a few hundred tests, would notice that this subject never, ever spent 7-9 seconds, or 16-19 seconds before reaching a decision. A careful analysis of the response times given here would show results that could not possibly be random coincidences. If it were “Blade Runner”, Deckard would have no trouble whatsoever in identifying this subject as a Replicant.

We may of course modify the random permutations of sequences, and adjust probabilities and the response times as we wish, in order to give the most accurate impression of realism compared to the specific subroutine: for a doctor, one would expect medical subroutines to be much faster and much more successful than poker and chess subroutines, for example. Someone with no experience in cooking might injure himself in the kitchen; but even professional chefs cut themselves rather often. And of course, no one is an expert at everything. A sufficiently sophisticated program would reflect all such variables, and perfectly mimic the chosen human behaviour. But again, the Turing test is irrelevant:

All this is varying degrees of randomization. None of this is conscious thought: it is merely strings of command to give the impression of doubt, hesitation, failure and success ― in short, to give the impression of humanity.

But it’s all fake. It’s all programmed responses to stimuli.

Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer incapable of any thought processes.

The only way to partially bypass this problem is to introduce FORCED CHAOS: TO RANDOMIZE RANDOMIZATION altogether.

It is highly unlikely, however, that any computer program could long survive operating a true forced chaos generator at the macro-level, as opposed to forced chaos limited to certain, very specific subroutines. One could have forced chaos make the subject hesitate for forty minutes, or two hours, or forever and forfeit the game in a simple position in a game of chess, for example; but a forced chaos decision prompting the doctor to kill his patient with a scalpel would have more serious consequences. And many, many simpler forced chaos outcomes might also have very serious consequences. And what if the forced chaos generator had power over the autoprogramming function? How long would it take before catastrophic failure and cascading systems failure would occur?

And finally, but also importantly: even if the program could somehow survive operating a true forced chaos generator, thus operating extremely erratically ― which is to say, extremely dangerously, to itself and any systems and people that might depend on it ―, it would still merely be obeying its forced chaos generator ― that is, another piece of strings of command.

So we’re back where we started.

So, to repeat one of my first phrases from a previous comment: “It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.” And the matter is, that the EMH simply *does not think*. The program simulates realistic responses, based on programmed responses to stimuli. That’s all. This is not thought process. This is not having a mind.

So it follows that I don’t agree when Peremensoe writes what Yanks also previously has commented on: "So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."

The point is that “Doc” doesn’t have a “mind”. There is therefore a deep philosophical divide here. The kind of “mind” the EMH has is one you can simply print on paper ― line by line of programming. That’s all it is. You could, quite literally, print every single line of the EMH programming, and thus literally read everything that it is, and learn and be able to calculate its exact probabilities of response in any given, imaginable situation. You can, quite literally, read the EMH like a book.

Not so with any human. And not so, I argue, with Data. And this is where I see that Robert, in my opinion, misunderstands the question. Robert writes: “Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program”. This only makes sense if you disregard his artificial brain, and the relationship between his original programming and the way it has interacted with, and continues to interact with that brain, ever expanding what Data is ― albeit rather slowly, perhaps as a result of his positronic brain requiring much longer timeframes, but also being able to last much longer than biological brains.

So I’ll say it again: I believe that Data is more than his programming, and his brain. His brain is not just some very advanced computer. Somehow, his data ― sensations and memories ― must be stored and processed in ways we don’t fully understand in that positronic brain of his ― much like the Great Link’s thoughts and memories are stored and processed in ways unknown to us, in that gelatinous state of theirs.

I therefore doubt that Data’s program and brain as such can be extracted and emulated with any satisfactory results, any more than any human’s can. Robert would like to convert Data’s positronic brain into software. But who knows if that is any more possible than converting a human brain into software? Who knows whether Data’s brain, much like our own, can generate inscrutable, inexplicable thought processes that surpass its construction?

So while the EMH *program* runs on some *computer*, Data’s *thoughts* somehow flow in his *artificial brain*. This is thus not a matter of location: it’s a matter of essence. We are discussing wholly different things: a program in a computer, and thoughts in a brain. It just doesn’t get much more different. In my opinion, we are qualitatively worlds apart.
Sun, Nov 2, 2014, 12:28am (UTC -6)
Andy's Friend, I addressed the "body" part first because I thought that was the simplest point to knock out, upon which we would all agree. To wit: we all understand, I believe, that Data's consciousness is *embodied* in his positronic brain. That's his mind; that's where his meaningful personhood actually resides.

OK. So the core question for the EMH/Doctor, then, is: how and why is the embodiment of *his* mind, his personhood, in the malleable (NOT fixed-line-programming) matrix of the Voyager ship "computer" fundamentally different?
Sun, Nov 2, 2014, 12:46am (UTC -6)
Perhaps I should note that I perceive no fundamental reason why a human mind could *not* be transcribed in a form of self-malleable "software"--the execution of which would manifest thoughts, and experience consciousness, of qualia equivalent to the biological brain's. Given sufficient as-yet-undiscovered scientific knowledge and technical ability, of course.

Unless, of course, one subscribes to a supernatural/religious view of what is required for the breath of sentience into a being.
Mon, Nov 3, 2014, 9:14am (UTC -6)
@Andy's Friend - I see your point, but yes I guess we differ on what makes a brain a brain. "Brain, brain, what is brain?" (sorry, I had to)

A neuron takes an electrical input and fires an electrical output. I tend to think this is all Data's brain does (except with mechanical hardware instead of biological hardware). So regardless of if the EMH is as sentient as Data, I actually think we can eventually make a software brain and a mechanical one.
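Robert's neuron description can be sketched as the classic threshold unit: weighted inputs summed against a threshold, firing an output. The weights and threshold below are arbitrary illustration values, not a claim about real neurons or Data's hardware.

```python
def artificial_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Whether this runs on dedicated silicon or as software is immaterial to
# its behaviour: the input-to-output mapping is identical either way,
# which is the substance of the "software brain" argument.
print(artificial_neuron([1, 0, 1], [0.5, 0.9, 0.4], 0.8))  # fires: 0.5 + 0.4 >= 0.8
```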

Now if we'd be able to upload to it? I have no idea!
Fri, Nov 7, 2014, 12:25pm (UTC -6)
Andy's Friend,

Wow, what a great discussion. Not sure what to say other than I agree. You've made the case that holograms aren't sentient.

(hope you are on my side in Enterprise too :-) )

Looking forward to reading more of your posts.
Fri, Nov 7, 2014, 12:31pm (UTC -6)
@ Andy's Friend :

There's obviously a lot to be said in response to you--which I'll get to in another moment, but for now it's worth pointing out what I see as a fundamental flaw in your way of thinking :

"2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.

This is of the highest order of importance,"

There is a massive dissonance between laying a bare assumption on the table as though it were a logical axiom (needing no explanation) and then claiming that this assumption is at the very heart of your argument. Your premise needs to be proved FIRST, before you spin out your arguments.

As I said, there's much more to say in reply here, but I'll get to that later.
Tue, Nov 11, 2014, 11:45am (UTC -6)
Okay, Andy's Friend, let's get nitty gritty :

"1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.

2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that."

Beyond my previous point, these are arbitrary distinctions; one can easily say that the EMH has an artificial brain, either housed within the ship or a technology nearly a millennium more advanced than ours (the mobile emitter) - OR - one can say that the EMH has an artificial brain that is non-corporeal, "cloud-based" to use a contemporary model - OR - one can say that Data is also a computer programme. We have seen Data's consciousness (for an anthropomorphic choice of words) subverted several times, but his body remains. True, in MoM, Data is concerned that his consciousness will be corrupted or even lost if removed from its Positronic housing, but that is only because Soong's technology is not yet fully understood by Starfleet, and thus mucking about with it would be dangerous. The EMH on the other hand was designed to be compatible with Starfleet technology.

"But it’s all fake. It’s all programmed responses to stimuli.

Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer unable of any thought processes."

You are making a big assumption here (and throughout your post) that the human mind somehow works differently, but, for all your ample detailing of computer logic, you fail to provide any evidence that human minds work differently.

"You can, quite literally, read the EMH like a book.

Not so with any human. "

Why not? I fail to see any evidence that a human mind cannot be reduced to a compendium of data (or Data :) )--true it would be extremely long and complex--think about the VOY episode "Imperfection" where 7of9's biological functions can be regulated by the ship's computer for several seconds, but not longer (she needed her advanced Borg-tech device for that). Point being, the Borg developed technology which was powerful enough to regulate the functions (even neural functions) of a biological being, thus reducing that being's consciousness to a series of numbers. This is equally fascinating and horrifying, but it provides the context for my assertion that your separation of Data's mind and the Doc's is an arbitrary one of your own choosing, not empirically derived.

"I believe that Data is more than his programming, and his brain."

That is an article of faith. I have no wish to rob you of that belief, but it alone does not constitute an argument for treating the Doc differently from Data, or either differently from humans.

I could say to you "I believe that a woman is less than a man," and cite various differences in biology, as well as discuss the "essences" of either sex from the perspective of whatever belief-system led me to that feeling. I sure as hell would hope that an advanced society would look at my statement, pat me on the head, send me away, and in no way subjugate a single woman on the basis of my belief.
Wed, Nov 12, 2014, 10:59am (UTC -6)

I'm not with you here.

"Why not? I fail to see any evidence that a human mind cannot be reduced to a compendium of data (or Data :) )"

Provide one instance where this successfully happened in trek.

Also, a main reason I see Data as sentient is that he is unique. Even though there is a Lore and a "B4", they are not Data. You just can't take "Data" and download him into an iPad or computer (or mobile emitter) like you can the EMH. I also don't see enough information to make the assumption that the programming in Data and the EMH are remotely the same. You had to modify Data physically to change him (emotion chip). Doc is always just a couple keystrokes away from being someone totally different.

In my view there is plenty of data that supports that these two are totally different even though they can both show similarities when dealing with situations, etc.
Wed, Nov 12, 2014, 11:24am (UTC -6)
@Yanks :

"Provide one instance where this successfully happened in trek."

1. TNG "The Schizoid Man" (Ira Graves)
2. TNG "The Nth Degree" (Lt Barclay)
3. DS9 "Move Along Home" (Sisko, Kira, Dax, Bashir)

Besides this, there is no need to cite instances, because I said I see no evidence that it is not possible to do.

"Also, a main reason I see data as sentient is that he is unique."

Okay, well setting aside the fact that uniqueness is not a part of any definition of sentience of which I'm aware, the EMH, as soon as it begins processing input, is also unique. I think some of us are getting caught up in the notion of physicality (Data's "brain" could have just as easily been stored in his ass as his head--wouldn't that make for some amusing scenes with Geordi?)

"Doc is always just a couple keystrokes away from being someone totally different."

Not any more or less than you are a spine-splitting car-crash away from being someone totally different.
Wed, Nov 12, 2014, 2:08pm (UTC -6)
To further side with Elliott here, in "Our Man Bashir" the minds of the away team were stored in the station computer. They took up so much space that it required wiping the computer, but the crew were still "them" when it was fixed.

"ODO: So if their physical bodies are stored here, where are their brain patterns?
QUARK: Everywhere else. Their brain patterns are so large that they're taking up every bit of computer memory on the station. Replicator memory, weapons, life supports."

Granted, it was all technobabble to get a fun romp where O'Brien wears an eye patch and Kira gets to speak in a bad Russian accent... but still, I think Trek lore and canon as a whole side with Elliott.
Wed, Nov 12, 2014, 6:39pm (UTC -6)
I've been sided :-)
Mon, Dec 22, 2014, 12:34am (UTC -6)
holy smokes, long comments on this one :-)

I didn't like it. Silly how this tentacle light being drags them in. Once again, poor stupid Harry.

The pacing was boring. Zzzz
Sat, Jan 3, 2015, 6:33pm (UTC -6)
There is no point arguing about brain patterns in a computer. Biological brains don't work that way. We know that already (yes, we know it as a fact). And the logistics of it, not to mention the ethical questions that would bring up even if it were possible, are enormous.

This episode works better technically if you are very naive to biology and science, or if you are able to ignore the silliness. I am not able to do that.

And I know I say this all the time, but as always, Trek gets entertainment right, but does so lazily. It's yet another mediocre "broken holodeck" episode.

Sat, Jan 3, 2015, 9:38pm (UTC -6)
@DLPB - I actually agree with you here (shocking, I know). But the point is that in Trek canon you CAN store brain patterns in the computer. It's a known fact. It being stupid was not what we were discussing.
Sat, Jan 3, 2015, 10:02pm (UTC -6)
Although I will point out that while I don't think the episode I cited, "Our Man Bashir", would work (i.e., that you couldn't transfer back and forth), I'm not convinced that you couldn't transfer some piece of yourself into a machine.
Sun, Jan 4, 2015, 7:36am (UTC -6)
I am not sure why you think that excuses bad writing and bad science.
Wed, Aug 5, 2015, 12:43pm (UTC -6)

In 'The Schizoid Man', Graves downloads himself into Data. Once Graves has deposited himself into the ship's computer, the conscious human element was lost. This illustrates my point that this has not successfully been done and that Data is unique. It's obviously more than just a "compendium of data," as the "human element" was lost in the computer and not in Data.

In 'The Nth Degree', Barclay, under alien influence, acts as the computer. This does nothing to further your argument.

'Move Along Home' - what??? Do I waste time researching this one too?

Data and the EMH are different.

So, I have been vindicated by doing a little research instead of just blurting out "evidence".


I like this episode just fine, as I do just about all EMH-centric episodes. This one certainly has its cheesy moments.

Boy, Marjorie Monaghan sure does have a unique voice. I immediately remembered her from Babylon 5. I was reading up on her and found out she was considered when casting T'Pol. Also interesting that she hasn't done anything in entertainment since 2005.

I'll go 3 stars.

Ken, I can see you and I will have many "spirited" discussions when I get to Enterprise.
Diamond Dave
Fri, Dec 11, 2015, 3:01pm (UTC -6)
Bit of a misfire, this, combining the worst of TNG's holodeck-malfunction and unwitting-interaction-with-a-sentient-alien-species tropes. It does, however, feature the Doctor heavily, which is definitely a plus, and that storyline has merit.

Otherwise, it's surprisingly talky for what should be an action adventure, and doesn't really hit home. 2 stars.
Sat, Jan 23, 2016, 9:21pm (UTC -6)
Thanks for this and all your other reviews, Jammer. Enjoy reading and sometimes re-reading them and the comments.
Mon, Aug 15, 2016, 2:42pm (UTC -6)
Good showcase for the holographic doctor. A little hokey but quite enjoyable.
Tue, Aug 16, 2016, 7:11pm (UTC -6)
Ironic that the most human and complex character on the show is a hologram.
Trek fan
Fri, Oct 14, 2016, 2:26am (UTC -6)
I have to disagree with Jammer's review; to call this uneven episode "good" due to its character study elements and give it 3/4 stars feels excessive. I would give it 2 1/2 stars, as it reminds me more strongly of one of those middling TNG episodes where a character experienced significant growth -- i.e. Data on the holodeck -- in the middle of an incredibly dull technobabble danger story. Honestly, most of this episode's running time consists of boring filler material, with only a handful of doctor scenes to lift the tedium with a glimmer of something better, and that glimmer isn't even terribly bright.

The problems with the episode are legion. First of all, the doctor doesn't even show up until the plot has nearly bored us to death, and he splits screen time with the photonic energy stuff for the remainder of the show. His scenes work okay overall, but they comprise a very small fraction of the running time, as the story devotes endless minutes of dead screen time to technobabble nonsense surrounding the obvious "misunderstood aliens" trope that we viewers figure out way before the characters. Secondly, Picardo actually is not very good in this episode, as his performance (especially compared to "Tinker Tenor" and later Doctor episodes) feels curiously low-energy and flat, signaling some uncertainty about how to play his scenes. Thirdly, Janeway's closing speech to the doctor in sickbay at episode's end feels pedantic, summarizing the story's events with a "Dr. Phil's final thought" vibe that feels hokey even by Voyager standards.

As we saw right away in the pilot episode, Janeway is given to making grand speeches to her crew, but not all of these speeches work well. The final speech in this episode, far from being one of her sharpest, comes across as simultaneously condescending and forgettable. Seriously, the writers need to stop writing these moralizing "what we learned from this" screeds for Janeway to deliver at the end of each episode, as they feel a bit forced at times. There's no need to force feed obvious lessons to viewers in this way.

In the end, while the doctor's first away mission to the holodeck is fitfully amusing and enlightening, there's just not quite enough of this "good material" to atone for the other 65% of the episode that is pedestrian at best.
J.B. Nicholson
Wed, Jan 4, 2017, 9:07pm (UTC -6)
It's not clear to me either precisely what the rules of sentience are in TNG/DS9/VOY Star Trek, despite the importance sentience carries (sentience is a requirement for being given rights and responsibilities in the Federation). It's not clear to me that some computer-driven objects deemed sentient genuinely deserve rights when other computer-driven objects do not. It strikes me as a definitional problem where the definition is strategically set up so that some things qualify and others definitionally cannot. Put another way, it seems to me that the shows simply dictate that some are sentient (Cmdr. Data in that highly overrated TNG episode "Measure of a Man", TNG's Moriarty, and Voyager's EMH, which is eventually given the respect of any other crewmember based initially on Kes' declaration in season 1 that it should be so) and some are said not to be (as TNG's Cmdr. Maddox says of ship computers, which cannot refuse a refit).

I get the impression the real difference comes down to physical appearance. Ultimately the argument is about flattering the humanoids who make the definitions; the more like a humanoid the sophisticated computer appears to be, the more likely it will be deemed sentient.

Copyright © 1994-2017 Jamahl Epsicokhan. All rights reserved.