Star Trek: The Next Generation

“Elementary, Dear Data”

3 stars.

Air date: 12/5/1988
Written by Brian Alan Lane
Directed by Robert Bowman

Review Text

Dr. Pulaski, ever the Bones clone looking for a Bones/Spock dynamic, challenges Data to an exercise in human improvisation: solve a Sherlock Holmes-style mystery that was not covered in the original source material. Is he capable of human insight beyond the Boolean logic of computer hardware? Geordi instructs the holodeck computer to create an original mystery with an adversary capable of defeating Data in a duel of wits.

Again we venture into the world of the period costume piece, a la first season's "The Big Goodbye," and like that episode, this one takes its time getting up to speed. I could've done with a little bit less of the Sherlock Holmes material and more of the sci-fi stuff. I think the story also makes a mountain of a molehill where Geordi's "slip of the tongue" is concerned. (Who cares if he instructed the computer to create an adversary that could "beat Data" as opposed to the fictional Holmes? The computer's sentient capability is the issue, not whether misspeaking one word can, or even does, cause it.)

Fortunately, the destination of "Elementary, Dear Data" is well worth the wait, and builds on the one moment of inspiration that "The Big Goodbye" had going for it: the idea that a computer program could become self-aware and grow beyond what it was designed to do. In this case, the intellect of Professor Moriarty (Daniel Davis) grows beyond the holodeck's parameters and is able to witness and participate in events outside its programming. The scene where he calls for the arch is an intriguing moment: We find ourselves asking, what does this mean? When he eventually is able to tie into the Enterprise's computer system and start shaking the ship, he gets Picard's attention.

What I like about this episode is its TNG sensibility. I could see Star Trek today using this as a gimmick solely for an action plot, but in 1988, the story exhibits a genuine curiosity about who Moriarty is now that he knows he's not part of the world he was created for. Picard and Moriarty have an exchange of dialog that's also an exchange of ideas, and they reach a peaceful resolution. It says a lot that Moriarty is willing to put his fate entirely in the hands of someone who could simply order his destruction in the interests of safety. But TNG was really about seeking out new forms of life, and this story highlights the series practicing what it preaches.

Previous episode: Where Silence Has Lease
Next episode: The Outrageous Okona


Comment Section

    Just watched Elementary Dear Data on CBS action. The opening to this is horrendous, it really is. Geordi & Data discuss a model ship 'wind & sail!' and then the holodeck. It was more Sesame Street than Baker St. Embarrassing.

    I'm going to agree with both Jammer and Paul here. The beginning is the weakest link and the reason this ep deserves 3 or maybe 2 1/2 stars.

    But I actually thought Geordi's "slip of the tongue" plot was a real nice touch. It was the first time (or at least the first time I noticed it) that something said by a character at the beginning ended up being important much later.

    The closer the episode got to its conclusion, the better it became. Moriarty was an interesting guy and, for once, the guest actor didn't suck!

    Unlike Jammer and the other commenters, I enjoyed the opening to the episode -- well, maybe not the Geordi model ship scene. But I very much like the battle of wills between Pulaski and Geordi & Data over whether or not Data can use deductive logic combined with inspiration or is merely a machine. In fact, this part of the episode is tremendously important to the second half, because in his own way Moriarty is an answer to that question -- of whether a consciousness can emerge from pure computation and technology. We see Data functioning at a higher level than Pulaski believes he can, and in order to get an opponent who can defeat Data, the computer creates another being who functions at a higher level than expected.

    In the original script (which is available in a link from the Memory Alpha wiki page for the episode), there is a development wherein Data, by deduction, realizes that Moriarty *could have* left the holodeck had he wanted to, because the piece of paper with a picture of the Enterprise did not fade away when Data took it off the holodeck. When Pulaski learns of this piece of deduction by Data, she is suitably impressed. While it's probably a good thing this piece of plot business didn't stay in -- it'd screw up future holodeck storylines in a big way, for example, and it's not such a great idea even within this episode -- I do think that losing the resolution to the Pulaski/Data plot hurts the episode. Data should have been more instrumental in resolving the episode's plotline. While the episode links Moriarty and Data's respective artificial intelligences -- Moriarty, I believe, compares himself directly to Data as computer-based life forms -- it would be a stronger episode if Data could prove himself more strongly in the second half and if the ep were clearer about the ways in which Moriarty and Data are similar phenomena.

    However, I think it's still a very strong episode, even if there is a bit of a missed opportunity. 3.5 stars from me.

    One other great thing about this episode: Geordi as Watson, Data as Holmes actually is a pretty good way to sum up their friendship -- Data the genius eccentric and Geordi his smarter-than-the-average-guy-but-relatively-speaking-everyman.

    Data's removal of the paper from the holodeck proves that Moriarty could leave the environment. I assumed this was an inconsistency, not part of a deleted plot line.

    @Moegreen

    Actually, according to the Star Trek: The Next Generation Companion by Larry Nemecek, it was an element of the original ending.

    This was the first TNG episode that I really liked. The costumes and sets are beautiful. I don't have anything against the model ship scene, though it's unrelated to the rest of the episode.

    A good episode even though it felt too much like two disconnected ones; Picard was absent in about the first half and then became the lead while Data barely spoke in the second half.

    One thing I noticed throughout this episode is the use of pieces of Bruce Broughton's _Young Sherlock Holmes_ score. As a fan of both the movie and the score, this was a nice touch.

    When Geordi and Data are in the conference room explaining the situation to the senior staff, Riker asks Geordi if there is a way to destroy the holographic images themselves. Geordi says he knows a way to fire a beam that would destroy all the holograms. Then Picard asks what about Pulaski, and Geordi admits it would also tear apart human flesh. Lol. Why did Geordi even recommend that if he knew it would kill her? It seems like he's just trying to sound smart because he knows he made a mistake.

    This episode was okay.

    Was anyone else bothered by Data's and Geordi's bad British accents?

    I enjoyed the episode, though it was hard for me to get past the "slip of the tongue" device that leads to Moriarty's sentience. The plot would have been a good opportunity to reference the Bynars and the creation of Minuet. Geordi simply could have discovered that some of the Bynars' programming had remained buried in the Enterprise's computer after all, and the computer had drawn upon those resources to fashion an improved, "real" Moriarty.

    I've tried re-watching this episode, and I just can't bring myself to like it. I really can't stand these "holodeck goes haywire" episodes, where computer generated characters made up of photons and forcefields all of the sudden became "aware" and are a threat. Ridiculous.

    Ah, the sense of horror that arises when you recognise an episode title is going to mean something set on the holodeck!

    But this one really bucks the trend. For a start, it looks great. I really enjoyed the Pulaski-Data jousting - I can see why people didn't take to the no-nonsense doctor but she introduces an interesting dynamic for Data, who by this point I'm guessing many viewers are simply seeing as another human cast member, by keeping alive the question of what level of 'humanity' Data represents.

    And then the whole concept of Moriarty achieving sentience as a requirement of the computer's need to create a program to defeat Data is an intriguing one - one handled well, as Moriarty transcends his programming as an evil character and realises his own limitations and desire to live. 3 stars.

    Let me start by saying that I LIKE this episode: model ship, bad British accents, and Pulaski included. The model ship because it provides a bit of insight into Geordi's character, and I am ever a fan of character exposition. The bad accents lent to the element of fun, and it was also fun to watch Pulaski jump up from the sofa and brush crumbs from her dress when the Captain showed up in Moriarty's laboratory. Also you can tell she is thoroughly enjoying holodeck playtime.

    What I DIDN'T like was the lackluster, meaningless ending. Why would Moriarty give up control of the ship so easily? I realize that the computer had somehow granted him consciousness, but when did conscious equal conscience? What is it about sudden awareness of himself that turned him into something OTHER than Moriarty?

    Don't get me wrong, I love Daniel Davis's portrayal of the character: he totally sells the idea of a two-dimensional construct becoming a three-dimensional, sentient being (even if only on the holodeck). But I don't get why he would give away his advantage simply because Picard says "I don't want to kill you."

    Another thing, how would he get that advantage? How is it that the Captain of the ship can't override the command of a holographic simulacrum? Surely there would be fail-safes built into the system so that someone's holodeck jaunt doesn't wind up endangering the entire ship and crew..?

    And Troi's one line in the whole thing was utter nonsense.

    Good thing I have a healthy ability to suspend disbelief (and ignore Deanna Troi), because I really do like this episode. Honest.

    I, too, disagree with Jammer on the beginning half being the weak link here. Even on the nth viewing, I was thoroughly entertained by the sense of adventure present throughout the first two acts. The Data/Geordi dynamic is as strong as ever in this series and the addition of Pulaski's character to the story works surprisingly well. The plot builds up really nicely until the introduction of Moriarty, a potentially perfect villain who is sadly extremely underused. The writers probably realized this too when they decided to make a follow-up to this one.

    Yes, suspension of disbelief is required as always. Why are you watching a show about fantastic ideas if you're not willing to do that?

    My rating: 3.5 stars

    "I enjoyed the episode, though it was hard for me to get past the "slip of the tongue" device that leads to Moriarty's sentience. The plot would have been a good opportunity to reference the Bynars and the creation of Minuet. Geordi simply could have discovered that some of the Bynars' programming had remained buried in the Enterprise's computer after all, and the computer had drawn upon those resources to fashion an improved, "real" Moriarty."

    You touch upon the biggest weakness in the episode. It makes zero sense that the Enterprise computer would be capable of just conjuring up an AI due to a slip of the tongue as surely someone would have figured this out eons ago and there would have been safeguards in place.

    The plot focuses on the hologram as if it is the source of Moriarty's consciousness, even going to the foolish proposition that he could be "destroyed" by obliterating the holographic image! No, clearly the real issue is the computer itself and why it is capable of doing something apparently unprecedented!

    And as you mention, this is the biggest missed opportunity of the episode. There was a ready-made explanation, namely that the Bynars had upgraded the computer, granting it vastly greater abilities and (for the first time) the capacity to create true AI. That should have been the first thing they discussed when the problem came to light. But in a baffling oversight / missed opportunity, they fail to mention it - leaving us with the ridiculous proposition that the Enterprise computer could always conjure AI, but... no one in the Federation tried to do it before?!

    Another thing I thought was funny in this episode: the way Data "deduces" all these conclusions instantly, with no background or context and practically zero facts, even after the new simulation was made so that it was a completely new mystery. I mean, he solves the murder of that man on the street in what, 8 seconds?

    I was just thinking, they should have Data do the Sherlock Holmes shtick all the time. Episodes like Conundrum and Cause and Effect wouldn't have lasted 90 seconds if Data was "deducing" like he did in this episode. Seriously, just put a pipe in his mouth and let him go.

    @Jason R., on the second point, Pulaski stated that she believed Data simply recognized plot points from various actual SH stories, and while I'm not totally familiar with the Holmes canon I thought that was the intent of the scene. It's not that Data is such an impossibly great investigator, but that his encyclopedic memory of Holmes stories allowed him to recognize each clue and figure out the story. I thought it was also a pretty good parody of how mystery fans react when encountering a new but formulaic mystery story, where they recognize the tropes and are sure to tell everyone, not because of real-world logic but mystery novel/play/show logic (where typically information only appears if it's a clue and there are X many red herrings, etc.).

    Jason R. "...in a baffling oversight / missed opportunity, they fail to mention it ["11001001"] - leaving us with the ridiculous proposition that the Enterprise computer could always conjure AI, but... no one in the Federation tried to do it before?!"

    "Missed opportunity" reminds me of "Booby Trap"/"Galaxy's Child" and the unexplored relationship between LaForge and the ship's computer. See, the computer wouldn't conjure AI for just anyone, but Geordi's wish is her pleasure. And he never thanked her!

    Latent robosexuality aside, the Data/Holmes mistake works as almost Asimovian logic. The computer does exactly what you tell it to, so be careful! We don't need to worry about Moriarty being more self-aware than the computer itself. He claims to be more, but that's the computer doing its role-playing.

    "@Jason R., on the second point, Pulaski stated that she believed Data simply recognized plot points from various actual SH stories, and while I'm not totally familiar with the Holmes canon I thought that was the intent of the scene. It's not that Data is such an impossibly great investigator, but that his encyclopedic memory of Holmes stories allowed him to recognize each clue and figure out the story."

    I was thinking of the scene *after* the Moriarty adversary was created. Data and LaForge are chasing after Pulaski's kidnapper and they stumble onto this completely different murder, which I guess the computer just threw in for kicks as a "side quest". Data instantly deduces that the man was a drunkard strangled by his angry wife with some beads or something. When LaForge calls him out on it (again supposing that Data was cheating by memorizing past Holmes plot devices), Data explains his deductions, indicating that he was not cheating. So he didn't solve it by memorizing former Holmes stories. It was a completely new fact pattern.

    @Grumpy,

    I kind of like the idea of the computer being basically this djinn that will fulfill your wish (so be careful what you wish for!). It's a cool idea to be sure. Had they linked this new ability with the 11001001 episode I think it would have been even cooler (and made a lot more sense!), because then there'd be this sense that the computer really is a lot more mysterious and has these weird, previously unknown capabilities.

    I didn't mind the beginning but one thing that bothered me was the way Geordi storms off because Data already knows the resolution to Sherlock mysteries. If it was something Data invited him for, that would be one thing, but this was supposed to be his gift to Data. And he doesn't even seem to take issue with it not being challenging for Data's sake, just that he himself isn't having any fun.

    The only thing that bothers me is that in season 1 during Datalore, they act like a positronic brain; one capable of consciousness and original thought, was some groundbreaking discovery. In this episode, the holodeck literally just makes one up to create Moriarty.

    To be fair, Zg, the Moriarty AI is driven by a building-sized computer core, whereas Data's positronic brain fits in his skull. Granted, no one ever speaks of Soong's achievement as a feat of miniaturization.

    I agree that this episode improves as it nears its end.
    On the minus side we have Brent Spiner's truly awful Sherlock Holmes - it is the stuff of amateur high school plays.
    The terrible acting is complemented by LeVar Burton and the guy playing Lestrade.
    At this stage I was going with the series average to date of 0 stars.

    The saving sequence comes at the denouement, with Davis brilliantly portraying a hologram transcending his programming.
    Sorry but it doesn't really save this from the scrap heap in my book although the sequel Ship in a Bottle makes this first part worthwhile.

    1.5 stars from me -but only if we can put Spiner's Holmesian hamming it up to bed forever.

    Instead of asking Mr. Computer to create a Holmes-LIKE mystery with an adversary capable of defeating Data, why not just (have Data?) wipe Data's memory clean of all knowledge of Holmes' undertakings? Then he and Geordi could've had their fun, and, afterward, it would've taken, what, thirty seconds for Data to re-capture all of the novels?

    Because, BC--that would be a different experience, for a different person. Would you want part of your brain temporarily wiped, to increase the edited-version-of-you's appreciation of, say, a movie you've already seen?

    Peremensoe: But Data has no emotions so he wouldn't / couldn't actually care. Just erase the files. That would make it fun for Geordi.

    Of course he cares. There's nothing for him to enjoy, to be interested in, no reason to engage such hobby pursuits, if he doesn't care. Data certainly 'enjoys' intellectual challenges, if maybe not in the same qualitative way as a human would.

    In Ten Forward, Pulaski accuses Data of not being able to solve mysteries because Data is incapable of having an original thought or inspiration. Nonsense. Data has already proved Pulaski wrong by all of the mysteries that he’s previously helped the Enterprise solve. Of course, Pulaski hadn’t been around in Season One to see it. (Lucky her, considering the quality of most of those episodes!)

    Before Geordi ever entered his command for the computer to create an adversary worthy of Data, Moriarty sees the arch form and already seems to be scheming. The computer responds to Moriarty's "command" for the arch, though the computer didn't seem to produce the arch as part of Geordi's wish for a "Data-worthy adversary," but simply because Moriarty requested it. Moriarty learns everything needed to disrupt the Enterprise from the computer: apparently he asks questions and the computer honestly answers. That the computer releases what has to be some classified secrets to a Data-worthy adversary for the sake of a holodeck program seems highly unlikely. Later Picard makes the point that he's going to dress as a typical man of the time, so as not to give up any additional information to Moriarty. But when Picard meets Moriarty, Picard squeals like a stuck pig. And, as noted by others, for a very worthy adversary, Moriarty seems to give up awfully quickly.

    But considering the quality of some of the previous storylines, it’s an enjoyable episode with some fun moments and banter.

    I too feel this episode got off to a slow start - it was probably 2/3 of the way through that they realized the holodeck was acting on its own. Some of this background is necessary, of course, but it would have been more interesting to see what else Moriarty could do from the holodeck.
    The actor playing Moriarty did a terrific job - he had that mischievous/almost evil look in his face.
    The ending with Picard/Moriarty was on a knife's edge for a moment until Moriarty gives in and lets Picard have control. I thought the concept of Moriarty gaining sentience (with suspension of disbelief on my part) was an interesting twist but it should have been linked to the episode with the Bynars.
    I'm a big fan of Sherlock Holmes, quite familiar with the canon and thought it was a good idea for a story to have Laforge/Data try something along those lines. This episode was entertaining with a good resolution at the end. I'd give it 3/4 stars.

    Great episode. They treated the universe of Holmes and Moriarty with respect. Data deducing an actual crime scene was interesting to see.

    Like the others mentioned, I'll never understand how they failed to link it with the Bynars. The lack of a fail safe is almost laughable in this regard.

    And why are there no user level privileges? Surely a command should be reversible by an admin or the captain.

    3 stars

    Very entertaining. I enjoyed the atmosphere and Moriarty felt like a true threat. The idea of telling the computer to create an adversary worthy of Data -- not Holmes -- was a pretty nifty one that showed how TNG could come up with fresh, inventive ideas.

    I've been on the HMS Victory - a huge, beautiful museum ship in the British town of Portsmouth - so seeing the Victory model on screen in the opening teaser was a treat. Regarding some criticisms voiced above: I think we just have to forgive the characters in seasons 1 and 2 for having no idea how the holodeck works and what its implications are. It's something the writers themselves seem to be figuring out.

    The second half of this one is so much better than the first that it's embarrassing. Between Spiner's English accent, Geordi's overblown frustration at an android misunderstanding the point of a game, and Pulaski's insulting behavior, the first half plays little better than a show trying to play at something. But once Moriarty becomes the main character, all of that is washed away and we get incredible sci-fi. I grew up as a kid with this Moriarty being the first I ever knew, and later on when I read the Conan Doyle books and saw other Holmes shows and films I thought that perhaps my always referring back to this Moriarty was just a childhood bias. But watching it again I realize that's not the case: this guy is just so good, so intelligent, and with such a nuanced understanding of his lines, that he blows all other portrayals I've seen out of the water. He's a villain to equal Holmes precisely because his goals are perhaps even more lofty: to understand everything. This Moriarty shares the life goal of Picard himself to an extent, which is I think why they ultimately come to an understanding. His grasp of reality doesn't just exceed the original program or even the computer; in fact it causes him to exceed his own desires and shift them by the end. Picard doesn't win because he convinces Moriarty of anything; Moriarty advances to the point where "villain" ceases to have any meaning any more, and the villain's goal of conquest - in the sphere of knowledge - verges towards being identical with that same goal of intellectual conquest shared by the Enterprise crew. Although it takes a sci-fi premise to get there, this Moriarty is superior to others not only because of the performance but because his intellectual needs are far higher than the mere need for wealth or power; he knows that knowledge is the ultimate power. Compared to him, the Moriartys we see in standard fare are little more than neighborhood criminals in the scope of their comprehension.
And the fact that I can even think of it in these terms shows how imaginative the episode was.

    Regarding Geordi's slip-up, I think it's a big deal only insofar as the computer seemed to be taking serious Asimov's second law (to obey humans no matter what) but in the absence of the first and zeroth laws. It would have been decent to briefly get into the dangers of letting AI loose without strict guidelines, but I guess ultimately they went more for story than for abstract principles, which works I suppose.

    @Peter G., your comments on this episode's treatment of Moriarty are excellent, as usual. I want to stick up a bit for the episode's first half, however. I'll grant your point about Spiner's accent; bad accents usually don't bother me, but it's certainly hard to justify, except that I think the hamminess with which Data plays Holmes is a deliberate choice to show that Data is not (yet) a particularly good performer, and that his enthusiasm for the character is getting the better of him. Still, the android should probably be able to do a reasonable English accent. On the other hand, I actually like the portrayal of Geordi and Pulaski. To wit:

    1. Geordi: You mention that Geordi's frustration at an android not getting the point of the game is overblown. This is true enough, in that Geordi's expectations for Data are unreasonable, and that he got angry over it is a little over-the-top. And yet I think it's really believable, because Geordi doesn't quite see Data as an android. In The Next Phase, Data indicates that Geordi was the first person to treat him as a person, and I think Geordi's willingness to throw himself all-in into a friendship with a being who seems to be incapable of truly understanding him is an unusual and sort of unique trait, and one which occasionally will have him hit a wall. Geordi, in his enthusiasm for giving his best friend a gift, more or less forgot who his friend was, in part because he, unlike everyone else, somewhat forgets who he is. Data is a wonderful being and a good friend in most respects, but at a certain point I think it would be impossible not to be somewhat vexed by him if one's expectations for his social skills and the like were not already pre-set to be low, and even then most people find themselves exasperated once in a while. Geordi's frustration here is, I think, particularly acute because Geordi has invested in Data as his very best friend and closest companion, and even set himself up to be basically Data's *sidekick* (in the Holmes program), and he's suddenly confronted, as if for the first time, with what that would actually mean.

    Furthermore, I think his anger and frustration being uncontrolled is also partly because, in the moment he suddenly "remembers" that Data is an android, he also recognizes that Data can't be hurt emotionally in the same way others are, and he's maybe fishing for a kind of reaction from him. It's not that he wants to hurt Data, but he also realizes on some level that he's talking to a wall, in terms of emotional understanding, and that means he can let loose as much against that wall as is possible.

    While the show could have explored this element of Geordi and Data's friendship more -- what it would actually entail to have an android as a best friend -- I think that the show did explore Geordi's unique perspective on the interface between technology and personhood, with the Leah Brahms thing, with Hugh, with less successful outings like Aquiel and Interface even, and his habit of flying off the handle when his expectations for his tech-friends (Data, or perhaps the people like Brahms or his mother in Interface where he formed an image of the real version of them through a simulacrum which ends up deceiving him) are not met. His difficulty dealing with tech-minded colleagues who also fail to meet his expectations -- Barclay, Scotty -- also suggests something similar, perhaps.

    Anyway, I think the point here is that Geordi has not yet settled into loving Data as he is but treats him as much like any other friend because on some level he has not really let himself consider exactly how different Data is to him, and it's a bit of a shock to his system. By The Most Toys, Geordi's view of Data is more sophisticated, in that he cares very much about him *and* saves him because he notes that Data failed to abide by procedure, which is impossible *for Data*, and so there's a bit of an arc (albeit subtle and possibly not fully intended) where Geordi moves from unconsciously assuming Data should be treated like a person because he doesn't see Data as different from himself, to believing Data should be treated like a person because he's valuable, regardless of how different he is from everyone else.

    2. Regarding Pulaski, I dunno. As with Geordi, partly her willingness to be rude to Data is because she recognizes he doesn't feel injury at her rudeness in the same way most people do, but rather than acting out of disappointment she sees it as confirming her view of him. I like that Pulaski, alone among the cast, starts off with the assumption that Data is a machine and thus doesn't have personhood, whereas everyone else basically assumes Data's personhood except with some covert condescension and prejudice which occasionally comes out, or, in Geordi's case, have ignored or forgotten the fundamental differences between him and others. I'll grant that her attitude could have played out more subtly, but I dunno, what's the point in being subtle with a being who, to the extent that he exists at all, has no feelings to injure? Data does seem mildly wounded by Pulaski, but doesn't show it exactly, and even then he seems more perplexed by her attitude than anything, which does not really provide her any discouragement even to the extent with which she empathizes with him, which is not quite zero. By Measure of a Man she's firmly of the belief that he should have rights, and that's less than halfway into the season, and by Peak Performance she's overtly defending his needs for reassurance to Picard, and I get the sense that her willingness to risk offense to actually explore Data's condition leads to her gaining a better understanding of him, and faster, than the more distant attitude that some of the other crew treat him with early on.

    3. And really, both of these end up relating to the Moriarty story. Moriarty is in some senses a mirror of Data, but he actually seems capable of "defeating Data" in that he is able in some senses to outstrip him, to demonstrate the possibility of computational sentience with greater speed. Moriarty's ability to push beyond the game allows that it's possible for an AI to truly surpass his programming, but leaves somewhat open whether Data himself is at that level, perhaps because it's also early in the series and there is some openness left in the question of how much Data really can surpass his programming (and occasionally the question of whether it's dangerous for Data to surpass it is also addressed). I think that this element doesn't get quite resolved until the sequel/inverse in Ship in a Bottle, in which Data is the one to see through Moriarty's deception. In any case, that Data, Geordi and Pulaski decide to run an experiment to determine the limits of Data's ingenuity is really true to the spirit of the series and sets up this episode's plot in particular; what is more TNG than examining Data's abilities, with Geordi and Pulaski coming at his views from opposite ends, in what they believe to be a safe and consensual test? Maybe the execution could have been better, but even there I'm personally pretty fond of the way the episode plays things out.

    @ William B,

    My reaction to Geordi and Pulaski is more visceral than cerebral. On paper I know that it makes sense for Data to frustrate people. This is played up to good effect in later seasons when they're subjected to his poetry readings. In the case of Geordi my qualm is almost more with the performance and somewhat skimpy dialogue in the scene when Data 'solves' the mystery before even doing any investigating. Geordi just walks off in a huff basically without even explaining himself, and in any other scenario (even today) that would be a big breach of manners. Are we supposed to infer that he's *so* used to Data that he has a unique set of heuristic behaviors that he only employs with Data, whereas with anyone else he wouldn't have left off in a huff? Or is he this temperamental all the time? The episode doesn't address this or give us any indication that Geordi's behavior is what's being explored. I tend to suspect that the director had in mind to maximally illustrate Data's failure and having someone else get frustrated was an easy way to show that Data missed the point. But I think this was a mistake because any character arc Geordi might have had in this episode was clearly not being highlighted or scripted for; he was more a tool to get us to see Data's limitations, and then to fuel Pulaski's commentary to follow. That moment when he walks off is somewhat embarrassing for me to watch precisely because it's so arbitrary and telegraphed. The *concept* is good, as you point out, but in practice it feels inorganic to me. Maybe part of that is LeVar's interpretation of it.

    Pulaski bothers me for entirely different reasons, primarily because they brought her on as a McCoy replacement and in episodes like this one it's paper-thin. She retains the well-known belittlement of 'logic without humanity' McCoy is famous for, but out of context it comes across as abrasive and pompous rather than as a friendly gibe. I mean, it basically makes her look like a racist. And even though the point of her comments is to bring up the issue, later addressed, of whether Data should be treated like a person, she doesn't actually come at it with any intent to negotiate on the matter or learn: he's a toaster and that's that. While this view would seem to be of limited value in terms of establishing any relationship between her and Data (whereas McCoy's attitude was rooted in his friendship with Spock), it's even worse than that, because she isn't even really interested in learning about what he can do. He's literally the only one of his kind, to date the ultimate in cybernetics, and she looks indifferent to him. So not only is this not rooted in any kind of relationship context, but she isn't even interested in the first place; her insults are afterthoughts. In short, she comes off as a bore. McCoy's anger at Spock was often a result of Spock's inability to express friendship in a way that a human could appreciate; here Pulaski's attitude is just that Data's not good enough. Why should an audience enjoy any story where a character is treated like that and the offender never learns the error of their ways? Worst of all, at the very least Data's a Starfleet officer, and she doesn't even seem to afford him respect strictly on those grounds. I do like Muldaur, actually, and thought she was great in TOS, but here her direction is too narrow. Even when she's talking with Moriarty she is charmless in comparison to him, basically a heathen. Her self-importance is never justified, and indeed I feel like for much of the season she seems to be talking down to people. I view this as a writing issue more so than an acting problem.

    So basically yeah, those two aspects of the episode irk me.

    Something else did occur to me while reading your comments, though, which is the Data vs Moriarty issue. We're shown quite strikingly that Moriarty exceeds Data in every way: he can overcome his programming, see outside his own perspective, and learn what he is while changing what he is. He grows, and his exuberance to learn is genuine rather than an algorithm set on repeat without any modification. In contrast, Data's efforts to learn often amount to trying exactly the same thing over and over with the same useless result. However, the lesson in the episode is that, while Data is a 'failure' in this sense, we should look at what happens when someone with his mental powers *doesn't fail*: they become very dangerous. Data's saving grace and chief merit through the entire series is his child-like inability to grow more human; his failure is his charm. We even admire how steadfast and unalterable he is, totally reliable and trustworthy. But can you totally rely on someone growing and changing all the time? Their priorities change too, and your ability to guess what they'll do is limited. But not with Data: you always know what he'll do. That does make him inferior to Moriarty as an opponent, but superior as someone to rely on. Maybe the episode is trying to tell us that Pulaski and Geordi should be glad that Data can't become more than he is - if he could they might eventually be dealing with a Moriarty that wouldn't let them off so easy. Maybe the episode is saying more about rogue AI than I initially thought.

    @Peter G., well, I do think Pulaski learns the error of her ways (over the season), but I see your point on both counts.

    On the last observation, I agree. Data does actually improve over the series (and figures out Moriarty's plot in Ship in a Bottle), but it is a slow and sometimes agonizing process. My interpretation is that Data does have the ability to grow in unexpected ways, but that his ability to do "expected" functions so far outstrips his fuzzy-logic ways of growing, by orders of magnitude, that it is always far easier and even more natural to keep doing what he's doing despite its low probability of success. I tend to see Geordi's behaviour as specifically because he is so close to Data that he lets himself get frustrated with him more overtly than if it were Riker or whoever (especially since Data is his superior) but I can see what you mean about the execution.

    On your last point, I also think it is implied that Data's limitations are strongly determined by Lore. Lore's line that Soong set out to make a less perfect android is partly correct, and I have come to suspect that Data's limitations are not fundamental to AI so much as chosen by Soong to prevent him from going off the rails, which is also a mechanism for keeping Data alive, and so to some extent an attempt by Soong to handicap Data for Data's own benefit. And while there is some debate on Lore's effectiveness as a villain, I don't find his megalomania and revenge (driven by a persecution complex) all that surprising; he "knows" he is smarter and stronger than everyone around him, and also that they fear him, and this theme of what happens when someone's superiority is unchecked goes back to Gary Mitchell. When Data creates Lal, she seems to surpass him almost immediately, but it somehow seems to be related to what kills her. I find it all very poignant, because Data is sort of necessarily held back from his own potential because of the risks that his brother demonstrated, and his perpetual sense of a lack is probably the price paid for peaceful coexistence, at least until after some very long and difficult process.

    Modern Trek is rubbish! Back in the good old days Star Trek was way better, ya know what I mean? The new Trek is for young idiots looking for cheap thrills. Bah.

    - The computer creating Moriarty is stupid...the computer gave sentient life to a hologram? So a hologram is more "human" and "alive" than Data?
    - Data's affected accent is annoying.
    - How could they take the paper with the picture of the ship off the holodeck? Even Picard sits holding it in the meeting room.

    This was the episode that made TNG can't-miss TV for me. I couldn't wait to get to the local comic store and see the reactions of my fellow Trek friends. This took the idea introduced in The Big Goodbye to new heights of greatness. Much better than any of the preceding offerings. Unlike many who comment here, I am a big fan of holodeck-based stories.

    Enjoyed the episode, but found it EXTREMELY stupid that Moriarty had more authority than Picard.
    Picard is the captain of the ship. If he orders the program to end, then it should end. Simple as that. Period!
    But apparently, the program that LaForge designed has more authority. Pathetic writing.

    You've never had admin privileges but yet still had a computer tell you that you can't do something?

    LeVar Burton has the worst British accent ever. It’s especially bad in this one, but may be even worse in “Hollow Pursuits”.

    Among the first and probably the best of all the "holodeck gone awry" stories, introducing many tropes and plot failures that would become commonplace. Many have pointed out the failures in execution of this episode. It's never explained why it's not possible to simply cut power to the holodeck (which is very different from asking the computer to stop executing the program), nor why it is not possible to rescue Pulaski by beaming her off the holodeck. But put me in the camp of people who think that this episode holds up quite well. It has a certain charm and sophistication. As a previous commenter pointed out, in modern TV, this type of story would be exploited simply to create jeopardy and action, whereas here it is used to explore themes relating to what it means to be alive/sentient. The conversation between Picard and Moriarty at the end is really what this episode was building to. TNG / Trek in general may be a little obvious with its themes, but at least it does explore them.

    It has always struck me as odd that the hardware of a starship or starbase can clearly support the running of a sentient AI, yet the "OS" that runs on it by default is not quite a sentient AI (even though it is apparently capable of creating and running them alongside/within itself?). We've seen this illustrated repeatedly with programs like the EMH and Vic Fontaine, not to mention the Enterprise-D herself spawning an emergent intelligence in "Emergence". It makes sense that sentient AIs are a thing that have already been created (requiring vast computing power and memory storage) by the 24th Century, and normally certain safeguards are in place (although not in this episode) to prevent them from cropping up all over the place. It seems the Federation in the 24th century is a society that's on a little bit of thin ice (i.e. just barely keeping things under control) when it comes to the role and dangers of AI, which is not unrealistic, but is not something that's emphasized. That's a shame, because it's a great sci-fi element. The only thing that doesn't make sense, then, is why people like Riker were so amazed at the capabilities of the Enterprise computer in "Encounter at Farpoint", when those capabilities must necessarily represent a small fraction of the potential of the computing hardware (in "Our Man Bashir", a computer core can apparently store all of the information contained within several human brains, at the quantum level). For this reason, I also really liked the comment above (was it $G?) that the real advancement of a Soong-type android is a sentient AI running on hardware that can fit within the confines of a human-sized skull, rather than in a building-sized computer core. He/she points out that it's never played up as an advancement in miniaturization, but perhaps this is exactly why it's referred to as a *cybernetics* advancement rather than purely a computer science one. 
The point is not that they are sentient programs, but rather that they are sentient *mobile humanoids*.

    This episode begins my long-running frustrations with the inconsistency of just what the holodeck does. A hologram consists of light interacting with a material medium in a certain way, so as to produce the illusion of a three-dimensional image. So it's purely photons and that material medium. In the 24th century, dialogue often indicates that it's supposed to be photons interacting with (or perhaps confined by) forcefields in order to create the illusion of solid forms that have whatever appearance you want. But not only would this not make for a very convincing, ahem, "tactile/anatomical experience" in Quark's holosuites, it's also completely contradicted by what Picard says here. What they are calling "holograms" here seem to be actual matter converted into whatever form they have from energy. They are *not* projections of light. Exactly why it is necessary for projectors on the ceiling to continuously input energy in order for this matter to maintain itself (unlike with replicated matter) is not explained, but must be accepted (and is supported by Voyager's EMH being able to change his solidity at will). Hence Moriarty's statement that holodeck matter could not be converted into a more "permanent form." So holograms are more like Blade Runner replicants, but unstable and at risk of dematerializing without external input? Regardless, it's very clear that some objects on the holodeck must be straight-up replicated matter, rather than "holographic" matter. It makes sense that there would be a combination of both in the program. Tea and crumpets? Replicated. Pieces of paper that nefarious villains want you to take back to your captain? Apparently also replicated. (I read above that this intent with the paper was meant to be made explicit in the script, but was cut. I hope that's true, because otherwise it's a very glaring mistake to take a holodeck-matter object off the holodeck in an episode whose central plot point is that this cannot be done.) Apparently the computer replicated a whole stream/pond for Wesley to fall into in "Encounter at Farpoint" as well (I say with an eye roll). Is this because a stream made of holo-matter would not soak clothes convincingly? If so, this is again problematic for things like the holographic sex that I alluded to above.

    William B pointed out that Data should be capable of reproducing a much more passable British accent. Given that he is capable of reproducing *any* voice perfectly (including Jean-Luc Picard, who speaks with a British accent), this is an even bigger plot hole than it first seems.

    There was another comment above asking why Moriarty relinquished control of the ship so quickly. The objection was that sure, he was sentient, but he was still Moriarty. Why had he suddenly developed a conscience? I think this is actually addressed pretty well in the dialogue. He had grown beyond his original programming and felt that he was something more than a villain. He chose to put the lives of 1000 people over his own. Maybe this development of his own personal morality strikes you as being too quick to be plausible. If so, we can also justify this another way. Remember that he's very intelligent, and his actions can be explained as purely rational and logic-based here. He knew that his demand for existence was *futile*, because Picard did not have the power to grant it. It was clear, when he asked Picard whether he did not know how to convert holodeck matter into a more permanent form, that he already knew the answer. A good villain also knows when to fold. Moriarty also probably realized that the crew would have eventually gained the upper hand (unless he destroyed the ship and himself along with it). Remember that at first he didn't know whether he was *on* the Enterprise or not. He asked Pulaski that question. Once he realized he was confined to a room onboard the ship, there was nothing else for him to do. Hence his statement "I put myself in your hands, as perhaps, I always was."

    This is classic TNG, and I agree with the three-star rating, i.e. 7.5/10

    8/10

    I like Data episodes and I liked the interplay between Data, Geordi, and Dr Pulaski. I like the Holmes episodes on the Holodeck. (More than the jazz age, gangsters etc)

    But now I wonder: are there more Holmes episodes? Memory says there must be, and this single one cannot be the basis of my fond memory.

    I don't care about the nonsense of the Holodeck. I remember when TNG first came out, we wondered why people didn't live in the Holodeck. I am more irritated by how careless the Star Trek universe is with technology. How can computer control be so easily lost?

    I think this episode was supposed to be A Big Question one about What IS Life. But I preferred the earlier part of the episode and the three characters having some fun.

    ⭐️ ⭐️ 1/2
    I think this has to be a two-and-a-half-star episode, despite it being in the mediocre (but much better than season one) season two. Evaluated against the series so far up to this point it could certainly be a full three-star episode, but compared to the series as a whole it has too many strains on credulity to justify that. Of course, as mentioned by others, there are some contrivances one must accept in order to sustain the suspension of disbelief... for one thing, of course, the computer would not be programmed to so blindly obey the command that endowed Moriarty with his superior intellect and awareness. Even though the episode has a somewhat intriguing solution, in that in order to have the potential to defeat Data Moriarty must possess consciousness, and apparently the ability to use the computer, obviously the holodeck would be programmed (probably in its read-only memory, aka ROM, or the futuristic equivalent) to never allow a holodeck character to employ computer commands, regardless of the wording of a command given to the computer.

    That brings us to contrivance number two, being that the computer, even if one accepts that Data’s command would cause it to attempt to endow Moriarty with consciousness and sentience, is able to endow Moriarty with sentience at all, let alone so quickly and easily. Much is made of the genius of Dr. Soong in creating Data, an android who possesses sentience. I find it dubious that this remarkable feat, the endowing of an artificial intelligence with consciousness, can be so easily duplicated by a simple command to a starship’s holodeck. Data’s miraculous positronic matrix, which is said to be responsible for making his sentience possible, becomes apparently unnecessary after the “discovery” in this episode.

    Finally we come to the third issue, that surely there would be a better way of resolving the situation of the doctor’s abduction than having to enter the holodeck and negotiate with Moriarty. One blatant plot hole is that apparently Moriarty was able to give the computer commands using the authority of La Forge’s clearance... so then why can La Forge himself (or higher-ranking members of the crew, for that matter) not reverse Moriarty’s overrides that keep the program from being shut down? Even if we accept that that is not possible, it seems all the more unlikely that the only way to manually disable the holodeck is to flood it with energy that would be lethal to a human as well as to the holographic projectors. Why not simply have an engineering team manually disconnect the power from the holodeck (physically slicing the wires, or whatever is used, if needed)? Or if we accept that they must confront Moriarty, why not bring a phaser, or command the holodeck to make a gun for Data to use to shoot Moriarty with, using his superhuman android speed and accuracy?

    Oh well.... these things need not detract *too* much from the episode if one is not in a contrary mood and is willing to take it at its face value. However I do feel it is legitimate to penalize the score by half a star because of them, especially since at least some of the mentioned issues could be mitigated or even eliminated entirely by more clever writing, with more effort devoted to believability. I don’t have all the answers and it may be necessary to modify the main plot some to accomplish this but more effort in that department would be appreciated.

    Still this is a very solid early effort, equal to some of the more blandly average episodes in seasons 3-7, which given the very high quality of that era means this is actually solid praise for this episode. It’s certainly better than your average Voyager episode; were it in that series it probably would have gotten the full three stars (even though I tend to believe all the shows should be rated according to the same standards, I am fine with the occasional small adjustment to indicate an episode is better or worse compared to its more immediate peers). Unlike some I have no problem with the beginning of the episode; I like “character bits” like that. If anything I don’t like (and don’t find quite plausible) the doctor’s overly openly degrading attitude towards Data. She is the only one who treats him like he is more a piece of hardware than a conscious entity. Also, Data solves Star Trek-era mysteries every day, so why the strong lack of faith in him? Either way I mostly enjoy the rich flavor of the buildup, and the performance of the actor playing Moriarty is spot on.

    Forgot to add @Meister, to answer your question about there being more Holmes episodes: you are correct, there is an episode in Season 6, called “Ship in a Bottle,” where we get to revisit Data and La Forge’s Holmes hobby on the holodeck, as well as Moriarty and his desire to exist in the real world. A very good episode; I definitely recommend it (and don’t read up on it more than this before watching, in order to not spoil any surprises).

    I enjoyed this episode but ugh. Too much Pulaski. She managed to insult Data at least twenty times within the first ten minutes of the ep. And she was grossly wrong about everything she said too. No, Data would not "short circuit" if he had to solve a real mystery, he does it almost every day when he's on duty. And furthermore, I'd like to point out why Pulaski being on the show longer would not have been very appealing; having her around as a constant hamper on Data's quest for humanity would have ruined every episode involving his arc. He needs friends who encourage him like Geordi or Picard, not this bitch.

    Worst part overall:
    Pulaski seems perfectly comfortable around Moriarty, the douchebag hologram who kidnapped her; (she actually seemed kinda flirty) and yet she treats Data like shit, the sentient android who is nothing but kind and polite to her, even tho he should be flippin' her off. Ridiculous.

    Also, am I the only one who noticed how Geordi sounds like he's got a cold?? Lol

    @ Lizzy
    Language. :)
    I think overcoming obstacles makes achieving a goal more rewarding. Also, you cannot insult Data because he has no emotions. An insult is an attempt to make somebody feel bad, which will never work with Data. And doesn't she come around in later episodes?

    Plus Moriarty is portrayed as a perfect gentleman. She probably doesn't feel threatened maybe even gets a little kick out of it. You need a good sense of humor when you are hands deep in guts half the day.

    And if your love for Data becomes too strong just click on this link: ;)
    http://de.web.img3.acsta.net/r_1280_720/pictures/16/05/23/15/17/207405.jpg

    @Booming

    Ah, we meet again. I do apologize for my potty mouth but you know how passionate I become.

    Also dude, even if Data doesn't have emotions (which I for one am on the fence about), that doesn't mean it's okay for Pulaski to just stomp all over him! And my point about Moriarty is that if she really doesn't place much value on artificial intelligence, then why is she so comfortable around a hologram, rather than an actual, physical being? I just will never understand her.

    Which is why I am more than thankful they brought back Dr. Crusher. She may have been underused, but at least she was nice.

    @Lizzydatalover

    Doesn't Pulaski start off obnoxiously prejudiced against Data (which, since we've seen lots of him, we get angry with her for), but then as she works with Data and spends time with him she learns to appreciate him and her prejudices are softened? I think this particular plot strand is actually exploring the nature of prejudice. Pulaski was hostile to Data before she'd even met him, just because he's an AI. Meeting and working with the reality of an AI, her prejudices are challenged.

    And @Booming is right. Data's feelings can't be hurt and so he can't be insulted (therefore Pulaski is wasting her time trying to wind him up).

    I wonder if the writers weren't also trying to replicate the sparky nature of the Bones/Spock relationship to a certain extent. Think how baffled Bones was by Spock and his Vulcan ways and how he used to get really riled by him.

    I liked Pulaski, at least she had a personality (plus the actress playing her was a massive improvement on Gates M as well). Yes, she's abrasive but that's preferable to blandness and Bev is soooooo very very bland.

    @Artymiss

    Yes, they were trying to recreate the Bones/Spock dynamic but for me they failed miserably.

    And as I told Booming, even if Data can't be hurt on the same emotional level as we can, that certainly doesn't give Pulaski the right to disrespect him so much. And besides, I think we've seen time and again that Data sometimes can be wounded, maybe not entirely in the same way, but enough to where being looked at as a mere machine is unpleasant and not something he enjoys contemplating. I tend to defend him passionately on this issue mainly because he was a good person, and did not deserve the unfair treatment he received from Pulaski.

    That is why I very much prefer Beverly, despite her being so underused in storytelling it wasn't even funny. But to me, Pulaski was the bland one, because all I got from her character was prejudice and stubbornness, which I didn't enjoy watching.

    Ah, just wonderful. So well done by all involved.

    Again, just as in eps 1 and 2, we're talking about what it means to be alive. "I think, therefore I am," posits Moriarty. Picard is not so sure he agrees.

    Pulaski - I love her. Muldaur is great.

    I love how good naturedly, but directly, Pulaski challenges both Data and Geordi, and how happy and enthusiastic she is to go along, and find out about Data's abilities.

    She doesn't really care if she's proven right or wrong - like Geordi, who's disappointed in the first Holmes adventure for being too easy - she just wants it to be a real game, and she wants to be in The Game.

    The Game. You gotta be in the Game. You don't want to just exist. You want to be ALIVE.

    More talk of Death in this one, too. The mortality prohibition has been removed: the holodeck characters can kill them. Moriarty doesn't want to die, and Picard doesn't want to kill him. But how interesting it is that Picard words it that way: "I don't want to kill you." He's refused to unequivocally concede that Moriarty is truly alive, but he thinks Moriarty can die. Ah, Picard. Think that one through.

    Now, we're cooking! Go, go Season 2!

    I mostly like this one too, and I agree with William B and Springy's analysis that the beginning "game" part of the episode, where the characters decide they want something challenging and engaging, pairs well with Moriarty becoming a living being that actually challenges the Trekkian notion of a lifeform. Muldaur, Spiner, and Burton are all great in this one and it probably sells itself on performance and costume alone.

    That said, I don't think the episode resolves Pulaski's challenge very well. Did Data solve the Holmes mystery? It feels more like Picard had to solve it for everyone. And if that's so, then the takeaway appears to be that Pulaski was right; Data isn't human enough to take on an original challenge yet. And - don't get me wrong - that conclusion by itself wouldn't be such a bad thing, but somehow I don't feel like that's the conclusion the writers were going for.

    Sorry for the typos, Internet Explorer is apparently *my* virtual opponent today.

    @Chrome

    Yes, Pulaski's question about Data really is just left hanging. Data figures out what there is to figure out, which isn't much . . . but things go totally off the rails once Moriarty becomes sentient.

    The challenge for Data, of solving an original "whodunit" crime, perpetrated by a truly clever criminal, never materializes.

    The challenge is not figuring out a mystery, but getting Moriarty to stand down. That's a job for Picard.

    I wondered if that little murder mystery, with the killer-wife, was meant to show us that "see, Data can figure out a Holmesian mystery without having read the story before - he is capable of intuition and original thought" but if so, it didn't really do the job.

    It's not a big problem that the ep doesn't answer the question, IMO (though Pulaski should have made some mention of it). "How human is Data?" is an ongoing question.

    Interestingly, there's a bit in the shooting script which reveals that Data had figured out that Moriarty could leave the holodeck and had outsmarted him as a result. This was a good cut, because while it resolves Data's arc for the episode it just creates too many other problems, technically, thematically, and ethically.

    Interestingly I think the sequel to this episode, Ship in a Bottle, genuinely does resolve the dangling Data thread, which is great.

    @ Springy,

    I thought a bit just now about this matter of the dangling plot thread of whether Data was capable of coming up with something truly original. But your take on it seems to be that this thread gets dropped when Moriarty becomes sentient. But wait a minute! Does that not actually continue the thread but in an unexpected way? After all, what is Moriarty but an original creation somehow concocted by the ship's computer when Geordi gives an arbitrarily vague instruction. That is certainly pretty creative on the computer's part to follow the instruction in this precise way and to somehow imbue the character not only with the technical tools needed to beat Data, but also with a personality that goes beyond a lame story cliche.

    So when the story gets shanghaied by Moriarty, it seems that the computer itself has answered Pulaski's challenge about whether an artificial intelligence can be creative. And we know that the computer has proven its case by the very fact that this detail is ignored! Pulaski, you may note, is perfectly happy to accept that Moriarty is a person, granting him the full extent of courtesy and conversation that she never even thought to offer Data. He passes her intellectual Turing test so well that she doesn't even think for a moment that she needs him to try to pass it. And I doubt it's just because he's more human-looking than Data, because I doubt she'd have offered that same courtesy to some other fictional character in the story. Somehow he actively commanded her respect and she gave it without realizing she had done it.

    I can't be 100% sure the writers intended this to exactly be 'the answer' to her question but it certainly is one. The thing she wanted to have proven for her is exactly what she got, and the result was totally off the rails compared to what she wanted. She thought it would be a cute and controlled experiment, where she'd probably be proven right but even if not she could observe it from her cushy vantage point as observer. The fact that she was in fact dragged into danger and over her head is what *actually* happens when you do an experiment of this sort, seeing what computers are really capable of. Picard had to come in and negotiate precisely because the entity they created had a mind of its own and couldn't be controlled in a careful environment - just like real computer AI will be one day.

    @Peter G

    interesting thoughts. I hadn't given much thought to Pulaski's interactions with Moriarty and how they might affect her perceptions. But certainly that is significant.

    To a certain extent Pulaski's challenge was about AI in general, and to that extent, it was answered with a resounding YES I CAN by Moriarty (or by the computer through Moriarty, as you astutely point out).

    But to the extent that Pulaski's challenge was about Data specifically (and it definitely did have such a component), it was not answered. That's OK: "How human is Data?" and "How is Data human?" are two questions we'll be answering throughout the run.

    I agree with the previous two notes. I find Pulaski's constant treatment of Data rather confusing, especially when she treats Moriarty with more civility, understanding, and even, I suspect, a sense of awe. At no point does she make reference to Moriarty being just a set of fixed lines of programming accessing a database.

    But unfortunately this whole premise is brushed aside, as if the writers dropped one story in favour of another. It would have been nice to even have a mention of her comparative views on Moriarty and Data, even if I'd ultimately have disagreed with her conclusions; just more fuel for debate.

    Continuing in regard to the paper being taken off the holodeck, as a few have mentioned: potentially an oversight by the writers, but at the same time the holodeck is described as not just an interactive theater of force fields and projections but also of replicated matter.

    Usually those replicated items can be expected to be food and drink, and potentially the containers for them. Bodies of water also; we've seen people come off the holodeck soaking wet. I personally believe that some structures could and would also be replicated.

    The computer may normally recognise its replicated matter leaving the holodeck and dematerialize it along with the projections. On occasion the computer may pay attention to body language, such as grasping an item, as a sign that someone wants to keep hold of it. And in yet other situations, such as food you've eaten, lipstick smeared on your face from a kiss, or the ever-pervasive water soaked into your clothes, it might be too difficult, dangerous, or just plain inconvenient to remove, so the computer doesn't bother.

    By now you've probably realised I'm arguing that the paper could be one of those replicated items that they wanted to keep; unfortunate that it was not expressly stated in the episode. Living bodies such as Moriarty are too complex for full-on replication and so are limited to force fields and projections, as we've seen in other episodes when characters try to exit.

    @Richard It is sensible to assume that, when playing a role in a holodeck fantasy, Pulaski isn't likely to go around pointing out to holodeck characters that they aren't real, any more than she'd feel the need to lecture the ship's computer. In the case of Moriarty, though, she had an added incentive not to break character: as it seemed he was feeding on information, it would be dangerous to indulge him, regardless of her feelings concerning his sentience or lack thereof.

    In Data's case, Pulaski is being told this machine is a man and she doesn't quite buy it (at first).

    I always thought the paper leaving the holodeck was curious too, but my head canon meshes well with yours: it seems likely that for very simple objects like a scrap of paper, the computer might rely on straight-up replication rather than holography to reproduce the sensation of holding the object in one's hand.

    This has always been one of the most enjoyable episodes in Star Trek, any variety of Star Trek. I rather tend to enjoy holodeck episodes, and I find it strange that so many people appear to dislike them.

    Pretty well all the complaints people here have made about this episode, over the years, I'm inclined to see as points in its favour. Geordi's Victory? A nod acknowledging the roots of Picard's performance in the series, which was essentially that of a naval captain of that period: Hornblower, or indeed Nelson.
    The accents and manner of speaking of Data and Geordi as Holmes and Watson? Very much in keeping with the classic Sherlock Holmes movies with Basil Rathbone and Nigel Bruce. They weren't trying for authentic "British" accents. (There isn't "a" British accent anyway.)

    And as for Pulaski, I'm enjoying seeing how the relationship develops. It seemed to me that she was intentionally challenging him to exceed his limitations. In her way, she was encouraging him just as much as Geordi was, but differently. One thing to remember is that it's been indicated that Pulaski is in some ways wary of technology, as demonstrated by her strong resistance to having anything to do with being beamed around the place. It's reasonable enough that, on meeting the only android that has ever been constructed (leaving aside Lore), she should be strongly sceptical about its being in any real sense sentient. In the course of the season (and indeed in this episode) she is coming to revise that view.

    I agree with the suggestion that we should see some of Data's awkwardness and limitations as intentionally put into him by his creator, Soong. They are in a way an aspect of how Asimov's three laws of robotics were incorporated into him, in a way they were not for Lore, who has no inhibitions about harming humans. Data could not harm them, and his awkwardness and naivety are part of that; it protects humans against feeling inferior.

    I totally disagree, @Gerontius. She wasn't encouraging Data; she was trying to prove her prejudiced point that Data was merely a thing, a computer, and not a sentient being.

    It’s insane that the holodeck got through enough testing for all the shenanigans that happen throughout Trek. And certainly with such a casual command from Geordi: no “Do you want to create a supervillain controlling the whole ship?” and no “Are you sure?”

    Pulaski’s dreadful Data hate is on full display here, but it is at least consistent characterization, and shows some small character growth.

    But aside from the many difficulties, it plays out pretty well with Trek sensibilities. Daniel Davis is spectacular, and Moriarty becoming self-aware and transcending his caricature origin is very compelling. Maybe not completely convincing, but good stuff. This is Roddenberry’s vision of the future on steroids. It’s pretty good sci-fi even outside its Trek context. Picard’s talk with Moriarty about technology is rather touching.

    Watching it now, it’s reasonable to wonder if Moriarty is playing a deeper game. He’s got the intelligence to basically understand his existence, and even perhaps to be planning his future moves.

    Also, watching right now, I caught some nice details I never noticed before. Moriarty’s steampunk device to shake/control the Enterprise has a small integrated LCARS display.

    I love that we get Pulaski coming right at Data. I like the conflict and I like how it makes us feel given that most of us probably consider Data in our top 2-3 favorite characters in the show.

    I liked pretty much everything about this episode; it's one of the holodeck episodes that clicks for me. Data solving every mystery because he had them stored in his memory is a great bit, because we see Data totally missing the point and learning about it, somewhat.

    I like the exchange between Picard and Moriarty, and that's all fun and interesting, but I kept finding myself wishing that, instead of there being a threat to the ship, the stakes had remained low and we would have found out in the end that Pulaski was working with Moriarty as part of the game, as a way to prove Data couldn't adapt to and interpret new info and challenges. Ah well.

    The actor who plays Moriarty is excellent: well spoken, with superb elocution and delivery. No doubt he had theatre training.

    On the face of it, this is a clever, fun, but rather silly holodeck excursion to be filed away alongside TOS's 'A Piece of the Action' and TNG's 'The Big Goodbye'.

    It gets deeper, though, quite quickly. When the computer, acting on Geordi's careless instruction to create a mystery that could defeat "Data" (when he should have said "Holmes"), creates a Moriarty that is apparently self-aware and knows about the Enterprise and its personnel, one could at first put this sophisticated creation down to the inherent 'stupidity' of a computer carrying out an instruction. But a computer that, in doing so, overrides the 'mortality' settings would appear to have insufficient safeguards built into it, especially when it allows a holodeck character to interfere with the controls of the ship and, even more unlikely, allows that same holographic character to invoke the 'arch' that permits crewmen within the holodeck to talk to the computer and amend the program.

    If that was all, it would simply be an episode that has a major flaw in it, with respect to the nature of the holodeck and the unsophisticated computer controls that can decide exactly how far the program can (or should) interface with the running of the ship.

    However, we then get a very philosophical conversation between Picard and 'Moriarty' that touches on consciousness, what defines 'being alive', the continuity of self-awareness, life and death. This, I believe, is the meat of the episode, what sets it apart from being a historical romp like so many of its predecessors. My main gripe with this challenging conversation is when Picard refers to Data's 'consciousness'. This is an area where science struggles to come up with coherent theories, in that the cause of consciousness cannot be defined or proved, at least not up to now. Two of the main theories are: 1. that consciousness is the result of complexity, i.e. the bigger the brain, the more likely that consciousness will be a side effect; this theory has a severe chicken-and-egg drawback, in that if consciousness is a by-product of a large brain, it calls into question why a large brain evolved in the first place. I myself - like Dr Pulaski - reject this theory. 2. Another theory is that consciousness is a necessary concomitant of life (whatever that is), and therefore all living things have a degree of consciousness, from extremely little to the ultimate, i.e. self-awareness. The latter category obviously includes human beings, but probably most primates, and quite possibly whales and dolphins too.

    I do like Data as a character, but like Dr Pulaski (who has noticeably mellowed towards him in this episode) I cannot see him as more than a machine, and therefore - not being 'alive' in any proper definition of the word - he cannot have consciousness or self-awareness. Does that mean I don't believe that by the 24th Century we could have created machines with simulated self-awareness? No: judging by the 21st Century's Siri and Alexa, we have already taken the first steps down this road. It's quite possible that those AI characters would already pass the Turing Test with flying colours, and there's another 300 years to go before we get to "Data's era".

    Is Data therefore a "person"? Judged by the standards of today, we can see him as perhaps the ultimate child - blessed with abilities far beyond our own, but failing at the level of intuition, humour, real affection, and ultimately, love.

    Because it's so thought provoking (NB I've not yet read any other comments) I'm happy to give this 3 stars.

    @Tidd

    My attitude about AI is probably much like yours. The Data character, portrayed as a "person," makes for interesting storytelling, and I am willing to suspend disbelief long enough to enjoy the stories. But that doesn't mean that I believe the creation of such a thing is anywhere near as easy as the writers seem to think.

    In college, just before TNG came out, I had a double major that happened to be good preparation for reflecting on such matters: Computer Programming and Theology.

    The question asked in "Measure of a Man" is one not for computer programmers, but for philosophers and theologians: Does Data have a soul?

    If he does, then sometime between now and the 24th century, some quantum leap must happen in technology, not just continuing progress along the current trajectory. Computers as we now know them not only are not conscious, but CANNOT be conscious, and never will be. Computers AS WE NOW KNOW THEM.

    I have had conversations about "Artificial Intelligence" with a relative who is a neuroscientist, in which I said that many people think it means "Artificial Consciousness," the thing we actually have no idea how to create. I am willing to say that a computer is intelligent in an analogous way to an abacus, just farther along that trajectory. But even the most fancy abacus that could ever be built has no consciousness.

    I also told her I believe the Turing Test has always been misunderstood, and we have already reached a point where it is obsolete. I think at a philosophical level (which Turing himself may not have understood), what it really means is that because the only being we can KNOW is conscious is ourselves, if an artificial construction SEEMS conscious to the same extent as our fellow human beings, then we MAY AS WELL act as if it is, as we act as if the people around us are, even though we can never fully know they are. That's very different from saying that if we can't figure out a way to tell that it ISN'T conscious, then it really IS, which is how most people misunderstand the Turing Test.

    As you say, well-designed chatbots can already "pass" the Turing Test pretty convincingly. It's not even that hard. The human brain is designed to recognize a conscious conversation partner, so if the computer comes close enough, we fill in any gaps ourselves, and perceive more than is there (kind of like the way we are hard-wired to see faces, even in random patterns). Artificial Intelligence is good enough to create the illusion.

    Artificial consciousness, on the other hand? That we still have no idea where to begin.

    "Artificial consciousness, on the other hand? That we still have no idea where to begin."

    Well that was the really funny thing about Data. In the 80s he seemed incredibly advanced and the premise of this magical "positronic brain" technology was incredible.

    Then in the 90s we kind of chuckled at the super advanced android brain that couldn't use a contraction and lost to Deanna Troi at chess.

    But now in the 2020s after seeing all the amazing progress in AI and the equally amazing failures I think we come full circle and realize that Data is incredible and no, we are nowhere near creating anything like him even on a superficial level, to say nothing of consciousness.

    In fact I kind of like the fact that he gets caught in a Chinese Finger Trap and can't whistle. That seems right to me. Teaching an AI to actually whistle (not play a recording but actually manipulate human-like lips to make the sound) seems like something that no AI would be able to do today. Making an AI that could independently teach itself these skills? You're talking Skynet level stuff at this point.

    Skynet wouldn't have to be particularly complex to do what it did in The Terminator. It just needs calculation and targeting algorithms. It's basically a military hunt and kill program, with a side project of robotics advancement. The TV series hinted that maybe the machines had advanced in a strange direction, but as far as the films go I can't see any evidence that Skynet is even close to as advanced as Data. It's more like what Data does in Brothers to lock down the Enterprise: very efficient, but not particularly sapient. In fact that was Data at his least aware.

    What's cool about this episode is that it suggests that *maybe* a self-aware AI can be created by accident. Not on purpose, mind you, because we don't even know what that means, but due to a series of 'malfunctions' if you want to call it that, a holographic program could be imbued with just the right combination of programming and self-evolving needs. Data can learn, so in order to beat Data Moriarty would have to be able to learn; and so it's a rabbit hole of what the powerful Enterprise computer calculated would be required in order for Moriarty to actually be able to beat Data (which he did). And of course this brings us to the Enterprise computer itself, which I think has come up before in discussions. Maybe even the Enterprise computer has accidentally integrated certain tech that, if itself subject to further accidents, can produce a thing we don't understand. In that sense robotics becomes likened to genetics, where current theory supposes that complex life can sort of just accidentally come together through unplanned processes that end up verging toward complexity.

    But I'm with Trish on the basic issue of consciousness. Until we can even define what conscious is or means we're simply nowhere in asking whether we can create an AI that thinks like us. What are we like?

    Side question: Trish, what sort of theology did you study?

    What is consciousness? What is a "soul"? If I'm not mistaken, there is quite the lively debate going on in neuroscientific circles about how much free will humans actually have. One could certainly make the argument that we are incapable of understanding our own consciousness because we can never study our consciousness from the outside in an objective way. Data apparently has some super-evolved form of artificial brain, and it is immaterial if he has consciousness or not because we are incapable of deciding which it is. As long as he appears to have one (to our consciousness), that means that he has one.

    @Peter G.

    Catholic. (Is that the question you were asking?)

    My BA is from the University of Notre Dame. After that, I went to a small ministry school (again Catholic) for a Master of Divinity (the equivalent of seminary).

    @Booming

    It sounds as if you are making exactly the leap I spoke against above.

    "[I]t is immaterial if he has consciousness or not because we are incapable of deciding which it is." That I could go with, as another way of phrasing what I described as the real meaning of the Turing test: If we cannot know if ANYONE or ANYTHING is conscious except for ourselves, then, Turing would posit, we may as well treat the other "as if" they were, as long as they exhibit all the signs that we accept for everyone else.

    But that does not mean that this "as if" is necessarily true. It just means we cannot know for sure what the truth is in this matter, at least not purely from observation of the subject's behavior. The conclusion "That means he has a consciousness" is a jump too far, at least for me. I could perhaps follow you as far as "That means we may as well treat him as if he has consciousness," but not farther.

    As it happens, in current technology and in the foreseeable trajectory of the development of current technology, we have more information than just observing the AI's behavior. Our consciousness is actually capable of looking at it from the outside, in a way we cannot for our own. We (at least those of us who have coded them) actually do know how it works, and we know it is a simulation.

    That's why I say a real-life Commander Data would have to spring from something very different from just ongoing progress in current technology.

    @ Trish,

    Thanks, yeah, that was the question I was asking.

    On the subject of artificial consciousness, Frank Herbert wrote a book called Destination: Void in which he not only asks what would happen if we developed a real conscious AI (hint: it's not what we expect), but alongside that asks a question not often asked: how do we even know whether we are conscious? We feel like we are and so assume we are, which ironically is not too far off from your suggestion that we may as well treat Data as if he's conscious. The same may well go for us! Herbert suggests, additionally, that consciousness may have degrees, sort of like the difference between being asleep, half asleep, or fully awake, and part of the book's theology/consciousness question is about what it would imply to be fully conscious, if such a thing is even possible or definable.

    The issue of free will becomes very much a theological one (or at minimum a metaphysical axiom) when one considers that our inability to properly define consciousness calls into question why exactly we're so sure we're different from Data. Maybe we are, but what is the categorical reason why? It's hard to assess him before we can even assess us. The problem appears to be even worse than how Picard poses it to Maddox in Measure of a Man. Not only can Maddox not prove Data isn't sentient, but he also can't prove we are, in the sense he means (i.e. more than the sum of our parts). Picard's argument ends up being far more powerful than just calling into question what Data is or isn't; it more or less torpedoes any sort of casual claim about the sort of sentient life Federation law is supposed to protect in the first place. It deconstructs completely incoherent or unclear definitions of consciousness and sentience and leaves us basically knowing nothing about anything on the subject other than what we simply assert as an axiom.

    @Trish

    I agree with everything you said, especially about the Turing Test. Indeed, I have conceptualised a further test that MIGHT be applied to determine if a being is alive or is an AI. All living things have an urge, an instinct, to survive and will fight to avoid extinction. A suitable test might comprise a proposal to end the being’s existence (death in other words) - if a creature is alive, it will do ANYTHING to avoid death, whereas Data, for example, would quite happily switch himself off, permanently if necessary.
    However, I agree with your point that if something “seems” to be alive and intelligent, then we as humans might as well interact “as if” they were really alive, as the crew does with Data.
    As for the advancement of future technology, I agree we can't know - digital technology would have seemed utterly fantastic a few centuries ago - but something like Data would require neural net technology that we are only just beginning to conceive might be possible. After all, there are many more potential connections in one human brain than there are atoms in the universe! So it would be quite a feat. But the deeper question is: even if such a creation were possible, would it be conscious?

    @Booming

    I agree that if something “seems” to be conscious then we might as well behave as if it were, at least up to the point where it stops seeming so.
    I notice you raise the “free will” argument. Have you read about the experiments by Prof Libet? He seems to have rejected the notion of free will based on observing that his subjects pressed a button fractionally before their conscious minds had registered a choice being made; ergo, they had no free choice in the matter. But nowhere in his thesis does he seem to have acknowledged that different parts of the mind operate at different speeds. For example, our will may well come into play before our everyday “monkey” mind becomes aware of a choice made by a deeper level of the mind, so for me he hasn’t disproved free will. (Though I’m not a neurologist!)

    @Peter G

    Yes, the issue of defining exactly what consciousness is, is the crux of the matter. (I personally would say it’s a metaphysical or philosophical question rather than a religious one, but that’s only because I don’t believe that there is a God.) I agree that we can’t prove that we are conscious, but on an instinctive level we seem to know that we are. Denying it leads us into the rather grey areas of solipsism and existentialism - interesting conceptually, but dangerous to live by, perhaps? Dr Jonathan Miller - an atheist, but not militantly so like Dawkins - accepted that consciousness was “the great unexplained”.

    Bottom line? I like the character of Data but I hold the issue of his “consciousness “ at arms length!

    @Trish
    Sure, in the end we do not know, but my point is that we do not know what consciousness is (the hard problem of consciousness). Apart from yourself, you cannot be sure that anybody else has consciousness. I don't want to lead you all down a post-positivist rabbit hole, but in the case of consciousness it is the only way to see the problem. So as to have any kind of measure for consciousness, I fall back on empiricism: if something says it has consciousness and exhibits all the signs, then it has consciousness. If you have a method to falsify that hypothesis, give the Nobel Prize committee a call. ;)

    Peter actually raises a good point. If we, during a coma or while being asleep, are unconscious, does that mean that during those periods we aren't conscious beings? Would it not be accurate to call us partially conscious beings? In a sense Data is never unconscious; ergo, he could actually be called more conscious.

    @Tidd
    " if a creature is alive, it will do ANYTHING to avoid death,"
    There are beings, including Humans, that are willing to die for something else.

    I have heard of these experiments and debates, though I haven't looked into it for a while. To be honest I always maintain that if somebody tells me she/he knows that we have no free will then I can safely ignore that person. :)

    @ Booming,

    "If we during a coma or while being asleep are unconscious does that mean that during those periods we aren't conscious beings"

    Actually I wasn't talking about how conscious we are on average, between sleeping and waking. I meant that akin to how our minds are when asleep versus awake, in our waking state we may be the equivalent of asleep or half-awake compared to what consciousness actually is (or at least Frank Herbert asked that question).

    An interesting discussion all around!

    I think of both consciousness and free will as having degrees, not an absolutely binary "you have it or you don't" quality for either. In a sense, the legal concept of an insanity defense is an assumption of a high degree of free will as the norm based on which a person merits punishment for their crimes, and an acknowledgment that in some cases, free will is impaired to such a degree that a person cannot be justly punished for their illegal acts. (In a religious context, this reasoning takes the form of distinguishing between an act being objectively wrong and being subjectively culpable. A person must willingly choose to do what they know is wrong to "sin.")

    I would posit that the only fully conscious Being with fully free will is God. The rest of us have only limited degrees of those qualities.

    Even if one does not believe that God exists, it is still possible to view the ideal of total consciousness, like total freedom of will, as sort of an asymptotic concept, that is, one that at best a being can approach, but never quite attain.

    Tangentially, this reminds me of one of my pet peeves from another episode, in Phil Farrand's term a "nit" I always pick whenever I watch "Where No One Has Gone Before": Kosinski at one point uses a word that is clearly, from context, meant to be "asymptotically" which the actor pronounces (and the director apparently didn't correct) "asymptomatically," a completely different word that doesn't fit the context at all. Maybe they were deliberately trying to make Kosinski sound ignorant, but it gives the impression that the actor and director were ignorant.

    Kosinski knows what he is saying. He did the Kessel Run in 12 Parsecs. ;)

    @Trish

    Yes, I'm happy to accept the notion of degree in this, even without believing in God. For example, though we have free will, do we exercise it all the time? I'd argue that no, we don't: we are intrinsically often lazy or unadventurous, or we seek security and comfort. Therefore we spend an awful lot of our time acting from learned habit, simply because it's 'safe' or 'easy' or 'doesn't require effort' or 'brings pleasure, as it has done before'. The same with consciousness: it can vary from very low level (e.g. in a coma) to very high (certain Brahmins, or scientists/artists at the peak of their game, etc.).

    @Peter G

    The same as I say to Trish: yes, the levels of consciousness seem to vary considerably, not only within one person, but between different people as to their "average level".

    @Booming

    Yes, there are people willing to sacrifice their own lives for the sake of others, but it's nearly always so that others might live. "The needs of the many..."

    I think this episode is one of the season two standouts that showed audiences the series was getting better. The movie-like cinematography is beautiful, shot in a way that sets it apart from other episodes, including the season 6 follow-up Ship in a Bottle.

    Great acting with touches of humor, good story and nice Sherlock Holmes references throughout. Love how Moriarty became something greater than the character he once was and after shedding his evil ways, simply decides to relinquish control of the ship. It certainly is a calmer episode where the day was won through conversation instead of phaser blasts. This one just grew on me. Wish they could have kept the interesting lighting for some of the sets in later seasons.

    This was a 4-star episode for me. Intriguing story, no idea how it would end, great sets, lighting, and music. This episode is where the crew of the Enterprise really starts to act more comfortable with each other, including the friendly challenge between the doctor and Data/Geordi. The Data/Geordi friendship is taking off. Captain Picard is showing more kindness. And Daniel Davis as Moriarty was superb! That character stayed with me as both the holodeck shut down and the episode ended.

    Interesting fact about Marina Sirtis:

    She actually was in an episode of Sherlock Holmes before joining TNG.

    https://en.wikipedia.org/wiki/Sherlock_Holmes_(1984_TV_series)

    "She actually was in an episode of Sherlock Holmes before joining TNG."

    Yeah, I was watching that episode with my wife a few months back, and I pointed at the screen and said, "Um... could that be Marina Sirtis?" She looked kind of different without the makeup and costume, much more the Greek woman than you'd expect (which she is). Actually she kind of looks really different even in S1 compared to the rest of TNG. So imagine that difference and multiply it by 10... you might just miss who she is if you weren't paying attention.

    I like the use of Pulaski here, since rather than having a go at Data for no reason as in the prior episode, here she functions as a goad that actually gets him to do something he wouldn't have done otherwise (and thus trigger the main thrust of the narrative).

    A couple of questions I had after watching. First, why couldn't they have simply beamed Pulaski off the holodeck? Second, and nothing to do with the main plot really, why did they have to stop the Enterprise and wait at that particular location for the other ship to arrive? Why did they have to stop at all instead of just meeting each other? I did like the episode, but they really needed a physical off switch for the holodeck somewhere. Anything that malfunctions that often should never have made it onto a starship.

    It's a recurring problem in holodeck episodes that the transporter just mysteriously isn't an option. It's even more egregious in "A Fistful of Datas", where the communicators stop working for no obvious reason, and surely SOMEONE would notice two missing senior officers, yet for some reason nobody does.

    My head canon on this is that the comm badges auto interface with the ship's internal communication system and wherever you are on the ship, you are automatically hooked into the nearest node. So if you are on the holodeck you have to use the holodeck's communication node, even if you are using your comm badge. It is sort of like using your cell phone on wifi only and so you are at the mercy of whatever wifi network you are in proximity to.

    Of course you can always disconnect from wifi on your phone and just use LTE or whatnot. So it would be sensible if the comm badges could do the same. I mean we know they are independently powerful enough to contact a ship in frigging orbit and maybe further.

    But ya, it's all pretty stupid. They can beam a guy up from inside a cave 5 miles below a planet's crust but not from a malfunctioning holodeck. And the comm badges don't work.

    I should add mind you that in this particular episode it isn't a random malfunction in the holodeck but the Moriarty character who has access to the controls. Presumably he set up a dampening field or something.

    One thing about the "terrible British accents" that people complain about - Geordi and Data are going in there to have fun. Geordi, certainly, barely cares about the required accent at all and probably wouldn't know it. Data of course could reproduce it perfectly, but although he doesn't really understand fun, I think he does know by now that reproducing something perfectly is not necessarily fun. So he overdoes it, assuming that will work. He also might understand that by producing a perfect accent, when Geordi is unable to, he would be showing Geordi up and possibly spoiling the fun for him.

    TL;DR: They don't really care about the accents. Nobody is going to be criticizing them. (Except here.)

    Re: Brent Spiner's voice. He's doing The Firesign Theatre's Holmes parody voice to a tee.

    Now that I watch it again I realize that this episode does something quite strange: it starts off with nostalgia about the past, prior to starships and digital computer systems, jumps to an adventure in the ultimate digital system, which then turns into a challenge by Pulaski regarding the limits of Data's digital system, and finally moves into questions about the consciousness of a digital person. What's strange is that the questions the episode seems to be asking aren't ever actually asked: why do we romanticize the way things were (sailing ship model, Holmes setting) while also clinging to the newest technologies in everything we do? And maybe it's also asking whether there's even room for romance anymore once everything ever can be catalogued and instantly recalled by computers. Does Data's encyclopedic knowledge of the Holmes stories mean there's no room for him to 'enjoy them'? That he can merely repeat them by rote, as Pulaski suggested?

    The episode's main thrust seems to be the challenge of whether Data can be creative, and perhaps by corollary whether there is any point in engaging with an AI as if it were a person. No creativity would seem to imply that Data is just a sophisticated toaster. But as soon as Pulaski is kidnapped this examination is halted and they are dealing with Moriarty's situation. Although the discussion about Data ends, the argument continues: I never thought before to take note that Pulaski speaks to Moriarty with respect, normally, as if he were a real person. And it doesn't look as though she's just doing so to humor Geordi and Data. I think it's his obvious ability to reason, to speak as a real human would, and even his notions of feeding her and being a charming host, that lull her automatically into taking him seriously. Whereas with Data, because he looks and acts differently from normal people, she requires special proof that she should speak to him as to a normal person. So her actions seem to answer her own challenge: she definitely does consider an AI as able to be a person, so long as it can pass a sort of behavioral Turing test. She may not be conscious of it, but she does behave this way. And putting aside her own bias and speaking more broadly, if Moriarty can be even conceivably thought of as creative and sentient, then obviously so can Data, who not only operates using a digital neural network but also has a body of his own. And I really don't think the episode is trying to make us see Moriarty as just a trick or a complex holodeck character. I'm pretty sure we are meant to understand that Geordi's command really did something new, something that was not as of yet understandable, and that had to be taken seriously. In light of how dangerous (literally, and morally) it could be to be able to create sentient beings with a simple computer command, I can see why the episode would start off with nostalgia for simpler times when such things could never happen.

    To people complaining about the teaser: Why does EVERYTHING have to be relevant to the plot? There is such a thing as character building, world building and pacing, something SORELY lacking not only in TV but in movies as well.

    It can be said that this teaser and the Worf/Riker holodeck teaser from WSHL ARE relevant to the plot: The Worf bit sets up Worf's more primal/Klingon reaction to things, which resurfaces later in the episode. Geordi and his fascinations come back into play rather quickly - he's there for the fantasy and imagination, NOT to "skip to the ending as quick as possible" (which reminds me of today's gamers who skip all cutscenes in RPGs and then end up lost and frustrated later on).
