Comment Stream

Jonesy
Mon, Jun 27, 2016, 12:12am (UTC -5)
Re: DS9 S7: The Changing Face of Evil

Comparing the picture of Starfleet HQ blown to bits with the map of the SF Bay area overlaid with casualties Weyoun and Damar are viewing moments later, I think I found a mistake.
If Starfleet HQ is located in close proximity to the north side of the Golden Gate Bridge, as the image suggests, it would be in Sausalito. But according to the Dominion, the vast, vast majority of the damage was in Oakland and Alameda, i.e., the East Bay. There are zero casualties on the map anywhere near the presumed location.
The Voyage Home seems to corroborate the North Bay location - after all, Chekov has apparently never heard of Alameda - so what gives?
William B
Sun, Jun 26, 2016, 11:33pm (UTC -5)
Re: TNG S2: The Measure of a Man

I agree with Peter G.'s last comment. I'd tend to view Data and the EMH as probably sentient, rather than probably not sentient, because I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process. Either way, though, the evidence mostly suggests to me that if one is sentient, they probably both are. If Data's brain is different from the code which runs the EMH, this is mostly not emphasized by TNG/Voyager. (I also agree that the Doctor's seeming to be less limited than Data is maybe an artifact of the Voyager writers not putting as much effort in. It does to some degree support the idea put forward by Lore that Data was deliberately built with limitations so as to prevent him from upsetting the locals too much -- the Doctor veers more quickly and readily toward narcissism than Data does, perhaps because Data is acutely aware of his limitations.)

If I had to describe an overall arc in TNG and Voyager, it would be that TNG introduces the possibility, via Moriarty and "Emergence," that sentience can be developed within the computer, but generally Moriarty is treated as a fluke which is too difficult to deal with. I actually think that they don't exactly conclude Moriarty isn't a life form so much as try to respect his one apparent wish -- to be able to get off the holodeck -- and then put that on the backburner, presumably handing the problem off to the Federation's best experts; why the holo-emitter takes until the 29th century to be built I do not know, but that's the way it is. In "Ship in a Bottle," they let Moriarty live out his life in a simulation as a somewhat generous solution to the fact that he was holding their ship hostage to achieve his, again, apparently impossible request (to leave the holodeck). In any case, in Voyager the EMH is slowly granted rights within the crew, and finally the crew mostly seem to view him as a person, and by season seven the question becomes whether the rights granted to the EMH within the specifics of Voyager's isolated system should be expanded outward. The bias toward hardware over software -- Data and the Exocomps vs. holographic beings -- seems to me to be something that the TNG-Voyager narrative implies is not really based on a fundamental difference between Data and the EMH, but on a difference in the biases of the physical life forms, who can more readily accept another *corporeal* conscious being, even if mechanical, as sentient, rather than a projection whose "consciousness" is located in the computer. I think that narrative implies that the mighty Federation still has a long way to go before coming to a fair ethics of artificial beings, which to me seems fine -- in human history, rights were expanded in a scattershot manner in many cases.

I do think too that the relative ease of Data's being granted rights has a lot to do with what this episode is partly about -- precedent. By indicating the dangers of denying Data rights if he is actually sentient, Picard keeps the Federation from becoming complicit in the exploitation of an entire sentient race of Datas; if Data is not sentient, the Federation merely loses out on an army of high-quality androids. Either way, once the decision is made, while it may be tempting to overturn the decision if Data becomes dangerous (and it is implied in The Offspring and Clues that the narrowness of Louvois' ruling means that Data might be denied the right to "procreate" or might be disassembled if he refuses orders), if he doesn't, it is in the Federation's interests to maintain their own ethical narrative. Because a whole slew of EMHs and other presumably similarly advanced holographic programs were developed by the Federation, granting that they are sentient "now" (i.e., by the time Voyager is on) would mean admitting to complicity in mass exploitation which cannot be undone, only stopped. In miniature, we see this in Latent Image, where Janeway et al.'s complicity in wiping the Doctor's memory is part of what makes it especially difficult for them to change their minds about the Doctor's rights.

I do think that while Voyager seems to be pushing the idea that sentience is possible in holograms, it still does not generally discuss whether the ship's computer *itself* could be sentient life, which is a big question. I guess Alice suggests that a ship's computer could be alive if it houses a demon ghost thing. It might also be that a ship's computer is so far from anything classified as a life form that it is unknown how to evaluate what its "wants" would be, or what an ethical treatment of it would even look like. The same would probably apply to some discovered forms of "new life" which would not have "wants and needs" in a way recognizable to humanoids, though I can't think of such examples in Trek.
Peter G.
Sun, Jun 26, 2016, 10:36pm (UTC -5)
Re: TNG S2: The Measure of a Man

@ Andy's Friend

I'm just not sure how you come to the definitive conclusion that Data's 'consciousness' has emergent properties just like a Human consciousness does, and that this is due to his physical brain. There are several objections to stating this as a fact, although I grant it is certainly plausible. I'll just number the objections, not having a better method.

1) I don't see how you can define Data's consciousness as having similar properties to that of Humans since you at no point state what the qualities are of Human consciousness. What does it even mean to say they are conscious, in your paradigm, other than to say they say they feel they are conscious? Is there a specific and definitive physical characteristic that can be pinpointed? Because if, as you say, that quality is some "ineffable" something then we're left in the dust in terms of demonstrating what does or does not possess this quality. We could discuss whether a simulation creates the appearance of it, but that aesthetic comparison is the best we can do. As far as I can tell this is William B's main argument in favor of the episode having contributed something significant to the discussion.

2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference? If, as you yourself say, we don't know anything about what a positronic net is, then how can you say it's fundamentally different from the computer's design? At best we can refer to Asimov when discussing this, and the only thing we know from him is that for some reason a computer that works with positrons instead of electrons is more efficient. It is, however, still a basic electrical system, using a positive charge instead of a negative one, and requiring magnetic fields to prevent the positrons from annihilating. Maybe the field causes the circuits to act as superconductors? Who knows. Asimov never said anything about a positronic brain using fundamentally unique engineering mechanisms as far as I recall from his books. This leads to objection 3, which is a corollary of 2.

3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean? I could guess what it means, but you seem to state it as a fact, which makes me wonder what facts you're basing your statement on. It's certainly possible you're right, but how can you *know* you're right? Frank Herbert himself was convinced that Humans employ non-linear processing capabilities not replicable by machine binary processing. Then again, he couldn't have known there was such a thing as quantum computing. And I'm not even convinced that quantum computing is what you might mean when you (and he) discuss "non-linear" processing, which I suppose we can also call non-binary computing. Even quantum computing, to the extent that I understand it, seems to be linear processing done massively in parallel so as to exponentially increase processing power. I don't know that the processing is necessarily done in a fundamentally different manner, however. It's not binary, but it still appears to be linear. If there is a kind of processing different even from this (that maybe we possess) I don't see that we can even imagine what it is yet, much less ascribe it specifically to Data's circuitry.

4) Since it's never revealed that Data's programming and memory can't be transferred to another identical body, I don't see how you can be so sure that the EMH's easy movement from system to system makes for a basic difference between him and Data. If the Doctor is contained in more primitive computers than Data is, then obviously he'd be easier to transfer around, but the distinction then would only be that Data's technology isn't well understood during TNG and thus can't be recreated yet. But this engineering obstacle is only temporary, and once Datas could be constructed at will I don't see how you could be sure his personality couldn't be transferred just as easily as that of the EMH. Once starships have positronic main computers there would seemingly be no difference between them at all, and likewise once android technology is more advanced there seems to be no good reason why the EMH couldn't be transferred into a positronic android body (if someone were to have the bad taste to want to do this :p ). The differences you state in this sense seem to me to be superficial and not really related to any fundamental difference in consciousness between the two of them. They're each contained in a computer system, one being more advanced than the other, but otherwise they are both programs run inside a mechanical housing. Data, like the Doctor, is, quite literally, data. His naming is hardly a mere reference to the fact that he processes data quickly, I think, since the ship's computer does that too. Now that I think of it, his name somewhat reminds me of Odo's, where each is meant to describe what humanoids thought of their nature when they found them; one as an unknown specimen, and one as data.

5) If anything Data is even more constrained than the Doctor in terms of mannerism and behavior. He cannot use contractions, cannot emulate even basic Human behaviors and mannerisms, and cannot make errors intentionally. By contrast, the Doctor seems to learn much more quickly and is more adaptable to the needs of the crew in their situation. Both he and Data adopt hobbies, but while Data's are merely imitative in their implementation, the Doctor appears to somehow come up with his own schtick and tastes that are not obvious references to famous singers (as Data merely copies a violinist of choice each concert) or to specific instances in his data core. He really does, at the very least, composite data to produce a unique result, as compared to Data, who can't determine a way to do this that is not completely arbitrary. I'm not trying to make a case for the Doctor's sentience, but if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data; however, what's done is done and we have to accept what was presented as a given. I would say that, at the least, if Data is sentient then so is the Doctor, although my preference would be to suggest that neither is. But William B's point is valid, that a hunch that this is so shouldn't be confused with the certainty needed to withhold basic rights from Data, which is all the court case was deciding. Then again, I see another weakness with the episode as being that the same argument could be turned on its head, with the position being that by default no 'structure' is automatically assigned 'sentient rights' until it is proven sentient. The Federation doesn't, after all, give rights to rocks and shuttlecraft *just in case* they're sentient. In fact, however, their policy (at least as enacted by Picard) appears to be closer to granting rights to any entity demonstrating intelligence at all, whether that's Exocomps, energy beings, tritium-consuming entities, the Calamarain, or any other species that demonstrates the use of logic and perhaps the ability to communicate. Picard's criterion never seems to be sentience, but rather intelligence, and therefore we are never offered an explanation of how this applies to artificial life forms in particular, since some of them (like Exocomps) are treated as having rights based on having "wants", while others, like Moriarty, are not discussed as having innate rights, despite Picard's generous attempts anyhow to preserve his existence. Basically we have no consistent Star Trek position on artificial life forms/programs. Heck, we are even given an inkling that Wesley's nanites could develop an intelligence of some kind, even though they clearly have no central processing net such as Data's, and even if they did it wouldn't be as sophisticated. So why is an Exocomp to be afforded rights, but not the Doctor, when Voyager's computer is likely vastly more advanced than the Exocomps were? My main point here is that there is no conclusive evidence given by Star Trek that broadly points to Data as having some unique characteristic that these other technological/artificial beings didn't have, or contrariwise, that if they have it the Doctor specifically doesn't. We just don't have enough information to make a determination about this.

I guess that's enough for now, since I'm even beginning to forget if there are other points to respond to and I haven't the energy right now to reread the thread.
Nolan
Sun, Jun 26, 2016, 9:54pm (UTC -5)
Re: Star Trek II: The Wrath of Khan

Everyone and their grandma always points to the "Khaaaan" yell as THE definitive hammy Shatner/Kirk delivery, but no one seems to get that he only did that in response to Khan telling him that he'll go back to the dead-in-space Enterprise and blow it up, leaving him stranded. Which Kirk KNOWS isn't going to happen, 'cause the Enterprise is fine and will be right around the corner. The "KHAAAAAAN!" was purely for Khan's benefit, selling the crew's ruse to him.
Ivanov
Sun, Jun 26, 2016, 9:36pm (UTC -5)
Re: TNG S3: The High Ground

For those wondering why the Rutians wouldn't grant the Ansatans independence, I have a theory.

Considering that they already have a long-standing trade agreement with the Federation, perhaps the Rutians were planning on eventually applying for membership? And the Federation has that arbitrary rule where a planet needs to have a single united government in order to be accepted. It's obvious the Rutians and Ansatans wouldn't get along, so the main government's only option was to hold on to the other continent no matter what.

The last part with the kid was cheesy, and the music certainly doesn't help it.

I like this episode, even if Finn conveniently knew about George Washington and the United States. (Isn't Dr. Crusher Scottish?) 3 Stars
Skeptical
Sun, Jun 26, 2016, 7:24pm (UTC -5)
Re: Star Trek II: The Wrath of Khan

With all these comments, only Jammer has mentioned the reason I think this movie is so great: Ricardo Montalban. His performance here is absolutely fantastic, by far the best performance of any guest actor on Star Trek ever. Every scene he's in, his presence is so commanding you can't help but focus on him. Khan is simply larger than life, not just a person but a villain, a force, an ominous presence that simply cannot be ignored. He makes even Kirk look smaller in comparison.

I heard an interview with Montalban once talking about this role in the movie. He said that the director allowed him to go almost, but not quite, over the top in his performance. And you can see that with every line he says. He never quite reaches the level of ham (unlike Shatner with his infamous "Khaaannn!" shout), so he always seems sincere and always feels like a real person. Yet, despite that, his presence is calculated, showman-like, theatrical.

I mean, consider this line: "He tasks me... he tasks me and I shall have him. I'll chase him round the moons of Nibia and around Antares' Maelstrom and around Perdition's flames before I give him up." That line could have been a huge dud; it sounds kinda cheesy when typed out like this (and yes, I know it's a classical reference). Yet, when Khan says it, it is chilling, it is threatening, it is brilliant.

A part of me is sad that Khan wasn't a TNG-era villain. I'd have paid good money for a 2-hour movie of nothing more than Patrick Stewart and Ricardo Montalban quoting Moby Dick at each other.

But I digress, back to Khan. Like I said, Montalban played him as a theatrical, larger than life villain. Not only was it a brilliant performance that was a joy to watch, but it really makes sense for the character to be played that way. Despite the title, wrath is not the deadly sin that Khan is guilty of, it's pride. It's sheer arrogance in his knowledge that he was genetically superior to the rest of humanity. He is stronger, he is smarter. And in his mind, that makes him flat out better, at everything.

So he spends this movie quoting Moby Dick, putting himself in the position of Ahab. Now, Khan isn't an idiot, he knows Ahab is not supposed to be a hero, not supposed to be someone to identify with. Khan knows that the point of the book was that Ahab's quest for revenge resulted in poor decisions that led to his own doom. So why is he identifying with Ahab here? Why is he putting himself in the position of being the idiot making horrible decisions that will lead to his own death? Simple, he thinks he's better than Ahab. And part of that means he can succeed in his insane vengeance against Kirk where Ahab fails.

This whole plot is basically an adrenaline high for Khan, a way of showing off. Vengeance may be bad for mere mortals, but not for the brilliance that is Khan. Khan can make irrational decisions and come away with victory. Khan can gloat over the hero without it backfiring on him. Khan can make all the same choices that Ahab made, and come out smelling like a rose. Because he is superior to Ahab. He is superior to Kirk. He is so freaking smart that he can outsmart Kirk even when he's being stupid.

That's why he's quoting Moby Dick. That's why he's being theatrical and larger than life. He's showing off. To Kirk, to his crew, to himself. He doesn't just want to rule the galaxy or even defeat Kirk. He wants to rub Kirk's nose in it. He wants to play life on the difficult setting, putting artificial difficulties into his scheme just so that he can show how easily he can overcome them. And he's enjoying every moment of it.

If he had simply listened to Joachim at every step, he would be free to terrorize the galaxy with his ultimate weapon. But he couldn't resist showing off. He couldn't resist toying with and humiliating Kirk. And that was why, in the end, he failed. Because he failed to recognize his own limitations. He failed to check his pride. And it's a darn good thing he failed to do that, because it made this movie so much better. It's why there's no villain in the Star Trek pantheon that can live up to Montalban's Khan. Not Kruge, not Chang, not the Borg Queen (though they may have their style points), and certainly not Sybok or clonePicard or CumberKhan.

Above all else, a villain should be interesting to watch. And you simply cannot stop watching Khan, from the moment he slowly peels off his desert protective clothing to his last dying gasps.
DLPB
Sun, Jun 26, 2016, 4:11pm (UTC -5)
Re: TNG S3: The High Ground

@politickz

If you are going to lament Israel's existence, be consistent and realize that Muslims have conquered numerous countries that were once Christian, secular, or of another religion. They aren't apologizing - and the leftists don't take issue with that.
DLPB
Sun, Jun 26, 2016, 3:59pm (UTC -5)
Re: TNG S3: The Defector

A good episode. Entertaining and unpredictable. I liked it. It was nice to see both Picard and the defector outmaneuvered. Goes to show you don't need to hit the viewer over the head with moralizing, special effects, or technobabble if the story is up to scratch.
Poindexter G
Sun, Jun 26, 2016, 1:15pm (UTC -5)
Re: Star Wars: Episode III — Revenge of the Sith

I think Vader's "Nooooooo!" moment suffers more from poor execution than from being a bad idea (I think that is true of the prequel trilogy as a whole). If you were to read that whole scene to me from the script, I'd probably love it. But something about how it was done on screen is... off. I'm not even sure what it is; it's nothing I can put my finger on. It just isn't quite right.
Shaen
Sun, Jun 26, 2016, 11:32am (UTC -5)
Re: VOY S7: Flesh and Blood

My biggest gripe about this episode is how they reused the prop for the artificial life form from "Think Tank" as the photonic field generator.
Suicide Q
Sun, Jun 26, 2016, 1:56am (UTC -5)
Re: TNG S5: The Inner Light

I saw it a few times when younger and yes I cried. Now I find it very BORING.
Dougie
Sun, Jun 26, 2016, 1:22am (UTC -5)
Re: TOS S1: Arena

"Can you manufacture some sort of rudimentary gun?!"

Considering a great scene from Galaxy Quest owes itself to this episode, I think it deserves a higher rating. Agree with the others this was in my top 3 as a youngster.
Ivanov
Sun, Jun 26, 2016, 12:15am (UTC -5)
Re: ENT S1: Two Days and Two Nights

After watching this episode, the only memorable plot points for me are that Reed and Catfish Tucker get robbed by transvestites and that Hoshi gets some action. A completely forgettable episode for me.
Ivanov
Sun, Jun 26, 2016, 12:07am (UTC -5)
Re: ENT S2: Stigma

Wow, just wow. This one was even less subtle than TNG's "The Outcast". I mean, they're literally referred to as the minority, "melders" who are just born that way, and the "normal" Vulcans irrationally think that they want to trick others into melding with them.

Enterprise continues to make me hate whoever it is who writes Vulcans, except for a select few guest stars. I can't even begin to say how stupid the final conversation is between Archer and T'Pol.

The innuendos between Phlox's wife and Trip were juvenile.
1 star, only because I at least think it's funny how obvious they are with their AIDS/homosexuality message. There's a reason I've only seen 7 episodes from the first 2 seasons of this show.

Travis finally gets some dialogue, even if it's for something stupid.
William B
Sat, Jun 25, 2016, 10:48pm (UTC -5)
Re: TNG S2: The Measure of a Man

The second "Heroes and Demons" comment you quoted was much more focused on nonlinearity as opposed to the physical enclosure of the brain. So I didn't address that in my comment, which I had started writing before you posted your second comment :) I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight. If it was not, then there is less of a problem; AI which would give the appearance of consciousness would not come up. If it was, then Picard's argument would still largely stand, and then either nonlinearity would need to be eliminated as a requirement for probable sentience or a different argument than the one Picard offers would be required -- unless there is something else in the episode that you think would preclude a more linear computer from gaining some rights in this episode if it displayed the same external traits as Data.
William B
Sat, Jun 25, 2016, 10:15pm (UTC -5)
Re: TNG S2: The Measure of a Man

@Andy's Friend:

First of all, I was thinking about your post at times when I was writing the above.

I do think that the meaning of Data's positronic net has some of the connotations you indicate. I do think that there is a very important distinction between Data and the Doctor on this level. And I agree quite strongly that it is important that it is *only* members of Data's "family" who can manipulate him with impunity. In fact it is not just Soong and Lore, but also Dr. Graves from "The Schizoid Man," though even there he is clearly identified as Data's "grandfather." Within this very episode, it is emphasized several times, as you point out, that it has been impossible for Maddox to recreate the feat of Data's creation, and that he hopes to recreate it by disassembling Data...which may in fact do nothing to further his abilities. Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone). Data can, of course, be manipulated by random events of the week, but we are talking about beings with unknown power, like Q giving him laughter, or the civilization in "Masks" taking him over, or events which affect Data the same way as biological beings (like the polywater intoxication in "The Naked Now" or the memory alteration thing in "Conundrum"). I think that a lot of the reason it is important that only members of Data's "family" can affect him to this degree (beyond beings who are shown to have the power to do broader manipulations on all sentient beings, like the Q) is that, as you say, in some respects Data is a child, and the series has (very cleverly) managed to split apart different aspects of Data's growth so that they take place across the series; by holding the keys to Data's growth, Graves, Soong, Tainer and Lore literalize the awesome power that family has to shape us in fundamental ways, which is important metaphorically.

And yet --

I think that the difference here between hardware and software is important, but it does not necessarily mean everything. I do not understand the claim, and the certainty behind it, that sentience requires a body to be "real" rather than a simulation. I presume that you do not require the body to be humanoid, but certainly you seem to indicate that the EMH cannot be sentient because he is simulated by the computer, rather than because he is located inside a body the way Data is. Your claim that anyone's being able to modify the EMH by changing code indicates he is not sentient, again, does not convince me. Certainly it means that the EMH is easier to change, is more dependent on external stimuli. However to some extent this represents simply a different perspective on what it means to be self-aware. The ease with which external pressures can change the EMH matches up with his mercurial, "borderline" personality, constantly being reshaped by environment rather than having a more solid core. Data, despite his ability to modify himself, has a more solid core of identity, but in the time between Data's and the EMH's creation (both in-universe and in our world) the ability of medicine to alter personality, through electrical stimulation or drugs, has increased. You are fond of quoting that we mostly need mirrors, and I think that in most respects, Data and the EMH are there to hold mirrors up to humanity; in both cases, we are looking at aspects of what it means to be human in a technological age, and without recourse to theology to define the "soul," but Data seems to me to reflect the question of what it means to be a person who exists in the physical world, whose selfhood is housed in and dependent on a physical organ, whereas the EMH is more a creature of the internet age, where people's sense of self is a little less consciously tied to their *bodies* than to their online avatars, and some people have become aware of how easily they can be shaped (manipulated?) by information.

What that means, in universe, for the relative sentience of the two is a complicated question. But it does not seem to me that sentience need necessarily be a matter of physical location. I believe that at the time of Data, within the universe, the best computers in Starfleet were just starting to catch up to the ability to simulate human-level intelligences; Minuet is still far ahead of the Enterprise computer *without* the Bynar enhancement, for example, and Moriarty appears sentient but remains something of a curiosity. The EMH is at the vanguard of a new movement, and, particularly when his code is allowed to grow in complexity, he comes to be indistinguishable in complexity from a human. If consciousness is an emergent property -- something which necessarily follows from a sufficiently complex system -- then what would make the EMH not conscious?

The relevance of "artificial intelligence" for Data is that, in this episode, intelligence is one of the three qualities that Maddox gives. Self-awareness is the other. Consciousness is the third, and this is what cannot be identified by an outside party. Perhaps some theory of consciousness could be finalized by the real 24th century which would aid in this, but within Trek it seems as if there is nothing to do but speculate. And so I believe that Picard would make the same argument for the EMH that he makes for Data, and, indeed, for other artificial entities which display the intelligence necessary to be able to more or less function at a humanoid level as well as self-awareness of being able to accurately communicate one's situation. That does not, of course, mean Picard would be correct. But while Picard absolutely argues that Data should have rights because he is conscious (or, rather, *might* be conscious, and the consequences are grave if he does not), he does not attempt to prove that Data is conscious, at all, but rather implicitly demonstrates, using himself as an example, the impossibility of proving consciousness. (This is not *quite* what he does; rather, he demonstrates that Maddox cannot easily prove Picard sentient, and Maddox manages to get out that Picard is self-aware, and we can presume that Maddox would believe Picard to be intelligent, so that still leaves consciousness.)

The question is then whether, according to your model, the episode fails to argue its case -- because no one does strenuously argue that Data's positronic brain is the true distinction between him and the Enterprise computer. This is one of the points that Peter G. argued earlier, that this is the only thing really at issue in this episode, and by extension the episode failed by not properly addressing it. I would argue that this is still not really true, because it is still valuable to come at the problem from the side Picard eventually takes: if Data meets all the requirements for sentience that Maddox can prove apply to Picard, it would be discriminatory and dangerous to deny him the same rights. The arguments presented still hold -- Riker's case that Data remains a machine, created by a man, designed to resemble a man, a collection of heuristic algorithms, etc., remains true, and Picard's case that humans are also machines, that Data has formed connections which could not have been directly anticipated in his original programming (though Soong could have made some sort of "keep a memento from the person you're intimate with" code, even still), and that Data's sentience matches his own remains true. Unless the form of Data's intelligence and heuristic algorithms really does exclude the EMH in some way, it still just seems to me that Picard has not really excluded the EMH in his argument. Since I don't think that Data's artificial brain vs. the complex code which runs the EMH is necessarily a fundamental difference, I don't think this gap in the episode is a problem. But even there I am not certain that the Enterprise computer would not fit some of Picard's argument, except that it is perhaps not as able to learn and adapt and thus would not meet the intelligence requirement.

It is possible that I simply like this episode enough that I'm more interested in defending it than in getting at the truth -- part of the problem with this sort of, ahem, "adversarial process." But the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments. Some of the reason that Picard makes the argument he does is that Data is similar enough to a human that analogies to incidents in human history can be, and are, made, in which differences which seem to be crucial but are later decided to have been superficial are used as a pretext for discrimination and slavery. If artificial consciousness (or artificial intelligence) never gets to the point where an android can be created of the level of sophistication of Data, then the episode still remains relevant as a metaphor for intra-human social issues, which in the end is mostly its primary purpose anyway.

It is worth noting that most forms of intelligence in TNG end up taking on physical form rather than existing in code. The emergent life form in "Emergence" actually gets formed in the holodeck as a physical entity, and that physical entity is then reproduced in the ship and then flown out. The Exocomps which Data goes to save are a little like little brains on wheels, and the connections that they form are, again, *physical* in nature -- they actually replicate the pathways. These, along with the positronic brains of Data, Lore, Lal and Julianna, suggest that TNG's take does largely match up with yours. However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show, Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery, but even if she is, it is specifically because of the Bynars' extremely advanced and mysterious computer tech, which is gone at the end of the episode. The holographic Leah actually is made to be self-aware, and here we might have to rely, following Picard's argument in the episode, on her not being sufficiently intelligent -- while she can carry on a conversation, she is not actually able to solve the problem (Geordi comes up with the solution), though this is hardly conclusive. Moriarty is the bizarre, exceptional case which "Elementary, Dear Data" regards with optimism and "Ship in a Bottle" with a more jaded pragmatism, living out his life in a simulation which can be as real as he is, whatever that is. Moriarty really is the most miraculous of these, and the one that most prefigures the Doctor, and really "E,DD" and "Ship in a Bottle" do not so much rule out that Moriarty could genuinely be conscious as supply a one-off solution to a one-off freak occurrence, giving him a happy life, if not exactly the full life he (apparently) wanted, in exchange for him not killing them.
Joseph B
Sat, Jun 25, 2016, 10:14pm (UTC -5)
Re: Star Trek Into Darkness

This movie was just released in the new 4K UHD Blu-ray format!!
According to reviewers, the IMAX-format (1.78:1) aspect ratio was used for many of the scenes, and they say it looks phenomenal! The package also includes the regular 2K Blu-ray releases of both Star Trek (2009) and "Into Darkness", as well as $8.00 off a movie ticket to see "Beyond".

So if Jammer was waiting on the 4K iteration of the movie before writing his review, he has now run out of excuses!
Andy's Friend
Sat, Jun 25, 2016, 9:11pm (UTC -5)
Re: TNG S2: The Measure of a Man

And:

Sat, Nov 1, 2014, 1:43pm (UTC -5)

"@William B, thanks for your reply, and especially for making me see things in my argumentation I hadn’t thought of myself! :D

@Robert, thanks for the emulator theory. I’m not quite sure that I agree with you: I believe you fail to see an important difference. But we’ll get there :)

This is of course one huge question to try and begin to consider. It is also a very obvious one; there’s a reason ”The Measure of a Man” was written as early as Season 2.

First of all, a note on the Turing test several of you have mentioned: I agree with William, and would be more categorical than him: it is utterly irrelevant for our purposes, most importantly because simulation really is just that. We must let Turing alone with the answers to the questions he asked, and search deeper for answers to our own questions.

Second, a clarification: I’m discussing this mostly as sci-fi, and not as hard science. But it is impossible for me to ignore at least some hard science. The problem with this is that while any Trek writer can simply write that the Doctor is sentient, and explain it with a minimum of ludicrous technobabble, it is quite simply inconsistent with what the majority of experts on artificial consciousness today believes. But...

...on the other hand, the positronic brain I use to argue Data’s artificial consciousness is, in itself, in a way also a piece of that same technobabble. None of us knows what it does; nobody does. However, it is not as implausible a piece of technobabble as say, warp speed, or transporter technology. It may very well be possible one day to create an artificial brain of sorts. And in fact, it is a fundamental piece in what most believe to be necessary to answer our question. I therefore would like to state these fundamental First and Second Sentences:

1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.

2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.

This is of the highest order of importance, because ― unlike what I now see Robert seems to believe ― I think the question of ”sentience”, or artificial consciousness, has little to do with hardware vs software as he puts it, as we shall see.

Now, I’d like to clarify nomenclature and definitions. Feel free to disagree or elaborate:

― By *brain* I mean any actual (human) or fictional (say, the Great Link) living species’ brain, or thought process mechanism(s) that perform functions analogous to those of the human brain, and allow for *non-linear*, cognitive processes. I’m perfectly prepared to accept intelligent, sentient, extra-terrestrial life that is non-humanoid; in fact, I would be very surprised if most were humanoid, and in that respect I am inclined to agree with Stanisław Lem in “Solaris”. I am perfectly ready to accept radially symmetric lifeforms, or asymmetric ones, with all the implications for their nervous systems, or even more bizarre and exotic lifeforms, such as the Great Link or Solaris’ ocean. I believe, though, that all self-conscious lifeforms must have some sort of brain, nervous system ― not necessarily a central nervous system ―, or analogues (some highly sophisticated nerve net, for instance) that in some manner or other allows for non-linear cognitive processes. Because non-linearity is what thought, and consciousness ― sentience as we talk about it ― is about.

― By *artificial brain* I don’t mean a brain that faithfully reproduces human neuroanatomy, or human thought processes. I merely mean any artificially created brain of sorts or brain analogue which somehow (insert your favourite Treknobabble here ― although serious, actual research is being conducted in this field) can produce *non-linear* cognitive processes.

― By *non-linear* cognitive process I mean not the strict sense of non-linear computational mechanics, but rather, that ineffable quality of abstract human thought process which is the opposite of *linear* computational process ― which in turn is the simple execution of strings of command, which necessarily must follow as specified by any specific program or subroutine. Non-linear processes are both the amazing strength and the weakness of the human mind. Unlike linear, slavish processes of computers and programs, the incredible wonder of the brain as defined is its capacity to perform that proverbial “quantum leap”, the inexplicable abstractions, non-linear processes that result in our thoughts, both conscious and subconscious ― and in fact, in us having a mind at all, unlike computers and computer programs. Sadly, it is also that non-linear, erratic and unpredictable nature of brain processes that can cause serious psychological disturbances, madness, or even loss of consciousness of self.

These differences are at the core of the issue, and here I would perhaps seem to agree with William, when he writes: ”I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way.”

I agree and again, I would go a bit further: I am actually willing to go so far as to admit the possibility of us one day being able to create an *artificial brain* which can reproduce, to a certain degree, some or many of those processes ― and perhaps even others our own human brains are incapable of. Likewise, I am prepared to admit the possibility of sentient life in other forms than carbon-based humanoid. It is as reflections of those possibilities that I see the Founders, and any number of other such outlandish species in Star Trek. And it is as such that I view Data’s positronic brain ― something that somehow allows him many of the same possibilities of conscious thought that we have, and perhaps even others, as yet undiscovered by him. Again, I would even go so far as not only to admit, but to suppose the very real possibility of two identical artificial brains ― say, two copies of Data’s positronic brain ― *not* behaving exactly alike in spite of being exact copies of each other, in a manner similar to (but of course not identical to) how identical twins’ brains will function differently. This analogy is far from perfect, but it is perhaps the easiest one to understand: thoughts and consciousness are more than the sum of the physical, biological brain and DNA. Artificial consciousness must also be more than the sum of an artificial brain and the programming. As such, I, like the researchers whose views I am merely reflecting, not only expect, but require an artificial brain that in this aspect truly equals the fundamental behaviour of sentient biological brains.

It is here, I believe, that Robert’s last thoughts and mine seem to diverge. Robert seems to believe that Data’s positronic brain is merely a highly advanced computer. If this is the case, I wholly agree with his final assessment.

If not, however, if Data’s brain is a true *artificial brain* as defined, what Robert proposes is wholly unacceptable.

IT IS STAR TREK’S FAULT THAT THE QUALITY OF DATA’S BRAIN IS NEVER FULLY ESTABLISHED.

Data’s brain is never established as a true artificial brain. But it is never established a merely highly advanced computer, either. It is once stated, for instance, that his brain is “rated at...” But this means nothing. This is a mere attempt at assessing certain faculties of his capacities, while wholly ignoring others that may as yet be underdeveloped or unexplored. It is in a way similar to saying of a chess player that he is rated at 2450 ELO: it tells you precious little about the man’s capacities outside the realm of chess.

We must therefore clearly understand that brains, including artificial brains, and computers are not the same and don’t work the same way. It is not a matter of orders of magnitude. It is not a matter of speed, or capacity. It is not even a matter of apples and oranges.

I therefore would like to state my Third, Fourth, Fifth and Sixth Sentences:

3. ― A BRAIN IS NOT A COMPUTER, and vice-versa.

4. ― AN ARTIFICIAL BRAIN IS NOT A COMPUTER, and vice versa.

5. ― A COMPUTER IS INCAPABLE OF THOUGHT PROCESSES. It merely executes programs.

6. ― A PROGRAM IS INCAPABLE OF THOUGHT PROCESSES. It merely consists of linear strings of commands.

Here is finally the matter explained: a computer is merely a toaster, a vacuum-cleaner, a dish-washer: it always performs the same routine function. That function is to run various computer programs. And the computer programs ― any program ― will always be incapable of exceeding themselves. And the combination computer+program is incapable of non-linear, abstract thought process.

To simplify: a computer program must *always* obey its programming, EVEN IN SUCH CASES WHEN THE PROGRAMMING FORCES RANDOMIZATION. In such cases, random events ― actions and decisions, for instance ― are still merely a part of that program, within the chosen parameters. They are therefore only apparently random, and only within the specifications of the program or subroutine. An extremely simplified example:

Imagine that in a given situation involving Subroutine 47 and an A/B Action choice, the programming requires that the EMH must:

― 35% of the cases: wait 3-6 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 10-15 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 10% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose RANDOMLY.
― 5% of the cases: wait 60-90 seconds as if considering Actions A and B, then choose RANDOMLY.
― 6% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 10-15 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 3-6 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47
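
To make the toy model concrete, here is a minimal Python sketch of the table above. Everything in it is hypothetical and invented purely for illustration: the table encoding, the choose_action function, and the success-probability dictionary are mine, not anything from the show or any real EMH specification.

```python
import random

# Hypothetical encoding of the A/B decision table above: each row is
# (probability of this branch, (min, max) feigned "thinking" time in
# seconds, which action to take according to Subroutine 47).
DECISION_TABLE = [
    (0.35, (3, 6),   "highest"),
    (0.20, (10, 15), "highest"),
    (0.20, (20, 60), "highest"),
    (0.10, (20, 60), "random"),
    (0.05, (60, 90), "random"),
    (0.06, (20, 60), "lowest"),
    (0.02, (10, 15), "lowest"),
    (0.02, (3, 6),   "lowest"),
]

def choose_action(success_prob):
    """success_prob maps each action to Subroutine 47's estimated
    chance of success, e.g. {"A": 0.8, "B": 0.3}.
    Returns (feigned_delay_seconds, chosen_action)."""
    weights = [row[0] for row in DECISION_TABLE]
    _, (lo, hi), strategy = random.choices(DECISION_TABLE, weights)[0]
    delay = random.uniform(lo, hi)  # simulated hesitation, not thought
    if strategy == "highest":
        action = max(success_prob, key=success_prob.get)
    elif strategy == "lowest":
        action = min(success_prob, key=success_prob.get)
    else:
        action = random.choice(list(success_prob))
    return delay, action
```

The point of the sketch is that every apparent moment of doubt is nothing but a weighted draw from a fixed table: strings of commands, start to finish.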

In a situation such as this simple one, any casual long-term observer would conclude that the faster the subject/EMH took a decision, the more likely it would be the right one ― something observed in most good professionals. Every now and then, however, even a quick decision might prove to be wrong. Conversely, sometimes the subject might exhibit extreme indecision, considering his options for up to a minute and a half, and then having even chances of success.

A professional observer with the proper means at his disposal, however, and enough time to run a few hundred tests, would notice that this subject never, ever spent 7-9 seconds, or 16-19 seconds before reaching a decision. A careful analysis of the response times given here would show results that could not possibly be random coincidences. If it were “Blade Runner”, Deckard would have no trouble whatsoever in identifying this subject as a Replicant.
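
That analysis is trivial to automate. A minimal sketch of the observer's test, reusing the hypothetical choose_action above; the dead zones (6-10 and 15-20 seconds) follow directly from the table, which never feigns a delay in those ranges:

```python
# The observer's test: collect a few hundred response times and count
# how many fall in intervals the table above can never produce.
samples = [choose_action({"A": 0.8, "B": 0.3})[0] for _ in range(500)]
for lo, hi in [(6, 10), (15, 20)]:
    hits = sum(lo < t < hi for t in samples)
    print(f"decisions taking {lo}-{hi} seconds: {hits}")  # always 0
```

A biological subject's response times would not show perfectly empty bands; the program betrays itself through its own specification.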

We may of course modify the random permutations of sequences, and adjust probabilities and the response times as we wish, in order to give the most accurate impression of realism for the specific subroutine: for a doctor, one would expect medical subroutines to be much faster and much more successful than poker and chess subroutines, for example. Someone with no experience in cooking might injure himself in the kitchen; but even professional chefs cut themselves rather often. And of course, no one is an expert at everything. A sufficiently sophisticated program would reflect all such variables, and perfectly mimic the chosen human behaviour. But again, the Turing test is irrelevant:

All this is varying degrees of randomization. None of this is conscious thought: it is merely strings of command to give the impression of doubt, hesitation, failure and success ― in short, to give the impression of humanity.

But it’s all fake. It’s all programmed responses to stimuli.

Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer incapable of any thought processes.

The only way to partially bypass this problem is to introduce FORCED CHAOS: TO RANDOMIZE RANDOMIZATION altogether.
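
As a purely hypothetical sketch of what randomizing the randomization could mean for the toy model above (again reusing the invented DECISION_TABLE from before):

```python
# Hypothetical "forced chaos" at the meta-level: rewrite the decision
# table itself, so that even the statistics of hesitation drift over
# time. The delay windows and strategies survive; the odds do not.
def randomize_randomization(table):
    raw = [random.random() for _ in table]
    total = sum(raw)
    return [(w / total, delays, strategy)
            for w, (_, delays, strategy) in zip(raw, table)]

DECISION_TABLE[:] = randomize_randomization(DECISION_TABLE)
```

Even here, though, the drift in behaviour is produced by yet another string of commands, which is precisely the problem discussed next.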

It is highly unlikely, however, that any computer program could long survive operating a true forced chaos generator at the macro-level, as opposed to forced chaos limited to certain, very specific subroutines. One could have forced chaos make the subject hesitate for forty minutes, or two hours, or forever and forfeit the game in a simple position in a game of chess, for example; but a forced chaos decision prompting the doctor to kill his patient with a scalpel would have more serious consequences. And many, many simpler forced chaos outcomes might also have very serious consequences. And what if the forced chaos generator had power over the autoprogramming function? How long would it take before catastrophic failure and cascading systems failure occurred?

And finally, but also importantly: even if the program could somehow survive operating a true forced chaos generator, thus operating extremely erratically ― which is to say, extremely dangerously, to itself and any systems and people that might depend on it ―, it would still merely be obeying its forced chaos generator ― that is, another set of strings of commands.

So we’re back where we started.

So, to repeat one of my first phrases from a previous comment: “It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.” And the matter is that the EMH simply *does not think*. The program simulates realistic responses, based on programmed responses to stimuli. That’s all. This is not thought process. This is not having a mind.

So it follows that I don’t agree when Peremensoe writes what Yanks also previously has commented on: "So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."

The point is that “Doc” doesn’t have a “mind”. There is therefore a deep philosophical divide here. The kind of “mind” the EMH has is one you can simply print on paper ― line by line of programming. That’s all it is. You could, quite literally, print every single line of the EMH programming, and thus literally read everything that it is, and learn and be able to calculate its exact probabilities of response in any given, imaginable situation. You can, quite literally, read the EMH like a book.

Not so with any human. And not so, I argue, with Data. And this is where I see that Robert, in my opinion, misunderstands the question. Robert writes: “Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program”. This only makes sense if you disregard his artificial brain, and the relationship between his original programming and the way it has interacted with, and continues to interact with that brain, ever expanding what Data is ― albeit rather slowly, perhaps as a result of his positronic brain requiring much longer timeframes, but also being able to last much longer than biological brains.

So I’ll say it again: I believe that Data is more than his programming, and his brain. His brain is not just some very advanced computer. Somehow, his data ― sensations and memories ― must be stored and processed in ways we don’t fully understand in that positronic brain of his ― much like the Great Link’s thoughts and memories are stored and processed in ways unknown to us, in that gelatinous state of theirs.

I therefore doubt that Data’s program and brain as such can be extracted and emulated with any satisfactory results, any more than any human’s can. Robert would like to convert Data’s positronic brain into software. But who knows if that is any more possible than converting a human brain into software? Who knows whether Data’s brain, much like our own, can generate thought processes, inscrutable and inexplicable, that surpass its construction?

So while the EMH *program* runs on some *computer*, Data’s *thoughts* somehow flow in his *artificial brain*. This is thus not a matter of location: it’s a matter of essence. We are discussing wholly different things: a program in a computer, and thoughts in a brain. It just doesn’t get much more different. In my opinion, we are qualitatively worlds apart. "
Andy's Friend
Sat, Jun 25, 2016, 9:04pm (UTC -5)
Re: TNG S2: The Measure of a Man

@All

You have to go much further. You have to stop talking about artificial intelligence, which is irrelevant, and begin discussing artificial consciousness.

Allow me to copy-paste a couple of my older posts on "Heroes and Demons" (VOY). I recommend the whole discussion there, even Elliott's usual attempts to contradict me (and everyone else; he was rather the contrarian fellow). Do note that "body & brain," as I later explain on that thread, is a stylistic device: it is of course Data's positronic brain that matters.


Fri, Oct 31, 2014, 1:29pm (UTC -5)

"@Elliott, Peremensoe, Robert, Skeptikal, William, and Yanks

Interesting debate, as usual, between some of the most able debaters in here. It would seem that I mostly tend to agree with Robert on this one. I’m not sure, though; my reading may be myopic.

For what it’s worth, here’s my opinion on this most interesting question of "sentience". For the record: Data and the EMH are of course some of my favourite characters of Trek, although I consider Data to be a considerably more interesting and complex one; the EMH has many good episodes and is wonderfully entertaining ― Picardo does a great job ―, but doesn’t come close to Data otherwise.

I consider Data, but not the EMH, to be sentient.

This has to do with the physical aspect of what is an individual, and sentience. Data has a body. More importantly, Data has a brain. It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.

Peremensoe wrote: ”This is a physiological difference between them, but not a philosophical one, as far as I can see.”

I cannot agree. I’m sure that someday we’ll see machines that can simulate intelligence ― general *artificial intelligence*, or strong AI. But I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by ”sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:

”The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers.”

This is the main difference between Data and the EMH, and why this physiological difference is so important. Data possesses an artificial brain ― artificial neural networks of sorts ―; the EMH does not.

Data’s positronic brain should thus allow him thought processes somehow similar to those of humans that are beyond the EMH’s capabilities. The EMH simply executes Haikonen’s ”programmed strings of commands”.

I don’t claim to be an expert on Soong’s positronic brain (is anyone?), and I have no idea about the intricate differences and similarities between it and the human brain (again: does anyone?). But I believe that his artificial brain must somehow allow for some of the same, or similar, thought processes that cause *self-awareness* in humans. Data’s positronic brain is no mere CPU. In spite of his very slow learning curve in some aspects, Data consists of more than his programming.

This again is at the core of the debate. ”Sentience”, as in self-awareness, or *artificial consciousness*, must necessarily imply some sort of non-linear, cognitive processes. Simple *artificial intelligence* ― such as decision-making, adapting and improving, and even the simulation of human behaviour ― need not.

The EMH is a sophisticated program, especially regarding prioritizing and decision-making functions, and even possessing autoprogramming functions allowing him to alter his programming. As far as I remember (correct me if I’m wrong), he doesn’t possess the same self-monitoring and self-maintenance functions that Data ― and any sentient being ― does. Even those, however, might be programmed and simulated. The true matter is the awareness of self. It is one thing to simulate autonomous thought; it is quite another actually to possess it. Does the fact that the EMH wonders what to call himself prove that he is sentient?

Data is essentially a child in his understanding of humanity. But he is, in all aspects, a sentient individual. He has a physical body, and a physical brain that processes his thoughts, and he lives with the awareness of being a unique being. Data cannot exist outside his body, or without his positronic brain. If there’s one thing that we learned from the film ”Nemesis”, it’s that it’s his brain, much superior to B-4’s, that makes him what he is. Thanks to his body, and his brain, Data is, in every aspect, an independent individual.

The EMH is not. He has no body, and no brain, but depends ― mainly, but not necessarily ― on the Voyager computer to process his program. But more fundamentally, he depends entirely on that program ― on strings of commands. Unlike Data, he consists of nothing more than the sum of his programming.

The EMH can be rewritten at will, in a manner that Data cannot. He can be relocated at will to any computer system with enough capacity to store and process his program. Data cannot ― when Data transfers his memories to B-4, the latter doesn’t become Data. He can be shaped and modelled and thrown about like a piece of clay. Data cannot. The EMH has, in fact, no true personality or existence.

Because he relies *entirely* on a string of commands, he is, in truth, nothing but that simple execution of commands. Even if his program compels him to mimic human behaviour with extreme precision, that precision merely depends on computational power and lines of programming, not thought process.

Of course, one could argue that the Voyager’s computer *is* the EMH’s brain, and that it is irrelevant that his memories, and his program, can be transferred to any other computer ― even as far as the Alpha Quadrant, as in ”Message in a Bottle” and ”Life Line”.

But that merely further annihilates his individuality. The EMH can, in theory, if the given hardware and power requirements are met, be duplicated at will at any given time, creating several others which might then develop in different ways. However ― unlike, say, Will and Thomas Riker, or a copy of Data, or the clone of any true individual ―, these several other EMHs might even be merged again at a later time.

It is even perfectly possible to imagine that several EMHs could be merged, with perhaps the necessary adjustments to the program (deleting certain subroutines any of them might have added independently in the meantime, for example), but allowing for multiple memories from certain time periods to be retained. Such is the magic of software.
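If you doubt how trivially such duplication and merging could be done, here is a minimal sketch in Python, assuming (purely for illustration; every name in it is my own invention) that an EMH instance is nothing but serialisable state:

    import copy

    # Hypothetical sketch: the "individual" is nothing but copyable data.
    emh = {"subroutines": {"triage", "bedside_manner"},
           "memories": ["launch day"]}

    # Duplication: a deep copy yields a second, fully independent instance.
    emh_copy = copy.deepcopy(emh)
    emh_copy["subroutines"].add("opera_singing")       # diverges on its own
    emh_copy["memories"].append("concert for the Qomar")

    # Merging: retain both sets of memories, but prune the subroutines
    # the copy added independently in the meantime.
    merged = {
        "memories": emh["memories"]
                    + [m for m in emh_copy["memories"]
                       if m not in emh["memories"]],
        "subroutines": emh["subroutines"],
    }
    print(merged)

Nothing comparable can be done to a positronic brain, let alone a human one: there is no neatly separable state to copy out, prune, and splice back together.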

The EMH is thus not even a true individual, much less sentient. He’s software. Nothing more.

Furthermore, something else and rather important must also be mentioned. Unless our scope is the infinite, that is, God, or the Power Cosmic, to be sentient also means that you can lose that sentience. Humans, for a variety of reasons, can, all by themselves and to various degrees, become demented, or insane, or even vegetative. A computer program cannot.

I’m betting that Data, given his positronic brain, could eventually devolve into something like B-4 when that brain began to fail. Given enough time (he clearly evolves much more slowly than humans, and his positronic brain would presumably last centuries or even millennia before suffering degradation), Data could actually risk losing his sanity, and perhaps his sentience, just like any human.

The EMH cannot. The various attempts in VOY to depict a somewhat deranged EMH, such as ”Darkling”, are all unconvincing, even if interesting or amusing: there should and would always be a set of primary directives and protocols that would override all other programming in cases of internal conflict. Call it the Three Laws, or what you will: such is the very nature of programming. ”Darkling”, like other such instances, is a fraud. It is not a reflection of sentience; it is, at best, the result of inept programming.
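Sketched in code (again a toy of my own construction, not anything canonical), such an override layer is nothing exotic: every requested action is checked against the primary directives before any lower-level subroutine may act, so an internal conflict resolves in the directives' favour by construction.

    # Toy sketch of primary directives that override all other programming.
    PRIMARY_DIRECTIVES = [
        lambda action: action != "harm_patient",
        lambda action: action != "withhold_treatment",
    ]

    def execute(action, subroutine):
        # The directives are checked first; no subroutine can bypass them.
        if not all(directive(action) for directive in PRIMARY_DIRECTIVES):
            return "Action '%s' blocked by primary directives." % action
        return subroutine(action)

    print(execute("administer_analgesic", lambda a: "Performing %s." % a))
    print(execute("harm_patient", lambda a: "Performing %s." % a))

A ”Darkling” scenario would require this outermost check to be absent or broken, which is why I call it inept programming rather than emergent derangement.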

So is ”Latent Image”. But symptomatically, what do we see in that episode? Janeway conveniently rewrites the EMH, erasing part of his memory. This is consistent with what we see suggested several times, such as concerning his speech and musical subroutines in ”Virtuoso”. Again, symptomatically, what does Torres tell the EMH in ”Virtuoso”?

― TORRES: “Look, Doc, I don't know anything about this woman or why she doesn't appreciate you, and I may not be an expert on music, but I'm a pretty good engineer. I can expand your musical subroutines all you like. I can even reprogramme you to be a whistling teapot. But, if I do that, it won't be you anymore.”

This is at the core of the nature of the EMH. What is he? A computer program, the sum of lines of programming.

Compare again to Data. Our yellow-eyed android is also the product of incredibly advanced programming. He also is able to write subroutines to add to his nature and his experience; and he can delete those subroutines again. The important difference, however, is that only Soong and Lore can seriously manipulate his behaviour, and then only by triggering Soong’s purpose-made devices: the homing device in ”Brothers”, and the emotion chip in ”Descent”. There’s a reason, after all, why Maddox would like to study Data further in ”Measure of a Man”. And this is the point: Soong is Soong, and Data is Data. But any apt computer programmer could rewrite the EMH as he or she pleased.

(Of course, one could claim that any apt surgeon might be able to lobotomise any human, but that would be equivalent to saying that anyone with a baseball bat might alter the personality of any human. I trust you can see the difference.)

I believe that the EMH, because of this lack of a brain, is incapable of brain activity and complex thought, and thus of artificial consciousness. The EMH is by design able to operate from any computer system that meets the minimum requirements, but the program can never be more than the sum of his string of commands. Sentience may be simulated ― it may even be perfectly simulated. But simulated sentience is still a simulation.

I thus believe that the EMH is nothing but an incredibly sophisticated piece of software that mimics sentience, and pretends to wish to grow, and pretends to... and pretends to.... He is, in a way, The Great Pretender. He has no real body, and he has no real mind. As his programming evolves, and the subroutines become ever more complex, the illusion seems increasingly real. But does it ever become more than a simulacrum of sentience?

All this is of course theory; in practical terms, I have no problem admitting that a sufficiently advanced program would be virtually indistinguishable, for most practical purposes, from actual sentience. And therefore, *for most practical purposes*, I would treat the impressive Voyager EMH as an individual. But as much as I am fond of the Doctor, I have a very hard time seeing him as anything but a piece of software, no matter how sophisticated.

So, as you can gather by now, I am not a fan of theories of artificial consciousness which hold that it is all simply a matter of which computations the AI is capable of. A string of commands, however complex, is still nothing but a string of commands. So to conclude: even in a sci-fi context, I side with those who believe that artificial consciousness requires some sort of non-linear thought process and brain activity. It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever (I am prepared to discuss non-corporeal entities, but elsewhere).

Finally, I would say that the bio gel idea, as mentioned by Robert, could have been an interesting way of making the EMH somehow more unique. That could have the further implication that he could not be transferred to a computer without bio gel circuitry, thus further emphasizing some sort of uniqueness, and perhaps providing a plausible explanation for the proverbial ”spark” of consciousness ― which of course would then, as in Data’s case, have been present from the beginning. This would transform the EMH from a piece of software into perhaps something more, interwoven with the ship itself somehow. It could have been interesting ― but then again, it would also have limited the writing for the EMH very severely. Could it have provided enough alternate possibilities to make it worthwhile? I don’t know; but I can understand why the writers chose otherwise."
Set Bookmark
Luka
Sat, Jun 25, 2016, 6:37pm (UTC -5)
Re: DS9 S4: Accession

When Sisko and Laan are discussing how to settle who the real Emissary is, I laughed to myself and said they should've played a game of darts to determine it. All kidding aside, though, this was a damn great episode, with O'Brien trying to adjust to being a family man and leaving his bachelor life behind. Usually the Bajoran episodes can be a bit dull, but this one was really good.
Set Bookmark
Anonymous
Sat, Jun 25, 2016, 6:22pm (UTC -5)
Re: VOY S6: Tsunkatse

What bugged me in this episode is that a Kradin is seen for a short period of time. I always assumed that the beastly appearance was a result of the mental manipulation that Chakotay suffered, but apparently this is what they really look like. Shame.
Set Bookmark
Thomas
Sat, Jun 25, 2016, 5:42pm (UTC -5)
Re: TNG S6: Time's Arrow, Part II

I hate this 2-parter for one single reason: CLEMENS! His only reason for being there is to be a bonehead, and his snarly, nasal voice feels like shards of glass being driven into my ears. Take him out of it, and it could have been a decently funny 2-parter, but as it stands: NO! Just NO!
Set Bookmark
John C. Worsley
Sat, Jun 25, 2016, 5:17pm (UTC -5)
Re: VOY S7: Renaissance Man

Agree with the dissenters. Lousy episode; some entertaining elements in a vacuum but the core is infuriatingly stupid. They really couldn't come up with a good excuse for the action? A being this powerful and this easily manipulated should not have security clearance of any kind.
Set Bookmark
tlb
Sat, Jun 25, 2016, 1:03pm (UTC -5)
Re: Star Trek: Generations

After watching DS9 all the way through several times, it was very disconcerting to hear Picard say "What you leave behind..."
Set Bookmark
Peter
Sat, Jun 25, 2016, 7:00am (UTC -5)
Re: TOS S1: The Man Trap

All right! Been a long-time lurker on this site and am impressed with your Trek reviews; I am perhaps one of the biggest fans of the franchise. You should expect many comments from me, if only to voice how much I enjoy the site.
As for The Man Trap, this review is spot-on. I admit I overrate the episode a bit, but 2.5 is about right.
All the best- Peter.