Comment Stream

Nolan
Tue, Jun 28, 2016, 2:13am (UTC -5)
Re: VOY S2: Threshold

@Skywalker

One of the main issues people have with this episode is that the Voyager crew invents a way to get home by going Warp 10; however, it has horrendous side effects, which the Doctor then cures. So why aren't they home next week? Sure, the crew will turn into lizards (because of evolution that shouldn't work that way), but the Doctor could just cure the crew once they're in the Alpha Quadrant.
NCC-1701-Z
Tue, Jun 28, 2016, 12:51am (UTC -5)
Re: New Trek Series Coming in 2017

New updates from Fuller himself:

-13 episode season, tied up in a story arc. Sounds like he's trying to move away from self-contained episodic stories.
-Not an anthology series.
-Not set between Undiscovered Country and TNG.

trekmovie.com/2016/06/23/fuller-clarifies-star-trek-2017-not-anthology-series-reveals-more-details/

Separately, Brent Spiner told IGN that he would be open to playing a role on the new series, much like how he played Arik Soong on Enterprise.

Thoughts, anyone?
Skywalker
Mon, Jun 27, 2016, 8:21pm (UTC -5)
Re: VOY S2: Prototype

Heh, a lot of the annoying stuff Jammer notes about the episode — the excessive technobabble, the idiotic refrains of the same bridge battle scenes — this stuff didn't bother me as a kid. And I was wondering why the little kid version of me didn't mind.

Then I remembered: that's exactly the kind of stuff I made up when I played and used my imagination as a little kid and pretended I was a starship captain or whatever. If you have seen the Pixar movie Up, you'll remember that little Carl in the beginning plays with his toy airship in the exact same way.

So basically, my conclusion is that the VOY writers have the creative skills of preadolescents.
Peter G.
Mon, Jun 27, 2016, 7:20pm (UTC -5)
Re: TNG S2: The Measure of a Man

@ Andy's Friend, I still don't know why you're hung up on whether or not Data has an artificial "brain". You have yet to define what that means. Are you quite sure you're really talking about something, as opposed to issuing phrases that sound like something but have no content? If there's content then why not just say what it is instead of using placeholders? You haven't even provided an explanation for why the Human brain isn't just a sophisticated computer. Until you can answer my very clear point-blank question in any way (about what non-linear processing is) I'll assume you're not really interested in talking about this. I also assume from your lack of confirmation that you are not an expert in the field of robotics, information theory, etc etc.

Incidentally, I find this particular line somewhat accursed:

"...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind."

If positing a theory about robotics makes someone a 'religious believer' that you can't communicate with, then I find it hard to believe you are making such absolute declarative statements with a straight face. If you know something special about this then own it and lay it down for us. The condescension needs real creds to back it up, my man.

Andy's Friend
Mon, Jun 27, 2016, 6:05pm (UTC -5)
Re: TNG S2: The Measure of a Man

@William B & Peter G.

I was writing to Peter, but I'll answer William's last comment first because it can be done very quickly: I basically agree with everything you wrote.

I think you're quite right about the episode being robbed of its power without uncertainty. It dares to ask great questions; it follows that it should not provide certain answers. And you are right: knowingly believing in something uncertain is a very powerful thing. It is what makes faith, true faith, indestructible.

I also think you're very, very right regarding Data's multiple roles, such as being a mascot for the autism & Asperger's communities. What makes Data so fantastic is that he is so many people in one: the Child, the Good Brother, the Autist... The Android is actually pretty far down the list in importance. This is undoubtedly why he is so beloved: most of us can find a part of ourselves in him. Mirrors, was it, William?

I have a little more difficulty in seeing the Doctor in quite the same multi-faceted fashion. Every Star Trek fan I know likes the Doctor a lot, but for very different reasons than they like Data: the two do not receive the same kind of love.

I particularly like your reference to Q, because, as you'll remember, that is my recurring theme: the humanoid & the truly alien. And you're of course right: any truly alien being might question our human consciousness; and, if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seems to have one, and I'm pretty sure it's quite different from Data's brain.

Also, and this is answering both of you now, it is true that we cannot know with absolute certainty that Data's "positronic" brain is an artificial brain. There are strong indications that it is, but we cannot know for sure; and it is true that Data, too, could simply be another Great Pretender.

This leads me to that most interesting aspect: faith. I was going to answer William earlier:

WILLIAM B―"I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process."

...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind. There are still highly intelligent scientists who share that belief, in spite of all the advances we've made in the past decades in both neuroscience and computer science. It is, quite simply, a belief, akin to a spiritual one. Some people *want to believe* that strings of code, like lead, can turn into gold.

But that of course is a bit like my belief that Data's positronic brain is an artificial brain, i.e., some sort of cognitive architecture affording him consciousness. I, too, *want to believe* that he has that artificial brain. Because to me, Data would lose his magic, and all his beauty, were it not so. As I wrote, there are very strong indications that this interpretation is a correct one; but as in religion, I have no proof, and I must admit that it is, ultimately, also an act of faith of sorts. I want Data to be alive. To me, Data wouldn't make much sense otherwise. And I know full well that this is, deep down, a religious feeling.


I'm sorry, guys, it's getting late here in Europe... Until next time :)
KB
Mon, Jun 27, 2016, 5:16pm (UTC -5)
Re: ENT S3: Similitude

TV rarely makes me cry. When it does, I honor the work of those who created it.

The DNA doesn't include memories argument has an answer--epigenetics. Relatively new data show that PTSD can be transmitted from parents. Our DNA and bodies are far more plastic and adaptable than our current science dreams of...

Our current cloning technology doesn't do this, but how do we know it can't?

In addition to honoring stories that spontaneously cause me to laugh or cry, I also honor those that invite vigorous debate. Therefore, this one is 4.
Skywalker
Mon, Jun 27, 2016, 4:20pm (UTC -5)
Re: VOY S2: Threshold

Yeah, I'm kind of torn here. Really, the pseudo-science involved in writing warp drive isn't any more contrived here than it was in TNG's "Force of Nature"; I like it when Trek has plausible science, but that doesn't mean it will make a good story.

I guess it's really a death by a thousand cuts situation. The evolutionary nonsense, the ease of getting to warp 10 (which can be explained away with the special dilithium crystals they found), and the fact that Federation scientists have never managed it either (which might also be explained away if Starfleet's secret Skunk Works equivalent team actually did achieve warp 10, but after the debilitating mutations never attempted it again and never published the data). Even with my hand-wavy explanations, which in any case aren't in the show, it's just a little too much.

But zero stars? McNeill's acting is great! So is everyone else's. The pacing and direction are good.

If we did indeed do as we all would like, and excised Threshold from the ST canon, then we would have a less horrible continuity, but we would also have a single sci-fi show called "Threshold," which stands on its own as passable. Then we might judge it as a modern 2001 meets The Fly: high concept and bizarre, but not altogether terrible.

The only truly damning aspect of this episode is in one of the first scenes, they don't list the most obvious aviation/space hero of all time! Chuck freaking Yeager! How could they miss that one?!
William B
Mon, Jun 27, 2016, 3:55pm (UTC -5)
Re: TNG S2: The Measure of a Man

Here is a somewhat off-topic comment expanding on my point (4). Notably, I do not expect this to be answered within the thread, necessarily, but I think it's important for me to say a bit more of what I think this episode is about, and why it is important, as well as what I think Data (and the EMH) are about, as characters, and why they are important, in addition to being about artificial consciousness/intelligence/life etc. issues.

As far as the charge of us talking Trek speak rather than real life speak: well, certainly real life issues are the most important. However, I think it's fair to say that all Trek is commenting on the real world, it is just that some of it is commenting more directly than others. Data, the EMH etc. are obviously representations of real-world ideas, to some degree or another, but it's a question of how we interpret them. That Data is definitely a representation of a potential being with an "artificial brain" is by no means certain. It is also very possible that the (limited, contradictory) take on artificial consciousness within Trek is primarily there to talk about other aspects of human life -- how humans treat each other, how we treat other (biological) life forms on the planet, etc. -- and so for those purposes, the differences between Data and the EMH might not be important at all -- they might just be different views on the human condition.

There are autistic and Asperger's communities which have used Data as a sort of mascot. Data's experience of difficulty understanding "human," i.e. normative, emotions, his alienation, and other traits like that have been taken on as representative of some humans who find this "artificial life form" a good representation of their experience. This was not exactly the intent of the character, but I think that part of the mythical basis for artificial beings is to talk about difficult aspects of our plight as humans -- of being physical, material beings whose worth is often decided collectively by our ability or inability to fit in with larger conceptions of humanity. Again, Louvois' ruling in this episode includes: "Does Data have a soul? I don't know that he has. I don't know that I have." There is no reason that the EMH is necessarily precluded from being a particular kind of representation of person, in which case I think it may be missing the point to declare him as non-conscious. You can say that this should *not* be the point of Voyager, that in its portrayal of computer coding it should hew more closely to what experts believe are the ultimate signifiers of consciousness, and you may be correct, but I think that in order to talk about what Trek means we have to suss out what it is trying to say and how that relates to our world.

As I see it, part of the problem with indicating that an "artificial brain" is the key difference between Data and the EMH, and one which would simply destroy Maddox, is that it is to some degree divorced from human experience up to this point. That does not mean that it won't be proven in the future, and that moment might fundamentally change human existence. But the majority of human existence has been a matter of taking blind stabs in the dark, trying to reason outward from ourselves to beings sufficiently similar to ourselves and to extend to them the things we would like extended to us. And that comes with it a level of uncertainty about ourselves. *I do not know that I have a soul.* The fact that the "code" that runs in our brains is sufficiently different from that in a computer does not guarantee that we are not simply an advanced computer or that we are not a mere set of physical processes designed for self-replication, with our experience of consciousness being some sort of nearly irrelevant by-product of a self-sustaining system, ultimately no more intrinsically meaningful than a process like fire. One of the key things that TNG does is introduce, from the very first episode, the idea that it is also not merely a matter of humanity deciding which other entities should have rights, but that we have a responsibility to prove that we as a species demonstrate qualities that make us more than children groping about. The Q could easily, and do, look at us and declare, how could a lump of matter with a bunch of electrical processes controlling a bit of organic machinery be conscious in any meaningful sense? That there is no absolute certainty that Data is the same as humans is important, because this is partly a way of looking, anew, at things we take for granted about *humans*, of stripping away which traits are ultimately irrelevant in defining our place in the universe. 
Granting that Data has value is a way of granting that we have value, through an act of faith. If the whole process is genuinely reduced to a matter of a binary switch wherein some given object either is or is not a brain, *such that this can be verified with certainty*, the story is robbed of much of its power, and the uncertainty that is near the heart of the human condition is removed as well.

As Picard asks Maddox, I would say: "Are you sure?" Are you sure that the artificial brain correctly distills the essence of what is important about humanity, consciousness, and rights-having beings? Now, of course, I may be misinterpreting your claims. But I think that it is not merely a matter of Snodgrass hedging her bets for the sake of drama that an episode entitled "The Measure of a Man" avoids some sort of ultimate determiner of worth in consciousness. I think that the uncertainty is a fundamental part of human existence, and important to every person who has ever wondered, in real life, if they do not matter, and had to take a leap of faith to believe that they did, as individuals and as a species. If there is some sort of "magic bullet" to the consciousness debate wherein the, or a, physical mechanism of *all* consciousness is identified, then probably the debate over which beings qualify as life forms will shift away from "consciousness" and into another trait which is, once again, mysterious. Once the physical mechanisms of consciousness are sufficiently identified, after all, then we might well understand it enough to be able to do away with discussing human behaviour in terms of a "spark" instead of the result of eminently comprehensible physical processes, albeit very complex ("nonlinear") ones, which may again require us to take a leap of faith to believe ourselves more than just the physical process that governs us.
KB
Mon, Jun 27, 2016, 3:12pm (UTC -5)
Re: ENT S3: North Star

I enjoyed this episode a lot. Some points:

Archer, despite being on a mission to save Earth, is doing intensive scanning looking for Xindi. Therefore, I imagine he felt that he could afford two days to investigate humans in the Expanse!

Given the time frame and cultures he found, it is unsurprising he encountered "routine" western experiences. And the cultures are explicable because of the whole slave/slave revolt/burn everything that enslaved us events that occurred. A key comment was "they abducted the wrong people." I would imagine that just surviving on this particular planet would be very challenging and we have to remember that life expectancy has a lot to do with technological and social innovation.

It would have been nice to have discussed the slave/indigenous issues that existed in mid 19th century US but that pulls the plot away from the basic conflict that exists in this planet today.

I thought the horse thing was funny.

I liked not having every loose end tied up, and that Archer makes no promises, because he knows that if he fails his larger mission, no ships will come for these people. What he does do is leave materials and ideas that can help these cultures move forward constructively.

It would have been nice to find other Skagorans later on their home world or on colonies to learn more about them.

And, I imagine that given how isolated some parts of the West were, some alien abductions could happen without a pattern being evident.

Another episode that allows me to think about possible sequels--this always moves it to 3 stars.
William B
Mon, Jun 27, 2016, 3:07pm (UTC -5)
Re: TNG S2: The Measure of a Man

@Andy's Friend:

In addition to what Peter G. said, I want to clarify my position a bit. I appreciate your comments very much and I think you may be onto something with regards to what Data is and represents. However:

1) I think it's still by no means made absolutely clear that Data has an "artificial brain" which meets the specifications that you state it does. Don't get me wrong. I am happy to believe that Data does. And with Graves, as we discussed, there is an indication that Data's brain can support a human identity better than the ship's computer can. However, we are still left with the possibility that Data's positronic brain is simply better at processing and reproducing human-like behaviours. We do not know for sure that Data actually is housing Graves, rather than simulating him in a way that is indistinguishable from the real thing. Nor do we know that Data is not generally simulating consciousness rather than actually doing so. Similarly, that Data's positronic brain is very hard to reproduce, and causes cascade failures in the case of Lal, etc., is no guarantee.

The references to Data's positronic brain are many and this supports your contention that there is something about Data's brain that is capable of consciousness in a way that traditional computers lack. However, there are other explanations. They may simply call Data's positronic brain a brain because, well, he is an android, designed in the shape of a human. The control centre of the automaton "body" made to resemble a human is located in the part which is made to resemble a head, and performs a function which is at least superficially similar to a humanoid brain, and thus it can be called a brain.

You said in an earlier comment that it is the fault of the show that it fails to establish what Data's artificial brain does. This still assumes that it is a settled issue that Data *has* an artificial brain, rather than something which people, for convenience, call an artificial brain. It may not be. Even if we accept your premises as definitely true -- which, perhaps you have more expertise in this area than we do -- it is still a leap that Data fits this definition you are stating. Maybe he doesn't *really* have a "nonlinear" brain, and according to the proposal you have put forward, Data only simulates consciousness.

And even then, even if he definitely has an artificial brain, I don't think it's absolutely true that if Data does have something which was *designed to be an artificial brain*, that this would tank Maddox' argument. What you are stating, essentially, is that it will at some point be possible to distinguish between what is actually conscious and what isn't, not based on behaviour or anything, but based on the physical makeup of the object itself. This may turn out to be true, and maybe several experts in the field believe it to be true. But I don't think that it is that settled. I mean, what if a "nonlinear brain" is simply much better at producing external signs indicative of consciousness, and, in fact, some of those external signs include the *physical makeup* itself, which is more similar to (humanoid) brains than traditional computers?

And you know, I basically agree that Data probably has an "artificial brain," as you say it, but I don't know how you make this claim with certainty. Soong could also have simply called Data's brain an artificial brain for PR purposes. This is why the issue of simulation is important. It is my contention that one has to look at the outcomes produced by the "brain" or "computer" rather than the form that the "brain" or "computer" takes.

2) And even if Data definitely has an artificial brain which meets these criteria, I think it's still, as Peter G. says, not at all a settled issue that a sufficiently complex program (recursive, he emphasizes, but I will be a little more general) would not be conscious. The reason I emphasize whether or not it is important that a "nonlinear computer" could *simulate* consciousness is that I am going under the assumption that it is impossible to conclusively prove consciousness, UNLESS one is the conscious entity.

Line up a human, Data, the Doctor, and, say, Odo; some sufficiently non-humanoid, non-"artificial" life form which displays the *external* traits of sentience -- ability to learn and adapt, an ability to change and exhibit new behaviours, and perhaps the ability to state that it is, indeed, alive. Which is conscious? You would argue that the human, Data and Odo are and the EMH is not. I would argue that it is impossible to be sure; EVEN THE HUMAN can only be verified to be conscious because he is sufficiently similar to me. I am not making this egocentric claim idly; what I mean is that it is impossible for me to be sure that the human is not a sufficiently advanced automaton, perhaps created by nature, perhaps otherwise. I don't actually know that I have free will, even; I know with certainty that I experience the thing which I define as "consciousness," and because of the extreme level of similarity of other human beings to me, I must reasonably assume that they have the same trait. This assumption then can reasonably be carried out to other humanoid life forms, which in terms of modern biological classification would even be the same *species* (since interbreeding between humans and Klingons, Romulans, Vulcans, Betazoids, Ocampa etc. is possible, and in most of those cases we also see that their offspring can reproduce, as is interbreeding between Cardassians and Bajorans, though I can't think of a Cardassian-human or Bajoran-human example offhand). But then with Data, the EMH and Odo we are left with beings which are completely different in construction and origin. How would we conclude that they are conscious? Or *not* conscious?

Maybe we could identify some physical system which "explains" our consciousness. But even then we would be left with uncertainty about whether other systems which are physically similar are actually conscious as well, or are simply reproducing the machinery but missing some unknown spark. We could, I suppose, claim that we know that Odo is (probably) conscious because there is no evidence that any conscious beings set out to create him, and argue that it is unlikely for a being which displays traits consistent with consciousness to develop "by accident" without consciousness being there as well. However, with Odo and other changelings, imitation is baked into their very nature; they imitate and recreate other beings. The changelings might simply be some sort of inorganic matter which for whatever reason imitates other, actually-alive beings, and then displays external signs of being alive.

The reason I bring this all up is that my claim is that it is still anthropocentric to make the claim that the "nonlinear" versus "linear" distinction is all that matters, because it just moves the trait that defines consciousness from external signs of consciousness to the sort of physical, observable mechanisms that produce it. It is still making an argument based on what looks human, but instead of arguing about actions it is arguing about hardware.

So my claim is that it may be that consciousness develops when there is a sufficiently advanced system to reproduce all the external signs of consciousness; that the act of simulation is itself an act of synthesis. Understand me: I am not saying that, e.g., someone writing "I am alive" on a piece of paper is sufficient to reproduce life. But to be able to reproduce the full breadth of human behaviours (or, indeed, sufficiently complex animal behaviours, perhaps) may not be possible without producing consciousness along the way. This is perhaps an idiotic notion. It gets rid of some of the problems of anthropocentrism -- decentering away from the physical form and makeup of the thing which produces "apparent consciousness" -- but introduces another, in that the only way to define consciousness means to define something which acts sufficiently *human* to be able to appear conscious. I have little to say to that charge except that I'm thinking about it.

3) Just as a small point, the Exocomps and the "Emergence" life form did not take humanoid form. The Exocomps are very close to the hypothetical "box on wheels" Maddox insisted would not be granted any rights, which is why I think that episode is (despite some significant flaws) an important follow-up to TMoaM. The "Emergence" life form does use human forms on the holodeck, but its eventual form is a funky-looking replicated series of tubes and connections.

4) As far as the suggestion that Peter and I were talking about Trek and you were talking about the real world, there is a lot to say, but I don't think it's so clear-cut. You have decided that what is being portrayed, in-universe, is that Data has an artificial brain and that Voyager's computer is definitely not an artificial brain (and thus that the EMH, run on this and other similar computers, cannot be conscious). This still relies on evidence provided in-universe and, *where evidence is lacking*, filling in the gaps with your own impressions of the intent. If it is not utterly conclusive that Data's brain is different from the ship's computer, you rely on the idea that the computer must be equivalent to modern "linear" computers, which is still an assumption about intent. There is nothing wrong with this, but I think that it just means that you are also "down in the muck" with the rest of us trying to interpret what Trek is actually saying, rather than purely talking about the real world. ;)
Peter G.
Mon, Jun 27, 2016, 1:11pm (UTC -5)
Re: TNG S2: The Measure of a Man

I'll just quote the relevant section, which includes quotes from both of us:

"PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"

And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)"

I must ask you what your credentials are to speak so authoritatively on this subject. I would like to know if you have professional expertise in any of the following fields: computer engineering, neuroscience, mathematics, linguistics, information theory, fluid dynamics, quantum field theory, grand unified/string theory, or any other field somewhat close to these that gives you an understanding that a well-read layman would lack. If so, I'd like to know so I can understand your comments within that context and learn what you mean and where you're coming from. If not, I don't know how you can claim to know so much about "science" that you can reduce my comments to be merely references to Star Trek and not to real life. In point of fact I tend to try to discuss Star Trek logic on its own terms because the show is *fictional* and thus isn't actual reality. If you wanted to strictly discuss reality you'd have to tear the whole show apart. I choose instead to see how the show deals with its own premises, and I think William does too.

However, in terms of real world science and logic, I asked you a direct question about what you mean when you speak of "non-linear" data processing, which you claim Data and humanoids have and which mere machines don't. Your reply was to curtly state that we're having two different conversations. I would like to begin with at least one, since so far I'm not even sure what it is you're arguing. All of my objections above were meant to elucidate that I wasn't sure you were articulating a real argument about Data as opposed to making some conjecture that cannot really be discussed. However, maybe you have something specific in mind when you speak of non-linear processing and how Data's 'brain' is constructed, and if so I'd like to hear it.

And by the way, in modern computational theory it is by no means a discarded notion that a sufficiently complex series of recursively communicating circuits might form what we call consciousness.
Matthew Thomas
Mon, Jun 27, 2016, 1:00pm (UTC -5)
Re: Trailer: Star Trek Beyond

Another trailer.

Now with even more pop music that will date the movie in 10-15 years.

https://www.youtube.com/watch?v=Ep3A-yJ3P3Y
Peter G.
Mon, Jun 27, 2016, 12:52pm (UTC -5)
Re: DS9 S5: Children of Time

@ JD,

You suggest that by sacrificing 8,000 people to save the woman he loves Odo has gone too far and done something wildly extreme. Yes. The episode doesn't skirt around this, but rather is exactly about this. You say he's been written inconsistently with his previously established ethics, but what made you think his behavior was ever governed by Human ethics? At every possible occasion in the series Odo has made it clear that he'd prefer to rule the Promenade with an iron fist if given the chance, much like alternate-Odo does. He explicitly does not endorse or accept Human values. He does have a personal code to which he tries to stick unwaveringly, but whatever that code is, we are told multiple times in DS9 that it isn't a code based on justice and fairness.

The Female Changeling isn't the only one who informs Odo that his natural drive isn't towards justice but rather towards order, like the rest of his people. This is proven time and again, and made crystal clear in episodes like "Things Past." The Changelings, by their nature, will go to extremes to have their way regardless of the consequences. 'Future Odo' never went through what Odo did in the S6 arc, where he came face to face with his own nature and what it meant in terms of his feelings for Kira. He never realized the dangers inherent in his own nature, and how the extremity of his natural inclinations was something to be kept in check.

What you see as a writing flaw is in reality a definite design, meant to further a point about Odo, which is that he is not what you thought he was. That thought might be menacing, but it's driven home even further in S7 when Laas makes Odo see that he was masking his true nature for the benefit of solids. The only flaw with this episode as I see it isn't in the episode itself but rather in the lack of consequences in future episodes of what Odo did. They certainly address the consequences of what he said to Kira, but not of what he did to the colonists, and I think they missed out on an opportunity for good follow-up there.
Robert
Mon, Jun 27, 2016, 11:28am (UTC -5)
Re: DS9 S5: Children of Time

@JD - Because we're in a time travel episode, and because Odo is by far the most alien character we've EVER had as a regular in Star Trek, you have to realize that your morality may be different from his. Yes, he caused 8,000 people to not exist, but how many people did he cause to exist as well?

Maybe Odo is just really god damned selfish and wants to leave this stupid planet and go home with Kira even if it means that it's not exactly him that's going home. I don't like the episode as much as I did when I first watched it, but Odo is a lifeform that's meant to literally exist in a permanent state of telepathic(?) connection with his entire race.

Dax was willing to lie to her old friend and kill Kira without anyone having any choice in the matter because of centuries of guilt. Well, maybe centuries of isolation for a creature that is not supposed to be isolated really, really screwed him up. This story falls short of the kind of answers I want, but I don't necessarily think the characters made the wrong choice.
Andy's Friend
Mon, Jun 27, 2016, 10:15am (UTC -5)
Re: TNG S2: The Measure of a Man

@Peter G. & William B

Like you, I also think very highly of this episode. It has a good script with some very memorable lines, memorable acting (especially by Patrick Stewart), and it is thought-provoking, perhaps even more so when originally aired, as it asks questions that we will undoubtedly have to ask ourselves one day, questions that touch the core of our own existence: what does it mean to exist?

But as I wrote above, and precisely because I consider the matter important, I feel that we must necessarily consider not only the in-universe data available, but also real, hard science.

This means that while I may agree with you, in-universe, on a number of points, all that is trumped, in my opinion, by real science. Maddox, Moriarty, and Ira Graves are important: but they are so especially as glorious vehicles to ask important questions. And the answers, I find, must usually be sought outside the Trek lore.

This is not a criticism, quite the contrary. It is precisely why this is Star Trek at its very finest: as inspiration for further thought outside itself.

As such, consider Peter G. now:

PETER G.― "5) [...] if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data [...]"

Very, very good point, Peter. But notice one word you wrote: "RESEMBLES". Resembles matters not, Peter: see the last three phrases of this comment. And also: why, but why, after such a good observation, do you immediately after it write

PETER G.― "5) [...] however what's done is done and we have to accept what was presented as a given."

I don’t think so, Peter. I love Star Trek, and especially TNG. But we must be able to love the forest, and cut down a few trees every now and then to improve the view.

Moving on:

PETER G.― "2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference?"

It is that never, ever, are we given the impression that the Enterprise computer is the equivalent of an *artificial brain* in the scientific sense, whereas it is extremely obvious from the onset that Data’s is one such creation.

PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"

And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)

As for William, you are absolutely right when you write that "the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments." I wish in no way to detract from this wonderful episode, and I greatly appreciate what Snodgrass tried to do here, and indeed, mostly accomplished. And I could not possibly expect the writer back in 1988 to be an expert on artificial consciousness.

This is thus merely to say that I find this particular talk of ours a little difficult, because you both tend to use in-universe arguments much more than I do. You just wrote, for instance:

WILLIAM B―"However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show... "

That is of course completely legitimate: to consider what Star Trek says, and not science―to judge Star Trek on its own terms. And I am frequently impressed by the amount of detail you seem to remember. All right, then: what you then must do is investigate the coherence of the in-universe cases. Allow me three examples:

You first give an outstanding example of what I mean with an in-universe case:

WILLIAM B―"Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone)."

Precisely. But then, you write:

WILLIAM B―"Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery..."

No: it is only a mystery in the moment. The later example you give above retroactively affects "11001001," as it proves, even in-universe, that she cannot be conscious. Minuet is a program running on the Enterprise computer, just an even better program. But in your excellent wording: there is no spark.

And then you write:

WILLIAM B―"The holographic Leah actually is made to be self-aware..."

Maybe it is, and maybe it isn't: we must distinguish between various levels of self-awareness and consciousness. Many robotic devices on Earth today are beginning to exhibit the simplest traits of what to an outsider might appear as rudimentary self-awareness. But we must distinguish between self-awareness and mere artificial intelligence, or basic programming. If a robot vacuum-cleaner drives around a chair, you don't consider it sentient, do you? And if a robotic lawn-mower were programmed to say: "I'm with you every day, William. Every time you look at this engine, you're looking at me. Every time you touch it, it's me," you wouldn't call it self-aware, would you?

An example: toys are being programmed to react to stimuli, and can both say “Ouch!” and cry, etc. Questions:

1―Is the robotic doll that identifies a chair on its path, and walks around it, or even sits on it, self-aware?
2―Does it hurt the doll that says “Ouch!” if you drop it―even if you provide it with sensors able to measure specific force, and adjust the “Ouch!” to the force of impact?
3―Is the doll that cries if you don’t hug it for hours truly sad―even if it is programmed to cry louder the longer it isn’t hugged?
4―If the doll is allowed self-programming abilities, and alters its crying to sobbing, does that alter anything?
5―If the doll is programmed to say that it is a doll, manufactured at such and such place, at such and such date, and that its name now is whichever you have given it; and that it will take damage to its internal circuitry if you kick it, and beg you not to kick it as it will damage it, and hurt it, and begin to cry and sob, does that constitute any degree of self-awareness, or consciousness?
6―If you multiply the level of programming complexity a zillion times, does that change anything at all?

Would HAL dream? When that film was made, a considerable number of scientists would have answered yes. Not so today. The number of scientists who adhere to the thought that any sufficiently advanced computer program will result in artificial consciousness―a major group some fifty or sixty years ago, when Computational Speed was a deity to be worshipped and Man would have flying cars by the year 2000―has dwindled considerably since this episode was written. Paradigms have changed. Today, most say: computational speed, and global volume of operations, matter not. An infant child asleep has considerably less brain activity than a chess champion during a tournament match. That does not make it any less sentient.

Maybe you remember those days when this episode was written. The ordinary public, among which I suspect most Star Trek writers must be counted in this context, marvelled then―or were terrified―at IBM's Deep Thought (I live in Copenhagen, remember? I’ll never forget when it beat Bent Larsen in 1988), and at the notion that a machine would, some day soon, beat the best chess players alive―regularly. And as I have written elsewhere, today any smart phone with the right app can beat the living daylights out of any international grandmaster any day of the week. But it isn't an inch closer to having gained consciousness, is it?

This divide, of intelligence vs consciousness, is extremely important. Today, we have researchers in artificial intelligence, and we have researchers in artificial consciousness. The divide promises―if it hasn’t already―to become as great as that between archaeologists and historians, or anthropologists and psychologists: slightly related fields, and yet fundamentally different. The problem is that most people aren't aware of this. Most people, unknowingly, are still in 1988. They conflate the terms, and still speak of irrelevant AI (see this thread!). They still, unknowingly, speak of Deep Thought only.

So my entire point is, this episode ends up being about Deep Thought. While the underlying, philosophical questions it asks, which science-fiction writers have asked for nearly a century by now, are sound, and elevate it, it is of course a child of its time: at the concrete level, it misses the point. It wishes to discuss the right, abstract questions, but doesn't know how to do it at the concrete level: it essentially reduces Data to Deep Thought.

But... I believe this was on purpose! As William points out, we had recently had the Ira Graves episode. In Manning & Beimler's story, the nature of Data's positronic brain was key. But to Snodgrass's, it was detrimental. I am convinced that she was fully aware of the shortcomings of her story: I believe that she doesn't use Data's positronic brain as an argument because it is a devastating one: it shreds Maddox apart. There would be no episode if she made use of it. And even worse: many viewers, in 1989 and even today, wouldn’t understand why. So she wisely ignores it: she refers to Data’s positronic brain only en passant, and does not use the logical consequence of *a frakking artificial brain!* during the trial itself. Instead, she uses arguments of the Deep Thought kind viewers might be expected to understand in 1989―and today. And so we get this compelling drama. It’s on a much lower level of abstraction, but it can be enjoyed by many more.

And for once, I can accept that choice. Normally I call this manipulative writing: it's like 'forgetting' Superman has super-strength and allowing common thugs to kidnap him, because we have to get the story started. But in this case, in the name of the higher purpose, I gladly give it a pass.

The problem in all this, of course, is that the various episodes wish to captivate the audience. Therefore, they have to allow for the possibility of the impossible, and they have to leave things as mysteries: is Minuet, or Moriarty, sentient? Of course not. But just having Geordi laugh and say "Captain, that's just a program!" and kill the magic right then and there would kill the episodes. Still, we must not let good story-writing cloud our judgement. We must be able to enjoy a good story, while saying, "Wonderful! In real life, however... Captain, that's just a program!" Ironically, B'Elanna actually bluntly says just that of the EMH. But apparently, few of the fans take her seriously.

On another matter, William made an astute observation:

WILLIAM B―"It is worth noting that most forms of intelligence in TNG end up taking on physical form... "

Allow me to improve on that with my favourite theme: they end up taking *humanoid* form. The only reason people take Leah, Minuet, the EMH, Moriarty, and perhaps even Data seriously, is very simple: they look human. Make them a teddy-bear, a doll, and a non-humanoid robot, and these conversations wouldn't be happening.

I'll give you an example: ping-pong robots and violin-playing robots. Industrial robots have extraordinary motion control and path accuracy these days. But if playing a violin is no different from assembling a Toyota, playing against a human requires a lot more than your standard cycle pattern deviation. Yet, pretty soon, robots will crush the best human ping-pong players as easily as chess engines today do chess players. And already today, ping-pong robots show good AI. Now, combine an advanced ping-pong cum violin-playing robot in a Data-body, and let it entertain the audience with some Vivaldi before destroying the entire Chinese Olympic team one by one. How many would start wondering: how long until it becomes alive? How long before it will dream? But show a standard ping-pong robot do the same, and they'll simply say: cool machine.

Now imagine a non-humanoid lifeform, leaving humans no possibility of judging whether it was sentient or not. So you’re right, William: if you'll pardon the pun, form is of the essence.

...and by the way: notice that you did it again. "Intelligence" is irrelevant, William. Intelligence can be programmed, already today. Consciousness cannot: we can barely understand it.

Finally, a couple of last notes:

1. We may wish to simply turn off Leah, or Minuet―*and just do so.* The EMH, due to the importance of its function, cannot simply be ignored. But being forced to treat a program as self-aware does not make it self-aware.

2. WILLIAM B―"I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight"

Exactly: it wouldn't. But notice, just like Peter G. above, your choice of words: "SIMULATE." Who on Earth cares about simulations, Turing tests, and mere artificial intelligence, William? Do. Or do not. There is no simulate.
Yanks
Mon, Jun 27, 2016, 7:14am (UTC -5)
Re: TNG S2: The Measure of a Man

Good lord guys!!

:-)

the discussion is afoot!! lol
belowzero
Mon, Jun 27, 2016, 5:18am (UTC -5)
Re: VOY S3: The Chute

Good episode!

Did anyone else feel signs of claustrophobia when Harry crawled up that chute? I'm not sure I would have made it up there with somebody so close behind me.
JD
Mon, Jun 27, 2016, 2:35am (UTC -5)
Re: DS9 S5: Children of Time

Okay, so, Odo is now a genocidal romantic after a few centuries? You know, you have to give television a lot of leeway when it comes to plot holes, and sci-fi has to be given enormous leeway with technical problems, and it's all okay, and understandable. But what would make the writers think it's okay to turn a well-meaning, honorable, ethically upright (to an extreme) main character into someone who would erase 8,000 people to prolong the life of one person (even though there's no way to guarantee she won't die of some fluke thing anyway) because of his deep feelings for her, and all against her will, knowing full well that she'd never, ever want him to do that? This is disgusting.

I don't normally sympathize with the going-against-character critique, because people do that in real life every day anyway. But Odo Hitler over here is a little frigging extreme, don't you think? This is premeditated mass murder from a logically minded person...because love! 0 - o

I don't understand the praise for this episode. And when it was assumed they were going to let the colony be wiped out, as a crew, they just had a good time together planting stuff? I cannot come to terms with this episode, I'm sorry. Great idea for the first half, but this doesn't make that much sense to me, the way they handled it. You say the episode doesn't cheat, but that out they use with Odo is the ultimate cheat, to me. It cheated the character, it cheated us, and it cheated, in essence, 8,000 people out of their lives, according to the logic of the episode, and everyone in the crew of their choice! Cheat, cheat, cheat, cheat, cheat, cheat, cheat, cheat. Cheat!
Dougie
Mon, Jun 27, 2016, 2:08am (UTC -5)
Re: TNG S1: Skin of Evil

I remember this episode above most others. While its apparent defining moment was Yar's death, it was really Troi's emergence. We can debate whether Sirtis took full advantage of the gift (the same with Worf, who also benefitted), but the counseling she did, starting with "Liar!", was some of her best acting.
Jonesy
Mon, Jun 27, 2016, 12:12am (UTC -5)
Re: DS9 S7: The Changing Face of Evil

Comparing the picture of Starfleet HQ blown to bits with the map of the SF Bay area overlaid with casualties Weyoun and Damar are viewing moments later, I think I found a mistake.
If Starfleet HQ is located in close proximity to the north side of the Golden Gate Bridge, as the image suggests, it would be in Sausalito. But according to the Dominion, the vast, vast majority of the damage was in Oakland and Alameda, i.e., the East Bay. There are zero casualties on the map anywhere near the presumed location.
The Voyage Home seems to corroborate the North Bay location - after all, Chekov has apparently never heard of Alameda - so what gives?
William B
Sun, Jun 26, 2016, 11:33pm (UTC -5)
Re: TNG S2: The Measure of a Man

I agree with Peter G.'s last comment. I would say that I'd tend to view Data and the EMH as probably sentient, rather than probably not sentient, because I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process. Either way, the evidence mostly suggests to me that if one is sentient, they probably both are. If Data's brain is different from the code which runs the EMH, this is mostly not emphasized by TNG/Voyager. (I also agree that the Doctor's seeming to be less limited than Data is maybe an artifact of the Voyager writers not putting as much effort in. It does to some degree support the idea put forward by Lore that Data was deliberately built with limitations so as to prevent him from upsetting the locals too much -- the Doctor veers more quickly and readily toward narcissism than Data does, perhaps because Data is acutely aware of his limitations.)

If I had to describe an overall arc in TNG and Voyager, it would be that TNG introduces the possibility, via Moriarty and "Emergence," that sentience can be developed within the computer, but generally Moriarty is treated as a fluke which is too difficult to deal with. I actually think that they don't exactly conclude Moriarty isn't a life form so much as try to respect his one apparent wish -- to be able to get off the holodeck -- and then put that on the backburner, presumably handing the problem off to the Federation's best experts; why the holo-emitter takes until the 29th century to be built I do not know, but that's the way it is. In "Ship in the Bottle," they let Moriarty live out his life in a simulation as a somewhat generous solution to the fact that he was holding their ship hostage to achieving his, again, apparently impossible request (to leave the holodeck). In any case, in Voyager the EMH is slowly granted rights within the crew, and finally the crew mostly seem to view him as a person, and by season seven the question arises of whether the rights granted to the EMH within the specifics of Voyager's isolated system should be expanded outward. The bias toward hardware over software -- Data and the Exocomps vs. holographic beings -- seems to me to be something that the TNG-Voyager narrative implies is not really based on a fundamental difference between Data and the EMH, but a difference in the biases of the physical life forms, who can more readily accept another *corporeal* conscious being, even if mechanical, as being sentient, rather than a projection whose "consciousness" is located in the computer. I think that narrative implies that the mighty Federation still has a long way to go before coming to a fair ethics of artificial beings, which to me seems fine -- in human history, rights were expanded in a scattershot manner in many cases.

I do think too that the relative ease of Data's being granted rights has a lot to do with what this episode is partly about -- precedent. By indicating the dangers of denying Data rights if he is actually sentient, Picard avoids the Federation becoming complicit in the exploitation of an entire sentient race of Datas, or, if Data is not sentient, the Federation loses out on an army of high-quality androids. Either way, once the decision is made, while it may be tempting to overturn the decision if Data becomes dangerous (and it is implied in The Offspring and Clues that the narrowness of Louvois' ruling means that Data might be denied right to "procreate" or might be disassembled if he refuses orders), if he doesn't it is in the Federation's interests to maintain their own ethical narrative. Because a whole slew of EMHs and other presumably similarly advanced holographic programs were developed by the Federation, granting that they are sentient "now" (i.e., by the time Voyager is on) would mean admitting to complicity in mass exploitation which cannot be undone, only stopped. In miniature, we see this in Latent Image, where Janeway et al.'s complicity in wiping the Doctor's memory is part of what makes it especially difficult for them to change their minds about the Doctor's rights.

I do think that while Voyager seems to be pushing that sentience is possible in holograms, it still does not generally discuss whether the ship's computer *itself* could be sentient life, which is a big question. I guess Alice suggests that a ship's computer could be alive if it houses a demon ghost thing. It might also be that a ship's computer is so far from anything classified as a life form that it is unknown how to evaluate what its "wants" would be, or what an ethical treatment of it would even look like. The same would probably actually apply to some forms of "new life" discovered which would not have "wants and needs" in a way that would be recognizable to humanoids, though I can't think of such examples in Trek.
Peter G.
Sun, Jun 26, 2016, 10:36pm (UTC -5)
Re: TNG S2: The Measure of a Man

@ Andy's Friend

I'm just not sure how you come to the conclusion definitively that Data's 'consciousness' has emergent properties just like a Human consciousness does, and that this is due to his physical brain. There are several objections to stating this as a fact, although I grant it is certainly plausible. I'll just number the objections, not having a better method.

1) I don't see how you can define Data's consciousness as having similar properties to that of Humans since you at no point state what the qualities are of Human consciousness. What does it even mean to say they are conscious, in your paradigm, other than to say they say they feel they are conscious? Is there a specific and definitive physical characteristic that can be pinpointed? Because if, as you say, that quality is some "ineffable" something then we're left in the dust in terms of demonstrating what does or does not possess this quality. We could discuss whether a simulation creates the appearance of it, but that aesthetic comparison is the best we can do. As far as I can tell this is William B's main argument in favor of the episode having contributed something significant to the discussion.

2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference? If, as you yourself say, we don't know anything about what a positronic net is, then how can you say it's fundamentally different from the computer's design? At best we can refer to Asimov when discussing this, and the only thing we know from him is that for some reason a computer that works with positrons instead of electrons is more efficient. It is, however, still a basic electrical system, using a positive charge instead of a negative one, and requiring magnetic fields to prevent the positrons from annihilating. Maybe the field causes the circuits to act as superconductors? Who knows. Asimov never said anything about a positronic brain using fundamentally unique engineering mechanisms as far as I recall from his books. This leads to objection 3, which is a corollary of 2.

3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean? I could guess what it means, but you seem to state it as a fact, which makes me wonder what facts you're basing your statement on. It's certainly possible you're right, but how can you *know* you're right? Frank Herbert himself was convinced that Humans employ non-linear processing capabilities not replicable by machine binary processing. Then again, he couldn't have known there was such a thing as quantum computing. And I'm not even convinced that quantum computing is what you might mean when you (and he) discuss "non-linear" processing, which I suppose we can also call non-binary computing. Even quantum computing, to the extent that I understand it, seems to be linear processing in parallel so as to exponentially increase processing power. I don't know that the processing is necessarily done in a fundamentally different manner, however. It's not binary, but still appears to be linear. If there is a different kind of processing even than this (that maybe we possess) I don't see that we can even imagine what this is yet, no less ascribe it specifically to Data's circuitry.

4) Since it's never revealed that Data's programming and memory can't be transferred to another identical body, I don't see how you can be so sure that the EMH's easy movement from system to system makes for a basic difference between him and Data. If the Doctor is contained in more primitive computers than Data is, then obviously he'd be easier to transfer around, but the distinction then would only be that Data's technology isn't well understood during TNG and thus can't be recreated yet. But this engineering obstacle is only temporary, and once Datas could be constructed at will I don't see how you could be sure his personality couldn't be transferred just as easily as that of the EMH. Once starships have positronic main computers there would seemingly be no difference between them at all, and likewise once android technology is more advanced there seems to be no good reason why the EMH couldn't be transferred into a positronic android body (if someone were to have the bad taste to want to do this :p ). The differences you state in this sense seem to me to be superficial and not really related to any fundamental difference in consciousness between the two of them. They're each contained in a computer system, one being more advanced than the other, but otherwise they are both programs run inside a mechanical housing. Data, like the Doctor, is, quite literally, Data. His naming is hardly a mere reference to the fact that he processes data quickly, I think, since the ship's computer does that too. Now that I think of it, his name somewhat reminds me of Odo's, where each is meant to describe what humanoids thought of their nature when they found them; one as an unknown specimen, and one as data.

5) If anything Data is even more constrained than the Doctor in terms of mannerism and behavior. He cannot use contractions, cannot emulate even basic Human behaviors and mannerisms, and cannot make errors intentionally. By contrast, the Doctor seems to learn much more quickly and is more adaptable to the needs of the crew in their situation. Both he and Data adopt hobbies, but while Data's are merely imitative in their implementation, the Doctor appears to somehow come up with his own schtick and tastes that are not obvious references to famous singers (as Data merely copies a violinist of choice each concert) or to specific instances in his data core. He really does, at the very least, composite data to produce a unique result, as compared to Data, who can't determine a way to do this that is not completely arbitrary. I'm not trying to make a case for the Doctor's sentience, but if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data, however what's done is done and we have to accept what was presented as a given. I would say that, at the least, if Data is sentient then so is the Doctor, although my preference would be to suggest that neither is. But William B's point is valid, that a hunch that this is so shouldn't be confused with the certainty needed to withhold basic rights from Data, which is all the court case was deciding. Then again, I see another weakness with the episode as being that the same argument could be turned on its head, with the position being that by default no 'structure' is automatically assigned 'sentient rights' until proven it is sentient. The Federation doesn't, after all, give rights to rocks and shuttlecraft *just in case* they're sentient. 
In fact, however, their policy (at least as enacted by Picard), appears to be closer to granting rights to any entity demonstrating intelligence at all, whether that's Exocomps, energy beings, tritium-consuming entities, the Calamarain, or any other species that demonstrates the use of logic and perhaps the ability to communicate. Picard's criterion never seems to be sentience, but rather intelligence, and therefore we are never offered an explanation of how this applies to artificial life forms in particular, since some of them (like Exocomps) are treated as having rights based on having "wants", while others, like Moriarty, are not discussed as having innate rights, despite Picard's generous attempts anyhow to preserve his existence. Basically we have no consistent Star Trek position on artificial life forms/programs. Heck, we are even given an inkling Wesley's nanites could develop an intelligence of some kind, even though they clearly have no central processing net such as Data's, and even if they do it wouldn't be as sophisticated. So why is an Exocomp to be afforded rights, but not the doctor, when the Voyager's computer is likely vastly more advanced than the Exocomps were? My main point here is that there is no conclusive evidence given by Star Trek that broadly points to Data as having some unique characteristic that these other technological/artificial beings didn't have, or contrariwise, that if they have it the Doctor specifically doesn't. We just don't have enough information to make a determination about this.

I guess that's enough for now, since I'm even beginning to forget if there are other points to respond to and I haven't the energy right now to reread the thread.
Set Bookmark
Nolan
Sun, Jun 26, 2016, 9:54pm (UTC -5)
Re: Star Trek II: The Wrath of Khan

Everyone and their grandma always points to the "Khaaaan" yell as THE definitive hammy Shatner/Kirk delivery, but no one seems to get that he only did it in response to Khan telling him that he'll go back to the dead-in-space Enterprise and blow it up, leaving him stranded. Which Kirk KNOWS isn't going to happen, because the Enterprise is fine and'll be right around the corner. The "KHAAAAAAN!" was purely for Khan's benefit, selling the crew's ruse to him.
Set Bookmark
Ivanov
Sun, Jun 26, 2016, 9:36pm (UTC -5)
Re: TNG S3: The High Ground

For those wondering why the Rutians wouldn't grant the Ansatans independence, I have a theory.

Considering that they already have a long-standing trade agreement with the Federation, perhaps the Rutians were planning on eventually applying for membership? And the Federation has that arbitrary rule where a planet needs to have a single united government in order to be accepted. It's obvious the Rutians and Ansatans wouldn't get along, so the main government's only option was to hold on to the other continent no matter what.

The last part with the kid was cheesy, and the music certainly doesn't help it.

I like this episode, even if Finn conveniently knew about George Washington and the United States. (Isn't Dr. Crusher Scottish?) 3 Stars
Set Bookmark
Skeptical
Sun, Jun 26, 2016, 7:24pm (UTC -5)
Re: Star Trek II: The Wrath of Khan

With all these comments, only Jammer has mentioned the reason I think this movie is so great: Ricardo Montalban. His performance here is absolutely fantastic, by far the best performance of any guest actor on Star Trek ever. Every scene he's in, his presence is so commanding you can't help but focus on him. Khan is simply larger than life, not just a person but a villain, a force, an ominous presence that simply cannot be ignored. He makes even Kirk look smaller in comparison.

I heard an interview with Montalban once talking about this role in the movie. He said that the director allowed him to go almost, but not quite, over the top in his performance. And you can see that with every line he says. He never quite reaches the level of ham (unlike Shatner with his infamous "Khaaannn!" shout), so he always seems sincere and always feels like a real person. Yet, despite that, his presence is calculated, showman-like, theatrical.

I mean, consider this line: "He tasks me... he tasks me and I shall have him. I'll chase him round the moons of Nibia and around Antares' Maelstrom and around Perdition's flames before I give him up." That line could have been a huge dud; it sounds kinda cheesy when typed out like this (and yes, I know it's a classical reference). Yet, when Khan says it, it is chilling, it is threatening, it is brilliant.

A part of me is sad that Khan wasn't a TNG-era villain. I'd have paid good money for a 2-hour movie of nothing more than Patrick Stewart and Ricardo Montalban quoting Moby Dick at each other.

But I digress, back to Khan. Like I said, Montalban played him as a theatrical, larger than life villain. Not only was it a brilliant performance that was a joy to watch, but it really makes sense for the character to be played that way. Despite the title, wrath is not the deadly sin that Khan is guilty of, it's pride. It's sheer arrogance in his knowledge that he was genetically superior to the rest of humanity. He is stronger, he is smarter. And in his mind, that makes him flat out better, at everything.

So he spends this movie quoting Moby Dick, putting himself in the position of Ahab. Now, Khan isn't an idiot, he knows Ahab is not supposed to be a hero, not supposed to be someone to identify with. Khan knows that the point of the book was that Ahab's quest for revenge resulted in poor decisions that led to his own doom. So why is he identifying with Ahab here? Why is he putting himself in the position of being the idiot making horrible decisions that will lead to his own death? Simple, he thinks he's better than Ahab. And part of that means he can succeed in his insane vengeance against Kirk where Ahab fails.

This whole plot is basically an adrenaline high for Khan, a way of showing off. Vengeance may be bad for mere mortals, but not for the brilliance that is Khan. Khan can make irrational decisions and come away with victory. Khan can gloat over the hero without it backfiring on him. Khan can make all the same choices that Ahab made, and come out smelling like a rose. Because he is superior to Ahab. He is superior to Kirk. He is so freaking smart that he can outsmart Kirk even when he's being stupid.

That's why he's quoting Moby Dick. That's why he's being theatrical and larger than life. He's showing off. To Kirk, to his crew, to himself. He doesn't just want to rule the galaxy or even defeat Kirk. He wants to rub Kirk's nose in it. He wants to play life on the difficult setting, putting artificial difficulties into his scheme just so that he can show how easily he can overcome them. And he's enjoying every moment of it.

If he had simply listened to Joachim at every step, he would be free to terrorize the galaxy with his ultimate weapon. But he couldn't resist showing off. He couldn't resist toying with and humiliating Kirk. And that was why, in the end, he failed. Because he failed to recognize his own limitations. He failed to check his pride. And it's a darn good thing he failed to do that, because it made this movie so much better. It's why there's no villain in the Star Trek pantheon that can live up to Montalban's Khan. Not Kruge, not Chang, not the Borg Queen (though they may have their style points), and certainly not Sybok or clonePicard or CumberKhan.

Above all else, a villain should be interesting to watch. And you simply cannot stop watching Khan, from the moment he slowly peels off his desert protective clothing to his last dying gasps.
▲Top of Page | Menu | Copyright © 1994-2016 Jamahl Epsicokhan. All rights reserved. Unauthorized duplication or distribution of any content is prohibited. This site is an independent publication and is not affiliated with or authorized by any entity or company referenced herein. See site policies.