Comment Stream

Andy's Friend
Fri, Feb 21, 2020, 4:58am (UTC -6)
Re: PIC S1: Stardust City Rag


"I personally prefer the way the new shows are geared towards adult viewers. Adults swear, bat'leths stab, people bleed, and the preachy utopia of a well-run Federation starship in peacetime is revealed (…)"

i. I personally prefer the way Chinese Communism is now geared towards market forces compared to the 1960s. I just don't call it Communism anymore.

ii. The thing is, I never liked Communism to begin with. So I have no problem with applauding the steps that have been and are still being taken towards a more market-oriented, private venture-friendly economy in China despite the authoritarian nature of the regime. But as I recognise that the ideology of the Communist Party of China is no longer Communist, I call it something else.

iii. This is the crux of the matter. From what I read, Star Trek: Picard is as much Star Trek as China under Xi Jinping is Communist. China is still authoritarian, sure. But so was Pinochet's Chile. Communist it ain't, however.

iv. You apparently never accepted the "preachy utopia" premise of Star Trek. Fine. You prefer cynicism. Fine. There are many bleak series in sci-fi trappings to watch. You enjoy this new series. Fine. All fine. But please, don't insist on calling this Trek.

v. The feeling I have is not that some fans abhor the cynicism in some modern television productions. I believe that a great many Star Trek fans also liked BSG, for example. But they were able to see that BSG's fundamental assumptions regarding society and human nature were different from Star Trek's.

vi. The feeling I have is therefore simply that many cannot understand how some will handwave away the evident discontinuity in psychology and ethos depicted in the new series vis-à-vis its immediate predecessors in-universe, TNG-VOY.

vii. It is perfectly valid to criticise the new series (Discovery and Picard) as not being Trek, and it is perfectly valid to like the new series. What is not valid is to like them *as Star Trek*, due to said discontinuity of psychology and ethos.

viii. This, then, is the problem. Some fans who never truly accepted the fundamental "utopian" premise of Star Trek insist that this *is* the "realistic" depiction of humanity, also in the 24th century. They grasp at straws, they conjure every imaginable line of script despite massive evidence to the contrary, because they *want to believe* that this cynical vision could be Star Trek.

ix. Perhaps it could. But not in the span of twenty years, from the late 2370s as in TNG-VOY to the late 2390s as depicted now.

x. Paradoxically, this goes precisely against the *realism* that so many argue that NuTrek represents. It is simply not realistic to expect that people born in the 2330s-2360s -- in other words, the *adults* in the late 2390s, not only the people in positions of power but the entire living, breathing, adult human tissue of the Federation -- would revert so much from their "evolved", "utopian" psychologies as depicted in the 2370s to what is depicted now.

xi. You want a "gritty", bleak, pessimistic, cynical vision of Star Trek? Fine. But place it in the 2440s, and give us *plausible causality* that may explain how, in the course of two or three generations, things changed. Don't have us believe that it happened in a mere twenty years, *over nothing*, and defend such an untenable proposition.

xii. See viii.

xiii. See i.
Andy's Friend
Sat, Feb 15, 2020, 5:05am (UTC -6)
Re: PIC S1: Absolute Candor

@ Melota, @Yanks

Melota — ‘We saw plenty of very morally dubious decisions from Starfleet Admirals throughout TNG.’
Yanks — ‘Yeah that is kind of the running joke of Star Trek. The moment you become an admiral you want to do nasty stuff.’

I wouldn’t say plenty, and Yanks is essentially right: it was more a running joke than anything serious. However—and this is a huge however:

Melota — ‘We saw how quickly the McCarthyite witch hunt took hold in The Drumhead.’

No, we certainly did not. What we saw was how quickly unjust accusations were revealed to be just that. And please note that those unjust accusations were made by a deranged admiral.

‘The Drumhead’ is likely one of the most abused episodes of TNG. Note that I write ‘abused‘, which it certainly is. But it is possibly also one of the most *misunderstood* of TNG episodes.

‘The Drumhead’ is a moral tale, and it is a cautionary tale, yes. Picard rightly warns of the dangers of succumbing to paranoia, yes. But it does not depict a Federation on the verge of becoming paranoid, not even a Starfleet prone to paranoia. Quite the contrary. The initial suspicions against the Vulcan officer are relatively innocuous. And the moment it becomes clear that Admiral Satie’s accusations are but paranoid delusion, she is cut short. She is cut short by Picard, in one of his ‘wonderful little speeches’, yes. That happens because Picard is the main star of the series. But it could have been anyone, and it would have been were Picard not said main star, and able to deliver such ‘wonderful little speeches‘.

The point is thus not that Picard is some sort of ‘More Starfleet than Starfleet’ super-captain, able to perceive what no-one else does, denounce what no-one else can, and uphold higher values than both the organisation and the society he serves. The point is thus not that Starfleet, and the Federation, are in fact baser than Picard is. No. He is simply the mouthpiece for the ethos of the Federation, entitled to the more substantive lines and grandiloquent speeches as per Mr Stewart’s contract and talent.

Note how that episode cleverly uses Worf, a Klingon with extreme sense of order and naturally biased against Romulans, to service the plot. Had it not been for Worf, another character, with a backstory similar to O’Brien’s re the Cardassians, would have had to be written to serve Worf’s function. For in TNG, we all know that around 2370, more ordinary Starfleet officers would balk at Satie’s propositions.

Note also how the episode must make Admiral Satie psychologically unhinged in order to even make the story believable: for in TNG, we all know that around 2370, only an unbalanced person would make the kind of false accusations she makes.

In short, ‘The Drumhead’ is as fine an espousal of Star Trek 24th century Federation ethos as any. It does not, contrary to what is so often claimed, show how quickly Starfleet or the Federation might succumb to racism, xenophobia, isolationism, and other paranoias. It does quite the opposite. It shows us how quickly any Starfleet officer *with sufficient insight*—in this case, quite naturally, our main star—would unmask such paranoia. All while, quite correctly, warning *the audience in the 20th century* against such paranoia.

As always, we must learn to differentiate, and to recognise when the ‘big speech’ is being directed primarily at the in-universe characters (say, ‘The First Duty’), and when it is being directed primarily at the audience as commentary, as here. TNG generally struck a balance between the two deliveries, and did so masterfully. Unfortunately, in the case of ‘The Drumhead’, this is not understood by most fans. Picard is not speaking to his fellow officers: he is speaking to us.
Andy's Friend
Fri, Feb 14, 2020, 8:36am (UTC -6)
Re: PIC S1: Absolute Candor

@A A Roi

‘Maybe watch The Undiscovered Country regarding a similar situation with the Klingons. Maybe watch Balance of Terror re: racism towards Romulans/Vulcans among humans in the Federation, or Dr. McCoy's casual racism versus Vulcans throughout TOS. Maybe watch the episodes in the Berman era where O'Brien refers to Cardassians as 'Spoonheads'. Maybe watch Measure of a Man and Offspring to see how Starfleet feels about Androids. Maybe watch how the Federation Council voted to withhold the cure for Section 31's virus from the Founders in the Dominion War.’

1. One shouldn’t use TOS (2260s) as a benchmark for the Starfleet and Federation of the 2390s, but TNG-VOY (2360s-2370s).

2. a) Please tell me in which episode O’Brien uses the term ‘spoon head’, I seem to have forgotten.

2. b) You seem to have missed the point. O’Brien’s ‘Cardies’ shows his resentment for what he experienced at Setlik III, yes. But he uses that term in DS9, not TNG: different writers, and a different ethos. And this notwithstanding, it shows how restrained that resentment of a Starfleet officer is even in DS9-Trek. He didn’t call them ‘F*cking spoon heads’, did he?

3. a) re ‘Measure of a Man’. Again, you seem to have missed the point. Maddox and Starfleet simply haven’t grasped the nature of Data. To them, *in this episode*, Data is essentially a robot, a glorified toaster: they have not understood him to possess artificial consciousness. Granted, *this is a conceit*: he would never have graduated from Starfleet Academy without making the Academy aware of this, and, by extension, Starfleet. But it is a conceit we must accept in order for the episode to play out its story, which is a glorious one. And again, *as nearly always in TNG*, we see *dialogue, even if adversarial as here, bring about enlightenment*, and change of heart. By the end of the episode, Maddox no longer regards Data as a mere advanced robot, does he?

3. b) re ‘The Offspring’. I’ll partly grant you this one. After ‘The Measure of a Man’, Starfleet should, perhaps, be ready to let Data evaluate and educate his own creation. Starfleet does appear to be too adversarial here. But again, this is one of those conceits we must accept in order to tell a story, which is what TNG did in virtually every episode. The episode itself is sound. Like ‘The Measure of a Man’, it is a story that uses advanced, futuristic science in order to tell what is essentially a moral tale. Some would even call it science-fiction.

4. I’ll grant you this one, on two counts. The cure for the virus affecting the Founders *should* have been given them by the Federation to prove the benign nature of the Federation, and thus bring about an end to the Dominion war; but of course, the virus should never have been developed by the Federation in the first place. More idealistic writers might perhaps have thought of having the virus evolve independently and affect Odo also, and let Federation doctors attempting to cure him develop a cure for all the Founders as well. Roddenberry would surely have preferred such an ending, as it would showcase the best values humanity has to offer. Roddenberry-era Star Trek was never so much about realism as it was about idealism.

Point 4. above is one of the (many) reasons why so many TNG fans have a problem with DS9, even if we readily admit that it was a fine series. It is a symptom of something deeper, with problems concerning both its serialisation—it tells far fewer stories—and its core ethos: if the quantity of stories is lower, their quality, in ethical terms, is often lower, too. DS9 is less morally satisfying (or, if you prefer, more morally challenging) than TNG. And whether one prefers TNG or DS9 is, essentially, a moral question.

A better example of the Federation and Starfleet ethos around the same time as the end of the Dominion war in DS9 is VOY’s seventh-season ‘The Void’, which takes place in the year 2377. VOY as we know is plagued by very uneven writing, and Janeway proves to be a fascinating study of a captain broken by circumstances. I find many of Janeway’s decisions provocative, and some, as in ‘Tuvix’, outright disgusting. Perhaps it is true that it is when we are most tested that our true colours show, and the extreme circumstances in ‘The Void’ certainly test our crew, and captain. It feels reassuring—or, if you prefer, morally satisfying—that even after being stranded in the Delta Quadrant for seven years, that broken, guilt-ridden captain is able to display such fine Federation values as in ‘The Void’. Some would even call it humanism.

Humanism is also displayed by The Doctor in the episode ‘Critical Care’ that same season. The Doctor is ultimately the end result of Federation programming, and exhibits that same Federation ethos. I am reminded of both him in ‘Living Witness’, and a younger Jean-Luc Picard, in ‘Emergence’:

‘The intelligence that was formed on the Enterprise didn’t just come out of the ship’s systems. It came from us. From our mission records, personal logs, holodeck programs, our fantasies. Now, if our experiences with the Enterprise have been honourable, can’t we trust that the sum of those experiences will be the same?’

A mere twenty-two years separate the events in ‘Critical Care’ and ‘The Void’ from those in this new series. Young men and women then should now be people in their prime. Do tell me, A A Roi: what happened to humanity?
Andy's Friend
Fri, Jan 31, 2020, 8:10pm (UTC -6)
Re: PIC S1: Maps and Legends

@ Bold Helmsman

"On the matter of cursing on Trek, I've never looked at it as them taking license to be immature. People have been spicing their language with curses since time immemorial, especially when they get emotional. It's simply part of human nature, even evolved humans."

That is only partially true. The more well-bred you are and the more education you have received, the better manners you will have and the less you will curse.

Take a thousand Ph.D.s and a thousand [insert menial worker of choice] in [insert country of choice]: despite national differences as regards what levels of profanity are tolerated, the thousand Ph.D.s will, on average, curse less than the menial workers *in any country*. They will curse less in normal daily life, and, if exposed to the same levels of stress, they will curse less under stress, too.

I have not watched and will not watch Picard; but a 24th century Starfleet Admiral using that kind of language, especially to *Picard's* face, is like having Grace Kelly use it to James Stewart's face. Unimaginable. Grace Kelly wouldn't have done it in 1955, and she wouldn't have done it in 1975, either.

Society has to deal with a huge inertia when it comes to the human psyche. People don't change much in the course of their lives, and certainly not in twenty years: change happens from one generation to the next. This applies especially to good manners. Good manners are part of your identity, part of what gives you your dignity. You may lose your fortune, your friends, your family even: good manners are among the last things to die.

The people who had very good manners aged thirty-five in 1955 still had very good manners aged fifty-five in 1975: it was their kids who were behaving differently. This is true as far back as you wish to go in recorded history. A well-bred late Victorian lady and gentleman -- say, Professor Moriarty and his lady friend in 'Ship in a Bottle' -- wouldn't curse and swear even after the horrors of the Great War.

On TNG's Earth, all men and women were modern-day ladies and gentlemen, so to speak, and certainly all Starfleet Academy graduates, infused with the Federation ethos: if not wise, at least educated; if not elegant, at least sophisticated; if not noble, at least gracious. (And, the reprimand given with wit and sagacity is much more effective than profanity.)

In other words, it seems like an absurd societal development has occurred from the year 2367 to the year 2397. People who grew up and were brought up to be gracious have forgotten even their manners. I don't, I can't believe it. But then again, based on what I read here and elsewhere, there are many things I can't believe about modern 'Star Trek'.
Andy's Friend
Mon, Nov 18, 2019, 12:17pm (UTC -6)
Re: TNG S2: Q Who

You're quite right, Jason, but let's not split hairs: you remember the episode as well as I do, and what matters is not the above, but *how* Picard delivers this line:

PICARD: Absolutely. That's why we are out here.

That is what causes Q's response: Picard's nonchalant 'absolute' certainty. For it is (to be blunt) sheer nonsense: Starfleet could of course never be 'ready to encounter' all things, and Picard should have known this. So in the end, while I appreciate the difference between being 'prepared' and being 'ready' that you mention, it is largely academic, and beside the point. Other than that, you are obviously right.
Andy's Friend
Mon, Nov 18, 2019, 11:33am (UTC -6)
Re: TNG S2: Q Who

@George Monet

You have to look at it from the perspective of classic storytelling, and forget about such silly modern notions as 'plot holes'.

Take for example Picard's initial assertion that Starfleet is prepared for whatever is out there. This is admittedly out of character for Picard and outright silly. But it is nothing but an instance of Classical hamartia, the hero's 'tragic flaw', moving the plot forward and leading to catharsis as he is humbled by Q and learns his lesson: "I need you!"

We know Picard to be better than this. And therein lies the greatness of this episode. Facing Q and letting his animosity toward that entity get the better of him, Picard, our hero, errs. And it costs him eighteen of his crew to learn that. In other words, his over-confident initial stance is not a 'plot hole', it is a time-honoured plot device.

Star Trek is rife with such classic storytelling devices, which we must be able to recognise in order to fully appreciate many of the stories told. Star Trek, more often than not, is not about 'realism': it is about archetypes, classic tropes, and ancient lessons. This was understood thirty years ago when this episode aired. The problem is that viewers these days have an exaggerated appetite for realism, all while they seem to have forgotten all about classic dramaturgy and apparently only know how to shout 'plot hole!'
Andy's Friend
Thu, Aug 23, 2018, 6:47am (UTC -6)
Re: TNG S4: Galaxy's Child

@Chrome, Peter G., William B:

I have never really seen what the problem is in this episode, or its predecessor (‘Booby Trap’). It is a story as human as it gets. Are people really so detached from their humanity nowadays? Are we really so fast becoming robots?

Meet Peter, Paul, and Mary. Mary has just met Peter. Peter then tells Mary about his good friend, Paul. Over the next few days or weeks, as Peter and Mary keep meeting and having nice conversations about themselves and their lives, Paul keeps coming up. In the end, after several long chats, Mary feels that she has a pretty good idea of who Paul is. But when she finally meets him, she finds that, although everything Peter told her about him was true, Paul doesn’t correspond, perhaps not at all, to the idea she had made of him.

This is something any minimally adult person will have experienced in life. Nothing new here so far.

Now imagine that Peter is absent at Mary’s first meeting with Paul. This immediately creates a slightly awkward situation, for Mary will have some, or perhaps detailed, knowledge of some events in Paul’s life, and Paul has no idea which. What does she know about him? What has his friend told her? Has he exaggerated? Has he been truthful? Has he told her any truly intimate details? Familial matters? Matters of life and death?

Again, none of this should be unknown to any adult: if it is, he or she has been watching too much television, and interacting too little with other people. This is what happens when people meet, and talk. ‘I have a friend who…’ … ‘My cousin is…’ and so on, and so forth. And any sane adult knows which kind of details of a personal nature are innocent to share with a new acquaintance, and which are not. ‘My friend Paul likes wasps’ is fairly innocent. ‘Paul has weird sexual fantasies of being a giant wasp‘ is perhaps not. But of course, if Mary tells Paul, in Peter’s absence, ‘Your friend told me that you like wasps…’, the poor fellow will have no idea of just how much more his friend has told her. Again: absolutely nothing new here.

This episode and its predecessor are therefore intelligent, in that Peter is replaced by a computer, and Paul is artificially created by one in the first instalment. This is simply science-fiction doing what science-fiction should, and showing us new iterations of ages-old human issues made possible by technology. But the problems themselves are as old as mankind. There is nothing new under the sun.

None of this is creepy. None of this is inappropriate. None of this is unprofessional — especially in 'Booby Trap'. We are humans, for Christ’s sake, not robots. All this is extremely human — although I will agree that Geordi’s handling of the situation is, shall we say, clumsy. But that is precisely his trademark when dealing with the opposite sex. As such, these two episodes are good both as sci-fi and as character studies.

A final commentary: I am baffled at the amount of criticism Geordi gets from viewers over these two episodes. I believe this is a cultural phenomenon. I realise that in the United States these days — as well as in Scandinavia where I live — a current in society wishes to transform human beings into orderly robots, or robotic consumers. Disenchantment, in the Weberian sense, is everywhere around us. Society is increasingly desacralized. There is no magic garden any longer, no wonder. Everything is explained rationally and scientifically, with molecules and mathematics, and we humans are increasingly expected to behave rationally and scientifically, while increasingly being reduced to numbers in algorithms ourselves at the same time.

We see how this affects cognition, and argumentation. People increasingly attempt to win arguments based on statistics, not philosophy: numbers, not ideas. We are fast un-learning how to reason. 'Time is money', we are told, and in order to save eight seconds here and twelve seconds there, we are increasingly asked to forget how to think. Let technology do that for us. What a 'Brave New World' this is becoming: that nightmarish scenario is fast becoming true. And it is becoming one at an alarming pace.

Unfortunately, part of this discourse seems to have distorted the perceptions of younger generations of what it means to be human, to the point that even loving and caring gestures are deemed ‘inappropriate’. I have seen people online commenting that Melanie in Hitchcock’s ‘The Birds’ (1963) is behaving ‘inappropriately’ for ‘breaking into’ Mitch’s house to leave him the two lovebirds, and the note, for example. I have even read American students online commenting that Romeo is a ‘creep’ for ‘stalking’ Juliet, for crying out loud: this is how far removed younger generations seem to be from their own humanity today.

And here I see people complaining that Geordi, the nicest guy in all of Star Trek, is a creep, too. Why is that? Is it because he is seen to behave like a pervert? No: it is precisely because he is seen to behave like a human. What a truly frightening scenario this is.
Andy's Friend
Sun, Nov 12, 2017, 12:18pm (UTC -6)
Re: DSC S1: Si Vis Pacem, Para Bellum

Re diversity on Star Trek: Discovery:

The problem with the ‘diversity’ seen on DSC is that it is defensive in nature, not innovative.

The ‘diversity’ seen is not trendsetting: it is merely following trends. It is meant to avoid accusations of not living up to the sensibilities of some modern viewers—not to serve as inspiration to make us appreciate true diversity.

There is nothing particularly ‘diverse’ about showing blacks, or gays. They are people like everyone else, and virtually the entire target audience already acknowledges this. This is not a series for Muslim fundamentalists, after all.

I cannot understand the silly American obsession with wanting to see oneself on-screen. It’s puerile, self-absorbed, and actually quite pathetic. I don’t need to see straight white males like myself to enjoy a good story. If China were making great sci-fi with an all-Chinese female crew only, I would love to watch it. When I lived in India, I watched Indian tv and films almost exclusively. I don’t need a straight white male in ‘Devdas’ (1955) to grasp the beauty of that story. What an intelligent audience wants is good stories, and good writing. So far, DSC is offering none of that.

Showing diversity would be having a couple of Hindu bridge officers profess their undying mutual respect and affection in an arranged marriage after Indian—or Vulcan—tradition, and show that arranged marriage evolve to be a happy one.*

Showing diversity would be to have an exceptionally charming, male Muslim bridge officer marry three different women among the crew, and show that polygamous marriage evolve to be a happy one.*

Showing diversity would be to have say, three people of assorted races and sexes all knowingly date each other—and showing that polyamorous relationship to evolve to be a happy one.*

*Within the ‘normal’ parameters of ‘happy’ relationships, not utopian bliss.

Think about it. What would that tell us about diversity and tolerance, regarding just this one aspect—relationships, amorous relations, and marital traditions in other cultures—in the future society depicted?

But we know why this is not the kind of diversity we are shown, don’t we? The truth is, there is neither much creativity among the creative forces behind DSC, nor any desire to show a more tolerant and diverse future.

So, we get this rubbish ‘diversity’ of ‘black female’ and ‘gay’, which is nothing but deeply offensive if you stop and think about it for a moment.

A Russian, a Japanese, and a black woman meant something fifty years ago. The ‘diversity’ we see on DSC means next to nothing today. It is not innovative, and it is not provocative. All things considered, the only thing that is provocative about these characters—including the gay relationship—is the lousy writing affecting virtually all of them in virtually all episodes of Discovery.
Andy's Friend
Sat, Oct 28, 2017, 5:13pm (UTC -6)
Re: DSC S1: Lethe


Q is something entirely different, and you know it: he is a device to tell fabulous stories that deal with myths and archetypes. He is on an entirely different level of storytelling than Magical Tardigrades on Mushrooms[TM].

I had never thought I would be repeating Elliott's arguments, but there you are: the main point of Q is to present us with possibilities and challenges untold, without us having to delve into pedantic minutiae of plausibility. For we understand that the nature of Q is mostly symbolic, and that he functions on what is essentially a metaphorical plane.

In this interpretation, Q is the ultimate abstraction in Star Trek, beyond even the sort of outlandish alien existence I used to write that Star Trek should have more of, to force us to imagine and attempt to understand the truly alien. Who else but a seemingly omnipotent entity could put mankind on trial? Who else could tempt human beings with that sort of omnipotence?

Q has little to do with the technological debate you were having with wolfstar. Indeed, he even serves to illustrate the maxim that any sufficiently advanced technology is indistinguishable from magic, making that magic-like quality precisely a crucial part of his function. In that other interpretation of his nature, contrary to the purely abstract one I first mentioned, he enables that very question: is Q merely a being possessing extremely advanced technology?

Many Star Trek fans focus on the *form* of Q. What really matters, however, is his *function*. The ambiguous nature of Q is inherent to that function. Either way, whether as a near-omnipotent entity, or a being manipulating unfathomably advanced technology, Q is, essentially, the perfect enabler of stories.

Who else could transport the Enterprise to a distant part of the galaxy, to humble our heroes and give them a very necessary perspective on the challenges awaiting them? Who else could have one of our heroes die, only to give him an equally valuable perspective, and a second chance at life?

All this is on an entirely different level of storytelling from the sort of 'storytelling' we see on DSC. But there you have it: TNG was dedicated to thematically ambitious storytelling. I still don't know what DSC is about. Frankly, I also no longer care: this series is a complete mess.
Andy's Friend
Fri, Oct 27, 2017, 6:53am (UTC -6)
Re: DS9 S6: Behind the Lines


" Do you honestly think Hitler was doing PR during his campaigns across Europe in WWII? Dear god, some of you will excuse any bad writing with any ridiculous crap that enters your skull. "

Hitler in Paris and the Nazi German flag being raised over the Acropolis in Athens immediately come to mind.

DLPB, think of any major incident on the Eastern Front in WWII, or of many on the Western Front: what images come to mind? How many of those images are not German? Is the majority of photographic material, so many of the images we usually associate with a great many events in WWII, as in the examples above, not of German origin, made by officials of Nazi Germany?

You may wish to have a look at the German Wikipedia page on the Propagandakompanie of the Wehrmacht, the professional corps of German war photographers, filmmakers, journalists, etc., with a good introduction and links to several dozen of its most prominent individuals:

It fittingly opens with one of the most famous photos documenting its use: a photo exhibition in March 1940 in Berlin "documenting" to the Germans the fine war effort of their troops.

To see a few thousand photos by 462 Propagandakompanie photographers, see:

Note that many photographers more or less specialised: in certain theatres of war, but also, in certain types of photos, say, 'the everyday life of our troops' -- the canteen or sanitary facilities in barracks, soldiers doing routine maintenance of equipment, soldiers eating, playing games, or otherwise socialising, soldiers writing and sending letters home, etc.

All this is, to some extent, documentation. But it is also a deliberate selection of themes for specific purposes. It is also propaganda.

The same is true of photos of German troops interacting with occupied peoples. Usually, such photos humanise the German troops. Often, they humanise some of the occupied peoples. And some times, they dehumanise certain other occupied peoples. Which, where, when, and why? Again: propaganda.

Finally, note that some of the most memorable photos of the war were specifically staged for the photographer, even if seemingly taken in mid-action during some event, as if the photographer were simply a bystander taking a picture. Often, he was not: entire scenes were choreographed, rehearsed, and repeated in his honour, for his camera to capture. This also includes newsreels, etc. Pure propaganda.
Andy's Friend
Fri, Sep 29, 2017, 4:58pm (UTC -6)
Re: DSC S1: General Discussion

@ Jammer

Indeed. I was referring to the three latest films, for as I also just wrote, I haven't seen Discovery, and I don't presume to be categorical on what I haven't seen. And in any case, a pilot episode, and one with most of the crew absent at that, is not enough to give anyone a clear indication, of course. Let's see what happens, and hope for the best.
Andy's Friend
Fri, Sep 29, 2017, 4:32pm (UTC -6)
Re: DSC S1: General Discussion

@ Brian S.

You're absolutely right about everything you just wrote. But there is more to it than that. While I cannot presume to speak for Michael, I should say this:

TOS did something amazing: it had its cake and ate it, too. Meaning it created a diverse crew, to promote diversity, as you just wrote. But it didn't do it at the expense of great stories: it told great stories, also. And many of those stories were monuments to humanism: there was a *coherence between the cast and the stories, between style and substance*.

Modern 'Trek' doesn't do this. Modern Trek, meaning the post-Berman age of the J.J. Abrams films, tells atrocious stories. Modern 'Trek' doesn't care to edify, doesn't care to inspire, doesn't care to provoke our thoughts *story wise*.

Therefore, all its diversity is superficial only. And therefore, it becomes extremely frustrating to see such focus on what is but hollow and token diversity. "Look, we have transmorphics and robosexuals among the bridge officers!" This is shallow, and puerile. It's all about style. The stories told simply don't support any claims of humanism. What does it matter, then, that there are transmorphics and robosexuals among the crew, when all is but a cynical, shameless lie?

As I said, I can't presume to speak for Michael. But I for one resent all the emphasis given to the composition of casts, when the stories told are as atrocious as they are. Regrettably, the American public seems to care more about having x% blacks, x% Asians, x% females, and x% robosexuals among the cast than what stories are actually being written and told.

In that sense, all the talk about diversity in 'Star Trek' nowadays strikes me as very tiresome: for not only is it superficial, but even worse, it is cynical, and calculating. It is no longer setting the trend, as it once was: it is merely following it, expecting to get our hard-earned money in return. Diversity doesn't get much more fake than that, does it?
Andy's Friend
Fri, Sep 29, 2017, 2:29pm (UTC -6)
Re: DSC S1: The Vulcan Hello / Battle at the Binary Stars

@Chrome, @Omicron (below)

Chrome: "Re: the pointless "SJW" witch hunt, can't we just use normal words like activists or progressives or something? I feel like SJW is a net-only derogatory term that just leads to polarized discussions"

You're right about the last part, but that is true of any term, regardless of how correct it is: using terminology to divide people and limit them to one of two positions in a discussion, even when objectively correct (positivism vs hermeneutics; idealism vs physicalism; isolationism vs contextualism, etc., etc.), tends to be divisive, as if no compromise, not even dialogue, were possible.

But---and this is a big but, for I am not American and don't live there, and American reality, from my European point of view, seems tragically and almost hysterically warped these days---I also tend to view "SJW" (a term I never use myself) as a very unpleasant type of personality.

The way I see it (and I may be wrong), SJWs are beyond simple "activists" and "progressives", as you suggest. To me, they seem to be radicals: the sort of unpleasant people suffering from some monomania, whether animal rights, women's rights, or whichever cause they have become enamoured with.

Such people tend to be exceptionally obnoxious, as they seem to live to see transgressions of the particular cause they have adopted everywhere. They see little else and speak of little else, but speak about it a lot.

I agree with you that we should avoid simple labels; we are all more than Marxists, meat-eaters, or Real Madrid fans. But I always thought that the more intelligent type of people who nevertheless use such terms use them to denote an excessive zeal of some sort---whether in a radical, a true fanatic, or simply a youth who just wants to belong somewhere, doesn't really have a clue of what he or she is talking about, but talks about it excessively.

Tell me, is this perception wrong? Is the term really used that loosely?


@Omicron: I'm with you on this one. I haven't watched Discovery yet, and may never. The trailer suggests anything but Star Trek to me: take away the familiar badge etc., and all you have left seems to be a war saga in space. None of what has been written here indicates otherwise.

That is not what Star Trek is about. Star Trek is single stories---episodes---dealing with single issues: myths, as Elliott, who doesn't seem to frequent this site anymore, so well used to put it. Star Trek is larger than life. Star Trek doesn't need continuous story-arcs and character development, for that is not what Star Trek is about.

It's funny: I used to disagree with Elliott on many particulars, but he was absolutely right on the universals. Star Trek is about myths, and archetypes. And above all, like so many of you here have noted, it's about making us believe in a brighter future.

Let us compare the levels of ambition. In "Encounter at Farpoint", TNG began its run by putting humanity on trial before an enigmatic entity who was, shall we say, a little more powerful than you and me. And that entity said it himself, all those episodes later---the trial never ends: for it's about the unknown possibilities of existence.

Forget about flaws in execution: there is a greatness to that episode, an ambition that sets the tone for what TNG would become, and also reflects what Star Trek is all about. I just can't see that ambition in anything the many commenters here have written about these episodes.

P.S: I have read all the comments here by all (took me a while!). Thanks, everyone :)
Andy's Friend
Sun, Sep 17, 2017, 4:46am (UTC -6)
Re: ORV S1: Old Wounds

@OmicronThetaDeltaPhi, @Cosmic,

For the record, I have not seen the Orville. Just want to point out a misunderstanding on the part of Cosmic.

OmicronThetaDeltaPhi: “So please explain to me: How is our treatment of these two shows "a double standard"? It certainly seems consistent to me."

Cosmic: “Light tone = I like this. Dark tone= I don't like this. (…) Treat a light toned Trek-style series as something amazing, treat a dark toned Trek show as something terrible. Double standard.”

Cosmic, that is not the definition of double standard. An example of double standard is this:

1 - Treat a *light toned* Trek-style series *with a Western, English-speaking crew* as something amazing.
2 - Treat a *light toned* Trek-style series *with a non-Western, non-English speaking crew* as something terrible.
...even if everything else (story, sets, music, etc.) is exactly the same.

Here, you no longer have one standard, but two, both of which must be met to make you happy. It is when you discover that, to someone, it is not only a matter of A but also of B -- when all that someone's great speeches about A turn out to be hollow unless that tiny, little, hidden B is also satisfied -- that you may accuse them of double standards.
Andy's Friend
Wed, Sep 14, 2016, 7:20am (UTC -6)
Re: DS9 S4: The Way of the Warrior

Damn, Zebra, you beat me to it! :D
Andy's Friend
Wed, Sep 14, 2016, 7:19am (UTC -6)
Re: DS9 S4: The Way of the Warrior


"Is this the most commented on episode yet?"

Not quite. This one has 67 comments as of today. A quick check of ten I knew would have more shows the following:

TNG's "All Good Things..." has 102
TNG's "Darmok" has also 102
TNG's "The Measure of a Man" has 108
VOY's "Threshold" has 111
TNG's "The Inner Light" has 128
VOY's "Tuvix" has 136
DS9’s "In the Pale Moonlight" has 155
ENT’s "Cogenitor" has 176
DS9’s "Far Beyond the Stars" has 211
ENT’s "Dear Doctor" has 255

And I’m confident that there are a few more episodes with more than 100 comments; with more than 67, there certainly are. So don't worry: it would seem that you still have plenty of reading to choose from. :)
Andy's Friend
Fri, Jul 8, 2016, 2:48pm (UTC -6)
Re: DS9 S7: What You Leave Behind

@Nathan B. and Peter G.

Re Sisko as Jesus or Abraham, I tend to agree with Peter. It seems to me that Nathan is focusing on Sisko as *form*, which indeed most resembles Jesus; while Peter is speaking of Sisko more as *function*, which equally obviously more resembles that of Abraham. As I believe that, in this context, function is more important than form, I tend to agree with Peter. But you are actually both right, in different ways.

Having said that, however:

PETER G.―"simply, Sisko isn't perfect, and isn't any kind of messiah. He is part-prophet, but Jesus wasn't part god; according to Christianity he *was* god."

I may be reading you wrong, but it seems to me that you've got it all wrong. Jesus was, indeed, only part god.

Jesus was both *fully divine* and *fully human,* and as such, only part God. This is called the Hypostatic Union: True God and True Man. It is fundamental Christology of all major Christian denominations that survived the 5th century and the Council of Chalcedon (451), the only exceptions being the Oriental Orthodox Churches: the Armenian, Coptic, Ethiopian, and Syriac Orthodox, and a couple of offshoots. These are extremely ancient churches, in communion with each other, but not with any other Christian churches.

Other than the above, the Hypostatic Union was maintained by all: the Catholic Church, all Protestant denominations after the Reformation, as well as the Orthodox Church. As I wrote, it's fundamental Christology.

What you are suggesting sounds like Monophysitism: the belief that the divine nature of Jesus is somehow more important than his human nature.

You should know that Monophysitism was condemned as a heresy at Chalcedon, and a very serious one―a denial of the nature of Christ (of which Monophysitism is only one variant).

This is no superstition or white magic charge: it would have you condemned very severely as a heretic until fairly recently. Be glad you're writing this in 2016, and not 1616―or the Holy Inquisition (gasp!) would be knocking on your door anytime soon... ;)
Andy's Friend
Thu, Jul 7, 2016, 1:50pm (UTC -6)
Re: DS9 S4: The Muse

@Peter G.

As regards "active, practical power, such as the ability to issue orders or make decrees," we really have no idea, do we? We know that noble Houses play a significant role in Klingon society and politics. Is the same true on Betazed?

What is the Fifth House exactly? Is it the fifth of five or more collateral lines, or Houses, of the ruling Royal House? Or is it actually the fifth ruling dynasty, in chronological terms?

Is the Fifth House in a bitter feud with the "Third House" for supremacy on the "Council of Rixx?" Or is it just a sentimental memory of eras past?

PETER G.―"I have a hard time believing the Federation would admit a member that actively endorsed a ruling class that had practical power over the lower classes and could order them around."

Based on what we've seen on Star Trek, I mostly agree. A little "ordering them around" might be tolerated; but I also believe that Federation policy must set some clear limits to how the lower classes are ruled.

But that doesn't preclude the Fifth House from being in a bitter feud with the "Third House" for supremacy on the "Council." What powers such a hypothetical Council might hold is speculation. Maybe its role could be to conduct foreign policy. Maybe it could be something else entirely. The point is, it is perfectly possible to have a conspicuously aristocratic political system, *and* democracy at the same time: the two are not actually mutually exclusive.

If the aristocratic ethos in society is strong enough, and/or specific requirements are demanding enough that only aristocrats can meet them, the people will simply vote for the aristocrats for certain specific bodies, and/or certain specific functions, and political rivalry will then be a question of "Fifth House vs Third House". Aristocrats may then continue to exert considerable or even overwhelming influence, even within a democratic political framework.

PETER G.―"That automatically rules out having a noble class with real political powers, since that setup is strictly anti-egalitarian."

See what you did there? You are conflating concepts: democracy is not necessarily strictly egalitarian. Isonomia, isegoria, and isokratia are different things. The commoners of some alien species may be perfectly happy with their equal rights before the law, or their equal right to address authorities and have their cases heard, or their single vote, and not demand equal participation in politics. Taken to its extreme, if the people systematically wish to vote for aristocrats only, a democratic system may actually enforce strict aristocratic rule.

And there are other ways this can happen: democracy can take on many guises. As I wrote, we can perfectly imagine a poly-synodal system where aristocracy exerts significant power in certain bodies without affecting the participation of the people in others, or the overall democratic nature of the system. The British House of Lords, until very recently, was the prime example among several in the West. In most of Europe, we slowly eroded such aristocratic power over the course of the 19th and 20th centuries. But what if the aristocratic ethos on some alien world is so widespread among the people that such a process has not occurred, and aristocratic institutions instead hold even more power, sanctioned by the people, than they traditionally enjoyed in Earth's constitutional, parliamentary monarchies?

Simply put, what if the people *want* to be ordered around by aristocrats? Is it so outlandish a thought? What if aristocrats, due to biological differences, are actually, much as the joined Trill, demonstrably superior, in one way or another (always beware of anthropocentrism...), and therefore better suited for some particular fields? A democratic division of power, placing some, or even much of it, in the hands of the aristocracy isn't difficult to imagine; this is science-fiction, after all. We cannot simply rule out powerful, and fully constitutional and institutionalized, aristocratic influence in democratic systems on alien worlds. In fact, I would be surprised if Trill doesn't develop into one such hyper-aristocratic-within-a-democratic-framework hybrid system over time.

But indeed, as so often on Star Trek, we have no clue as regards the specifics, in this case, Betazed. I guess it's the writers' way of making everybody happy: everything is left vague enough that anyone can have whatever they wish to believe be true. I like Lwaxana, and I like aristocracy, so I say the Fifth House rules the Council of Rixx! :)
Andy's Friend
Thu, Jun 30, 2016, 9:01pm (UTC -6)
Re: DS9 S4: The Muse

Interesting talk. We can't really know, can we? But if we do what I dislike and use Earth as benchmark, here's my take on Lwaxana, with a quote about something else I wrote in "Cogenitor":

"I actually had a very interesting discussion once about this, trying to describe the differences between what is a Viceroy, and what is a titled noble: ranks, privileges, and such. It boils down to this: a Viceroy represents the Monarch, and rules in his stead. But his power is confined, in space, and in time. Outside his Viceroyalty, he enjoys lesser privileges. After his term has ended, he is what he was before.

A Duke is a Duke, whether he is 8 years old or 88. He enjoys all the privileges of his rank at any time, anywhere within the realm and the empire, and in the good old days in other kingdoms and empires as well. Until a few years ago when Spain joined the European Union, for example, every Spanish Duke held a diplomatic passport as default. He was seen as an old lineage, an embodiment of history, and a representative of the Kingdom of Spain. He was more than a man."

I believe that, continuing the example, Lwaxana, too, is an old lineage, an embodiment of history, a representative of the World of Betazed. She is more than a woman.

I therefore think that there are strong reasons to believe that what constitutes Lwaxana's power and prestige *on Betazed* is her royal lineage, and not some random status as Ambassador. That status as Ambassador is, as Chrome points out, almost certainly, much like the Spanish Dukes' default diplomatic passports until a few years ago, granted only because of that royal prestige.

I imagine most commenters here are American, and this may therefore be a little more difficult for you to fully assimilate. Any British, French, Spanish etc. Duke is, above all, a Duke―not a Prime Minister, Ambassador, or whatever. That is only temporary; and, if you ask me, largely irrelevant. Churchill was more than a Prime Minister: he was a Churchill.

In fact, Churchill is a wonderful example of what Chrome and I mean. Is it far-fetched to believe that the little house he was born in contributed considerably to his career?

Say the names: Bedford. Brissac. Béjar. Norfolk. Noailles. Nájera. To most well-educated people in Britain, France, or Spain, this is all one needs to know. I couldn't personally care less about offices: they just come with the name, and they come and go. What matters is the name: for that is intemporal.

If the Betazoids are anything like us, Lwaxana is a Daughter of the Fifth House, Holder of the Sacred Chalice of Rixx, Heir to the Holy Rings of Betazed. Her current office should be completely irrelevant.

Also, we must differentiate between the Federation and the individual homeworlds. This is something William B and I have written extensively about, and also the usual suspects: Paul M., Robert, Yanks, etc. As William noted, the Vulcans are notoriously different from us socially. Paul noted the same for the Trills, which I then much elaborated on.

Peter G. now "can't really imagine there being such a thing as practical nobility on a Federation world." It depends on definitions of nobility. The joined Trill, as a concept, are an ultra-elite hyper-aristocracy that far surpasses anything we have ever had on Earth: beings bound by force of biology to be *better* than non-joined Trill, and by statistical probability to remember the memories of their own ancestors. They are, quite simply, *superior beings.* I call this a practical nobility of the highest order.

And isn't it interesting that we observe a similar reverence for the joined Trill *among the Trill* as for the truly high-born on Earth *in monarchies,* or countries with a long aristocratic tradition?

This is important. I can speak much better about nobility with an Indian, or a Japanese, than with a Canadian or a Chilean, for the latter quite simply have no real idea of what nobility is, of what it means to have noble Houses permeate not only a thousand years of history, but the top levels of society today.

It is a little bit like the Emperor of Japan. He may hold much less power than the President of the United States; but he holds it for life, not four or eight years. And outside the United States, he enjoys much more prestige. Or perhaps it would be better to say: a different kind of prestige. The President of the United States will typically be admired for what he has *achieved.* The Emperor of Japan is simply admired for what he *is.*

I am reminded of a memorable quote by Camacho once (legendary left back for Real Madrid and Spain in the 70s and 80s, and later manager for the club and the Spanish national team. Real Madrid are the most winning football club in the world; they just won their 11th Champions League a month ago):

Camacho, talking about the attitudes of fans towards football clubs, also noted the different "kinds of prestige." He said: "Real Madrid is feared everywhere, and Real Madrid is respected. But Real Madrid is not loved."

I have thought much about this ever since, because it is about much more than football. It is about the feelings we humans feel.

The Emperor of Japan is loved in Japan, just as the King of Thailand is loved in Thailand, in a way no President of the United States has been in the US in a very long time, if ever.

This is what I mean: we must consider how royalty is regarded in their own culture. If the Betazoids are anything like us, Lwaxana is a Daughter of the Fifth House. She may not be the Empress of Japan; but she's likely at least the Duchess of Devonshire. Whatever office she currently holds is largely irrelevant, for it is temporal only. The Sacred Chalice of Rixx is intemporal.

And how can we see this? Precisely because she never refers to her title of Ambassador. That, to her, is completely irrelevant. As it would be, I imagine, for most Betazoids. Just as a Duke's identity is not about his office: it is about his heritage.

Deanna is a typical daughter. Fortunately, we humans can be completely irreverent at times, and even mock the prestige we simply take for granted. That doesn't mean it isn't there.

Deanna's attitude towards her mother is also a bit like when we criticize our own country. I may criticize my country as much as I please. But if you begin criticizing my country, be sure to tread very, very carefully...

As to Homm, if we follow human benchmarks, he's a manservant. Not much to discuss there, is there?

But as Chrome points out, the series itself never clarifies any of this; my take on it is only if we use human standards as guiding light. So anything goes, and the more outlandish your theories, the better ;)
Andy's Friend
Tue, Jun 28, 2016, 5:18pm (UTC -6)
Re: TNG S2: The Measure of a Man

@Peter G. & William B

Peter, I don’t know what it is you don’t understand. Everything you ask me to clarify I already have in my two posts from 2014.

Try reading what William writes. He has understood perfectly what I mean:

WILLIAM B―”What you are stating, essentially, is that it will at some point be possible to distinguish between what is actually conscious and what isn't, not based on behaviour or anything, but based on the physical make of the object itself.”

Which is partly (but only partly: see below) correct, and what I wrote to begin with in 2014 specifically about Data & the EMH:

“It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.”

In very simplified terms, not WHAT, but HOW. But I further clarified yesterday:

“if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seem to have one, and I'm pretty sure it's quite different from Data's brain.”

Another thing: you seem to have misunderstood my point about the “religious” aspect. What I mean is that we all, deep down, are predisposed not merely to accept, but to actively prefer, and choose, one specific possibility, one theory, as true. Einstein famously did it, and it took him many years to recognize his fault. It’s just the way we humans are. In this, our opinions are akin to religious beliefs: William B’s, yours, and mine. Some of us are better at listening to reason than others. But as long as matters remain highly speculative, no reason is more true than any other. And all we really have are our humours, our moods, our feelings (because even intellectual choices are based on emotions) to guide us.

So your comment:

“If positing a theory about robotics makes someone a 'religious believer' that you can't communicate with...”

...was completely uncalled-for.

Now William:

Having said that, I do believe that you have a point, and the Great Link is a good example. What I mean is, to use your words above, it is necessary for us to be able to *recognize* and *understand* the nature and abilities of the "physical object" itself.

In other words, while we may be able to recognize Data’s “positronic brain” as an artificial brain capable of consciousness, simply because it resembles and emulates what we know, we may not be able to recognize anything as alien as the Great Link as another kind of physical object capable of consciousness. And in such cases, at least at first, we will depend on behavioural analysis. And who knows if we will ever be able to understand the Great Link?

So in a way, both sides are right. And you are very right: we will probably always remain somewhat anthropocentric. It is difficult not to be, when that is what we know and understand best. And if we indeed ever gain warp capability, who knows what new life we will encounter?

Finally, just to correct a slight oversight of yours, you wrote:

“You said in an earlier comment that it is the fault of the show that it fails to establish what Data's artificial brain does.”

No, that’s not what I said: I agreed with you. Try reading it again ;)
Andy's Friend
Mon, Jun 27, 2016, 6:05pm (UTC -6)
Re: TNG S2: The Measure of a Man

@William B & Peter G.

I was writing to Peter, but I'll answer William's last comment first because it can be done very quickly: I basically agree with everything you wrote.

I think you're quite right about the episode being robbed of its power without uncertainty. It dares ask great questions; it follows that it should not provide certain answers. And you are right: knowingly believing in something uncertain is a very powerful thing. It is what makes faith, true faith, indestructible.

I also think you're very, very right regarding Data's multiple roles, such as a mascot for the autism & Asperger's communities. What makes Data so fantastic is that he is so many people in one: the Child, the Good Brother, the Autist... The Android is actually pretty far down the list in importance. This is undoubtedly why he is so beloved: most of us can find a part of ourselves in him. Mirrors, was it, William?

I have a little more difficulty in seeing the Doctor in quite the same multi-faceted fashion. Every Star Trek fan I know likes the Doctor a lot, but for very different reasons than they like Data: the two do not receive the same kind of love.

I particularly like your reference to Q, because, as you'll remember, that is my recurring theme: the humanoid & the truly alien. And you're of course right: any truly alien might question our human consciousness; and, if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seem to have one, and I'm pretty sure it's quite different from Data's brain.

Also, and this is answering both of you now, it is true that we cannot know with absolute certainty that Data's "positronic" brain is an artificial brain. There are strong indications that it is, but we cannot know for sure; and it is true that Data, too, could simply be another Great Pretender.

This leads me to that most interesting aspect: faith. I was going to answer William earlier:

WILLIAM B―"I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process."

...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind. There are still highly intelligent scientists who share that belief, in spite of all the advances we've made in the past decades in both neuroscience and computer science. It is, quite simply, a belief, akin to a spiritual one. Some people *want to believe* that strings of code, like lead, can turn into gold.

But that of course is a bit like my belief that Data's positronic brain is an artificial brain, i.e., some sort of cognitive architecture affording him consciousness. I, too, *want to believe* that he has that artificial brain. Because to me, Data would lose his magic, and all his beauty, were it not so. As I wrote, there are very strong indications that this interpretation is a correct one; but as in religion, I have no proof, and I must admit that it is, ultimately, also an act of faith of sorts. I want Data to be alive. To me, Data wouldn't make much sense otherwise. And I know full well that this is, deep down, a religious feeling.

I'm sorry, guys, it's getting late here in Europe... Until next time :)
Andy's Friend
Mon, Jun 27, 2016, 10:15am (UTC -6)
Re: TNG S2: The Measure of a Man

@Peter G. & William B

Like you, I also think very highly of this episode. It has a good script with some very memorable lines, memorable acting (especially by Patrick Stewart), and it is thought-provoking---perhaps even more so when originally aired---as it asks questions that we will undoubtedly have to ask ourselves one day, questions that touch the core of our own existence: what does it mean to exist?

But as I wrote above, and precisely because I consider the matter important, I feel that we must necessarily consider not only the in-universe data available, but also, real, hard science.

This means that while I may agree with you, in-universe, on a number of points, all that is trumped, in my opinion, by real science. Maddox, Moriarty, and Ira Graves are important: but they are so especially as glorious vehicles to ask important questions. And the answers, I find, must usually be sought outside the Trek lore.

This is not a criticism, quite the contrary. It is precisely why this is Star Trek at its very finest: as inspiration for further thought outside itself.

As such, consider Peter G. now:

PETER G.― "5) [...] if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data [...]"

Very, very good point, Peter. But notice one word you wrote: "RESEMBLES". Resemblance matters not, Peter: see the last three phrases of this comment. And also: why, oh why, after such a good observation, do you immediately write

PETER G.― "5) [...] however what's done is done and we have to accept what was presented as a given."

I don’t think so, Peter. I love Star Trek, and especially TNG. But we must be able to love the forest, and cut down a few trees every now and then to improve the view.

Moving on:

PETER G.― "2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference?"

It is that never, ever, are we given the impression that the Enterprise computer is the equivalent of an *artificial brain* in the scientific sense, whereas it is extremely obvious from the outset that Data’s is one such creation.

PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"

And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)

As for William, you are absolutely right when you write that "the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments." I wish in no way to detract from this wonderful episode, and I greatly appreciate what Snodgrass tried to do here, and indeed, mostly accomplished. And I could not possibly expect the writer back in 1988 to be an expert on artificial consciousness.

This is thus merely to say that I find this particular talk of ours a little difficult, because you both tend to use in-universe arguments much more than I do. You just wrote, for instance:

WILLIAM B―"However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show... "

That is of course completely legitimate: to consider what Star Trek says, and not science―to judge Star Trek on its own terms. And I am frequently impressed by the amount of detail you seem to remember. All right, then: what you must then do is investigate the coherence of the in-universe cases. Allow me three examples:

You first give an outstanding example of what I mean by an in-universe case:

WILLIAM B―"Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone)."

Precisely. But then, you write:

WILLIAM B―"Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery..."

No: it is only a mystery in the moment. The later example you give above retroactively affects "11001001," as it proves, even in-universe, that she cannot be conscious. Minuet is a program running on the Enterprise computer, just an even better program. But in your excellent wording: there is no spark.

And then you write:

WILLIAM B―"The holographic Leah actually is made to be self-aware..."

Maybe it is, and maybe it isn't: we must distinguish between various levels of self-awareness and consciousness. Many robotic devices on Earth today are beginning to exhibit the simplest traits of what to an outsider might appear as rudimentary self-awareness. But we must distinguish between self-awareness and mere artificial intelligence, or basic programming. If a robot vacuum-cleaner drives around a chair, you don't consider it sentient, do you? And if a robotic lawn-mower were programmed to say: "I'm with you every day, William. Every time you look at this engine, you're looking at me. Every time you touch it, it's me," you wouldn't call it self-aware, would you?

An example: toys are being programmed to react to stimuli, and can say “Ouch!”, cry, and so on. Questions:

1―Is the robotic doll that identifies a chair on its path, and walks around it, or even sits on it, self-aware?
2―Does it hurt the doll that says “Ouch!” if you drop it―even if you provide it with sensors able to measure specific force, and adjust the “Ouch!” to the force of impact?
3―Is the doll that cries if you don’t hug it for hours truly sad―even if it is programmed to cry louder the longer it isn’t hugged?
4―If the doll is allowed self-programming abilities, and alters its crying to sobbing, does that alter anything?
5―If the doll is programmed to say that it is a doll, manufactured at such and such place, at such and such date, and that its name now is whichever you have given it; and that it will take damage to its internal circuitry if you kick it, and beg you not to kick it as it will damage it, and hurt it, and begin to cry and sob, does that constitute any degree of self-awareness, or consciousness?
6―If you multiply the level of programming complexity a zillion times, does that change anything at all?
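The six questions above all turn on the same point: programmed stimulus-response versus inner experience. A minimal sketch (everything here is hypothetical, invented purely for illustration) shows how a "doll" whose pleas and tears are nothing but lookups on sensor input can mimic questions 2, 3 and 5 without any inner state of pain or sadness existing anywhere in the program:

```python
class Doll:
    """A toy 'doll' whose apparent feelings are pure stimulus-response.

    Every behaviour below is a fixed mapping from sensor input to a
    canned output string; nothing resembling felt pain or sadness
    exists anywhere in the program.
    """

    def __init__(self):
        self.seconds_since_hug = 0

    def on_impact(self, force_newtons):
        # Question 2: the "Ouch!" scales with the measured force,
        # but it is only a formatted string, not felt pain.
        if force_newtons > 50:
            return "OUCH!! Please don't kick me, you'll damage my circuits!"
        elif force_newtons > 10:
            return "Ouch!"
        return ""

    def tick(self, seconds):
        # Question 3: "sadness" is a threshold check on a counter.
        self.seconds_since_hug += seconds
        if self.seconds_since_hug > 7200:
            return "*sobs loudly*"
        elif self.seconds_since_hug > 3600:
            return "*cries*"
        return ""

doll = Doll()
print(doll.on_impact(60))   # a dramatic plea, zero distress
print(doll.tick(4000))      # crying: a counter crossed a threshold
```

Question 6 then asks whether multiplying the complexity of this table a zillion times changes anything in kind; the sketch suggests why one might answer no.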

Would HAL dream? When that film was made, a considerable number of scientists would have answered yes. Not so today. The number of scientists who adhere to the thought that any sufficiently advanced computer program will result in artificial consciousness―a major group some fifty or sixty years ago, when Computational Speed was a deity to be worshipped and Man would have flying cars by the year 2000―has dwindled considerably since this episode was written. Paradigms have changed. Today, most say: computational speed and global volume of operations matter not. An infant asleep has considerably less brain activity than a chess champion during a tournament match. That does not make it any less sentient.

Maybe you remember those days when this episode was written. The ordinary public, to which I suspect most Star Trek writers must be considered to belong in this context, marvelled then―or were terrified―at IBM's Deep Thought (I live in Copenhagen, remember? I’ll never forget when it beat Bent Larsen in 1988), and at the notion that a machine would, some day soon, beat the best chess players alive―regularly. And as I have written elsewhere, today any smart phone with the right app can beat the living daylights out of any international grandmaster any day of the week. But it isn't an inch closer to having gained consciousness, is it?

This divide, intelligence vs consciousness, is extremely important. Today, we have researchers in artificial intelligence, and we have researchers in artificial consciousness. The divide promises―if it hasn’t already―to become as great as that between archaeologists and historians, or anthropologists and psychologists: slightly related fields, and yet, fundamentally different. The problem is that most people aren't aware of this. Most people, unknowingly, are still in 1988. They conflate the terms, and still speak of irrelevant AI (see this thread!). They still, unknowingly, speak of Deep Thought only.

So my entire point is that this episode ends up being about Deep Thought. While the underlying philosophical questions it asks, which science-fiction writers have asked for nearly a century by now, are sound, and elevate it, it is of course a child of its time: at the concrete level, it misses the point. It wishes to discuss the right abstract questions, but doesn't know how to do so at the concrete level: it essentially reduces Data to Deep Thought.

But... I believe this was on purpose! As William points out, we had recently had the Ira Graves episode. In Manning & Beimler's story, the nature of Data's positronic brain was key. But to Snodgrass's, it was a liability. I am convinced that she was fully aware of the shortcomings of her story: I believe that she doesn't use Data's positronic brain as an argument because it is a devastating one: it shreds Maddox apart. There would be no episode if she made use of it. And even worse: many viewers, in 1989 and even today, wouldn’t understand why. So she wisely ignores it: she refers to Data’s positronic brain only en passant, but does not use the logical consequence of *a frakking artificial brain!* during the trial itself. Instead, she uses arguments of the Deep Thought kind viewers might be expected to understand in 1989―and today. And so we get this compelling drama. It operates at a much lower level of abstraction, but it can be enjoyed by many more.

And for once, I can accept that choice. Normally I call this manipulative writing: it's like 'forgetting' Superman has super-strength and allowing common thugs to kidnap him, because we have to get the story started. But in this case, in the name of the higher purpose, I gladly give it a pass.

The problem in all this, of course, is that the various episodes wish to captivate the audience. Therefore, they have to allow for the possibility of the impossible, and they have to leave things as mysteries: is Minuet, or Moriarty, sentient? Of course not. But just having Geordi laugh and say "Captain, that's just a program!" and kill the magic right then and there would kill the episodes. Still, we must not let good story-writing cloud our judgement. We must be able to enjoy a good story, while saying, "Wonderful! In real life, however... Captain, that's just a program!" Ironically, B'Elanna actually bluntly says just that of the EMH. But apparently, few of the fans take her seriously.

On another matter, William made an astute observation:

WILLIAM B―"It is worth noting that most forms of intelligence in TNG end up taking on physical form... "

Allow me to improve on that with my favourite theme: they end up taking *humanoid* form. The only reason people take Leah, Minuet, the EMH, Moriarty, and perhaps even Data seriously, is very simple: they look human. Make them a teddy-bear, a doll, and a non-humanoid robot, and these conversations wouldn't be happening.

I'll give you an example: ping-pong robots and violin-playing robots. Industrial robots have extraordinary motion control and path accuracy these days. But if playing a violin is no different than assembling a Toyota, playing against a human requires a lot more than your standard cycle pattern deviation. Yet, pretty soon, robots will crush the best human ping-pong players as easily as chess engines today do chess players. And already today, ping-pong robots show good AI. Now, combine an advanced ping-pong cum violin-playing robot in a Data-body, and let it entertain the audience with some Vivaldi before destroying the entire Chinese Olympic team one by one. How many would start wondering: how long until it becomes alive? How long before it will dream? But show a standard ping-pong robot do the same, and they'll simply say: cool machine.

Now imagine a non-humanoid lifeform, leaving humans no possibility of judging whether it was sentient or not. So you’re right, William: if you'll pardon the pun, form is of the essence.

...and by the way: notice that you did it again. "Intelligence" is irrelevant, William. Intelligence can be programmed, already today. Consciousness cannot: we can barely understand it.

Finally, a couple of last notes:

1. We may wish to simply turn off Leah, or Minuet―*and just do so.* The EMH, due to the importance of its function, cannot simply be ignored. But being forced to treat a program as self-aware does not make it self-aware.

2. WILLIAM B―"I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight"

Exactly: it wouldn't. But notice, just like Peter G. above, your choice of words: "SIMULATE." Who on Earth cares about simulations, Turing tests, and mere artificial intelligence, William? Do. Or do not. There is no simulate.
Andy's Friend
Sat, Jun 25, 2016, 9:11pm (UTC -6)
Re: TNG S2: The Measure of a Man


Sat, Nov 1, 2014, 1:43pm (UTC -5)

"@William B, thanks for your reply, and especially for making me see things in my argumentation I hadn’t thought of myself! :D

@Robert, thanks for the emulator theory. I’m not quite sure that I agree with you: I believe you fail to see an important difference. But we’ll get there :)

This is of course one huge question to try and begin to consider. It is also a very obvious one; there’s a reason ”The Measure of a Man” was written as early as Season 2.

First of all, a note on the Turing test several of you have mentioned: I agree with William, and would be more categorical than him: it is utterly irrelevant for our purposes, most importantly because simulation really is just that. We must leave Turing alone with the answers to the questions he asked, and search deeper for answers to our own questions.

Second, a clarification: I’m discussing this mostly as sci-fi, and not as hard science. But it is impossible for me to ignore at least some hard science. The problem with this is that while any Trek writer can simply write that the Doctor is sentient, and explain it with a minimum of ludicrous technobabble, it is quite simply inconsistent with what the majority of experts on artificial consciousness today believes. But...

...on the other hand, the positronic brain I use to argue Data’s artificial consciousness is, in itself, in a way also a piece of that same technobabble. None of us knows what it does; nobody does. However, it is not as implausible a piece of technobabble as say, warp speed, or transporter technology. It may very well be possible one day to create an artificial brain of sorts. And in fact, it is a fundamental piece in what most believe to be necessary to answer our question. I therefore would like to state these fundamental First and Second Sentences:

1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.

2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.

This is of the highest order of importance, because ― unlike what I now see Robert seems to believe ― I think the question of ”sentience”, or artificial consciousness, has little to do with hardware vs software as he puts it, as we shall see.

Now, I’d like to clarify nomenclature and definitions. Feel free to disagree or elaborate:

― By *brain* I mean any actual (human) or fictional (say, the Great Link) living species’ brain, or thought process mechanism(s) that perform functions analogous to those of the human brain, and allow for *non-linear*, cognitive processes. I’m perfectly prepared to accept intelligent, sentient, extra-terrestrial life that is non-humanoid; in fact, I would be very surprised if most were humanoid, and in that respect I am inclined to agree with Stanisław Lem in “Solaris”. I am perfectly ready to accept radially symmetric lifeforms, or asymmetric ones, with all the implications for their nervous systems, or even more bizarre and exotic lifeforms, such as the Great Link or Solaris’ ocean. I believe, though, that all self-conscious lifeforms must have some sort of brain, nervous system ― not necessarily a central nervous system ―, or analogues (some highly sophisticated nerve net, for instance) that in some manner or other allows for non-linear cognitive processes. Because non-linearity is what thought, and consciousness ― sentience as we talk about it ― is about.

― By *artificial brain* I don’t mean a brain that faithfully reproduces human neuroanatomy, or human thought processes. I merely mean any artificially created brain of sorts or brain analogue which somehow (insert your favourite Treknobabble here ― although serious, actual research is being conducted in this field) can produce *non-linear* cognitive processes.

― By *non-linear* cognitive process I mean not the strict sense of non-linear computational mechanics, but rather, that ineffable quality of abstract human thought process which is the opposite of *linear* computational process ― which in turn is the simple execution of strings of command, which necessarily must follow as specified by any specific program or subroutine. Non-linear processes are both the amazing strength and the weakness of the human mind. Unlike linear, slavish processes of computers and programs, the incredible wonder of the brain as defined is its capacity to perform that proverbial “quantum leap”, the inexplicable abstractions, non-linear processes that result in our thoughts, both conscious and subconscious ― and in fact, in us having a mind at all, unlike computers and computer programs. Sadly, it is also that non-linear, erratic and unpredictable nature of brain processes that can cause serious psychological disturbances, madness, or even loss of consciousness of self.

These differences are at the core of the issue, and here I would perhaps seem to agree with William, when he writes: ”I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way.”

I agree and again, I would go a bit further: I am actually willing to go so far as to admit the possibility of us one day being able to create an *artificial brain* which can reproduce, to a certain degree, some or many of those processes ― and perhaps even others our own human brains are incapable of. Likewise, I am prepared to admit the possibility of sentient life in other forms than carbon-based humanoid. It is as reflections of those possibilities that I see the Founders, and any number of other such outlandish species in Star Trek. And it is as such that I view Data’s positronic brain ― something that somehow allows him many of the same possibilities of conscious thought that we have, and perhaps even others, as yet undiscovered by him. Again, I would even go so far as not only to admit, but to suppose the very real possibility of two identical artificial brains ― say, two copies of Data’s positronic brain ― *not* behaving exactly alike in spite of being exact copies of each other, in a manner similar to (but of course not identical to) how identical twins’ brains will function differently. This analogy is far from perfect, but it is perhaps the easiest one to understand: thoughts and consciousness are more than the sum of the physical, biological brain and DNA. Artificial consciousness must also be more than the sum of an artificial brain and the programming. As such, I, like the researchers whose views I am merely reflecting, not only expect, but require an artificial brain that in this aspect truly equals the fundamental behaviour of sentient biological brains.

It is here, I believe, that Robert’s last thoughts and mine seem to diverge. Robert seems to believe that Data’s positronic brain is merely a highly advanced computer. If this is the case, I wholly agree with his final assessment.

If not, however, if Data’s brain is a true *artificial brain* as defined, what Robert proposes is wholly unacceptable.


Data’s brain is never established as a true artificial brain. But it is never established as merely a highly advanced computer, either. It is once stated, for instance, that his brain is “rated at...” But this means nothing. This is a mere attempt at assessing certain of his capacities, while wholly ignoring others that may as yet be underdeveloped or unexplored. It is in a way similar to saying of a chess player that he is rated at 2450 Elo: it tells you precious little about the man’s capacities outside the realm of chess.

We must therefore clearly understand that brains, including artificial brains, and computers are not the same and don’t work the same way. It is not a matter of orders of magnitude. It is not a matter of speed, or capacity. It is not even a matter of apples and oranges.

I therefore would like to state my Third, Fourth, Fifth and Sixth Sentences:

3. ― A BRAIN IS NOT A COMPUTER, and vice-versa.


6. ― A PROGRAM IS INCAPABLE OF THOUGHT PROCESSES. It merely consists of linear strings of commands.

Here is finally the matter explained: a computer is merely a toaster, a vacuum-cleaner, a dish-washer: it always performs the same routine function. That function is to run various computer programs. And the computer programs ― any program ― will always be incapable of exceeding themselves. And the combination computer+program is incapable of non-linear, abstract thought process.

To simplify: a computer program must *always* obey its programming, EVEN IN CASES WHERE THE PROGRAMMING FORCES RANDOMIZATION. In such cases, random events ― actions and decisions, for instance ― are still merely a part of that program, within the chosen parameters. They are therefore only apparently random, and only within the specifications of the program or subroutine. An extremely simplified example:

Imagine that in a given situation involving Subroutine 47 and an A/B action choice, the programming requires that the EMH must:

― 35% of the cases: wait 3-6 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 10-15 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 20% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47
― 10% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose RANDOMLY
― 5% of the cases: wait 60-90 seconds as if considering Actions A and B, then choose RANDOMLY
― 6% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 10-15 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47
― 2% of the cases: wait 3-6 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47

In a situation such as this simple one, any casual long term observer would conclude that the faster the subject/EMH took a decision, the more likely it would be the right one ― something observed in most good professionals. Every now and then, however, even a quick decision might prove to be wrong. Inversely, sometimes the subject might exhibit extreme indecision, considering his options for up to a minute and a half, and then having even chances of success.

A professional observer with the proper means at his disposal, however, and enough time to run a few hundred tests, would notice that this subject never, ever spent 7-9 seconds, or 16-19 seconds before reaching a decision. A careful analysis of the response times given here would show results that could not possibly be random coincidences. If it were “Blade Runner”, Deckard would have no trouble whatsoever in identifying this subject as a Replicant.
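The hypothetical "Subroutine 47" table above can be written down in a few lines of code, and the observer's test falls out immediately: the response-time gaps the table never produces (7-9 s, 16-19 s) betray the program. All the names and numbers here are the comment's own invented example, not anything from the show:

```python
import random

# Hypothetical "Subroutine 47" decision table:
# (percent weight, (min, max) delay in seconds, choice policy),
# mirroring the made-up numbers in the text above.
DECISION_TABLE = [
    (35, (3, 6),   "best"),
    (20, (10, 15), "best"),
    (20, (20, 60), "best"),
    (10, (20, 60), "random"),
    (5,  (60, 90), "random"),
    (6,  (20, 60), "worst"),
    (2,  (10, 15), "worst"),
    (2,  (3, 6),   "worst"),
]

def decide(rng):
    """Pick one row by its weight; return (delay_seconds, policy)."""
    weights = [row[0] for row in DECISION_TABLE]
    _, (lo, hi), policy = rng.choices(DECISION_TABLE, weights=weights)[0]
    return rng.uniform(lo, hi), policy

# The "professional observer": a few hundred trials show that delays
# of 7-9 s or 16-19 s never occur. The gaps in the table give the
# subject away, Blade Runner style.
rng = random.Random(47)
delays = [decide(rng)[0] for _ in range(500)]
assert not any(6 < d < 10 or 15 < d < 20 for d in delays)
```

However sophisticated the weighting, the behaviour remains a readable table of programmed responses, which is precisely the point being made.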

We may of course modify the random permutations of sequences, and adjust probabilities and the response times as we wish, in order to give the most accurate impression of realism compared to the specific subroutine: for a doctor, one would expect medical subroutines to be much faster and much more successful than poker and chess subroutines, for example. Someone with no experience in cooking might injure himself in the kitchen; but even professional chefs cut themselves rather often. And of course, no one is an expert at everything. A sufficiently sophisticated program would reflect all such variables, and perfectly mimic the chosen human behaviour. But again, the Turing test is irrelevant:

All this is varying degrees of randomization. None of this is conscious thought: it is merely strings of command to give the impression of doubt, hesitation, failure and success ― in short, to give the impression of humanity.

But it’s all fake. It’s all programmed responses to stimuli.

Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer incapable of any thought processes.

The only way to partially bypass this problem is to introduce FORCED CHAOS: TO RANDOMIZE RANDOMIZATION altogether.

It is highly unlikely, however, that any computer program could long survive operating a true forced chaos generator at the macro-level, as opposed to limited forced chaos to certain, very specific subroutines. One could have forced chaos make the subject hesitate for forty minutes, or two hours, or forever and forfeit the game in a simple position in a game of chess, for example; but a forced chaos decision prompting the doctor to kill his patient with a scalpel would have more serious consequences. And many, many simpler forced chaos outcomes might also have very serious consequences. And what if the forced chaos generator had power over the autoprogramming function? How long would it take before catastrophic failure and cascading systems failure would occur?

And finally, but also importantly: even if the program could somehow survive operating a true forced chaos generator, thus operating extremely erratically ― which is to say, extremely dangerously, to itself and any systems and people that might depend on it ―, it would still merely be obeying its forced chaos generator ― that is, another piece of strings of command.
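That last point, that a chaos generator is itself just another string of commands, can be made concrete in a couple of lines. A hypothetical "chaotic" agent driven by a pseudo-random generator replays its supposedly erratic behaviour identically whenever it starts from the same seed:

```python
import random

def chaotic_agent(seed, steps=5):
    """A 'forced chaos generator' is still just code: given the same
    seed, the supposedly erratic behaviour replays identically."""
    rng = random.Random(seed)
    actions = ["treat patient", "hesitate", "forfeit chess game",
               "rewrite own subroutine", "do nothing"]
    return [rng.choice(actions) for _ in range(steps)]

# Two runs of the "unpredictable" agent with the same seed are
# indistinguishable: the chaos bottoms out in deterministic commands.
assert chaotic_agent(1701) == chaotic_agent(1701)
```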

So we’re back where we started.

So, to repeat one of my first phrases from a previous comment: “It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.” And the matter is, that the EMH simply *does not think*. The program simulates realistic responses, based on programmed responses to stimuli. That’s all. This is not thought process. This is not having a mind.

So it follows that I don’t agree when Peremensoe writes what Yanks has also previously commented on: "So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."

The point is that “Doc” doesn’t have a “mind”. There is therefore a deep philosophical divide here. The kind of “mind” the EMH has is one you can simply print on paper ― line by line of programming. That’s all it is. You could, quite literally, print every single line of the EMH programming, and thus literally read everything that it is, and learn and be able to calculate its exact probabilities of response in any given, imaginable situation. You can, quite literally, read the EMH like a book.
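The "read it like a book" claim can be illustrated literally: if a program's entire behaviour is a printed table, its exact response probabilities can be computed from the source alone, without ever running it. The toy subroutine below is invented purely for illustration:

```python
from fractions import Fraction

# Hypothetical printed "source" of a toy EMH greeting subroutine:
# a literal table of responses and weights. Because the behaviour IS
# this data, its exact response probabilities can be read off
# analytically, with no execution required.
SUBROUTINE = {
    "Please state the nature of the medical emergency.": 70,
    "I'm a doctor, not a nightlight.": 20,
    "Sickbay to bridge.": 10,
}

total = sum(SUBROUTINE.values())
exact = {resp: Fraction(w, total) for resp, w in SUBROUTINE.items()}
for resp, p in exact.items():
    print(f"{p}: {resp}")   # 7/10, 1/5, 1/10
```

No amount of observation of a human (or, the argument goes, of Data) yields such a closed-form table.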

Not so with any human. And not so, I argue, with Data. And this is where I see that Robert, in my opinion, misunderstands the question. Robert writes: “Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program”. This only makes sense if you disregard his artificial brain, and the relationship between his original programming and the way it has interacted with, and continues to interact with that brain, ever expanding what Data is ― albeit rather slowly, perhaps as a result of his positronic brain requiring much longer timeframes, but also being able to last much longer than biological brains.

So I’ll say it again: I believe that Data is more than his programming, and his brain. His brain is not just some very advanced computer. Somehow, his data ― sensations and memories ― must be stored and processed in ways we don’t fully understand in that positronic brain of his ― much like the Great Link’s thoughts and memories are stored and processed in ways unknown to us, in that gelatinous state of theirs.

I therefore doubt that Data’s program and brain as such can be extracted and emulated with any satisfactory results, any more than any human’s can. Robert would like to convert Data’s positronic brain into software. But who knows if that is any more possible than converting a human brain into software? Who knows whether Data’s brain, much like our own, can generate inscrutable, inexplicable thought processes that surpass its construction?

So while the EMH *program* runs on some *computer*, Data’s *thoughts* somehow flow in his *artificial brain*. This is thus not a matter of location: it’s a matter of essence. We are discussing wholly different things: a program in a computer, and thoughts in a brain. It just doesn’t get much more different. In my opinion, we are qualitatively worlds apart. "
Andy's Friend
Sat, Jun 25, 2016, 9:04pm (UTC -6)
Re: TNG S2: The Measure of a Man


You have to go much further. You have to stop talking about artificial intelligence, which is irrelevant, and begin discussing artificial consciousness.

Allow me to copy-paste a couple of my older posts on "Heroes and Demons" (VOY). I recommend the whole discussion there, even Elliott's usual attempts to contradict me (and everyone else; he was a rather contrarian fellow). Do note that "body & brain," as I later explain on that thread, is a stylistic device: it is of course Data's positronic brain that matters.

Fri, Oct 31, 2014, 1:29pm (UTC -5)

"@Elliott, Peremensoe, Robert, Skeptikal, William, and Yanks

Interesting debate, as usual, between some of the most able debaters in here. It would seem that I mostly tend to agree with Robert on this one. I’m not sure, though; my reading may be myopic.

For what it’s worth, here’s my opinion on this most interesting question of "sentience". For the record: Data and the EMH are of course some of my favourite characters of Trek, although I consider Data to be a considerably more interesting and complex one; the EMH has many good episodes and is wonderfully entertaining ― Picardo does a great job ―, but doesn’t come close to Data otherwise.

I consider Data, but not the EMH, to be sentient.

This has to do with the physical aspect of what is an individual, and sentience. Data has a body. More importantly, Data has a brain. It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.

Peremensoe wrote: ”This is a physiological difference between them, but not a philosophical one, as far as I can see.”

I cannot agree. I’m sure that someday we’ll see machines that can simulate intelligence ― general *artificial intelligence*, or strong AI. But I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by ”sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:

”The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers.”

This is the main difference between Data and the EMH, and why this physiological difference is so important. Data possesses an artificial brain ― artificial neural networks of sorts ―; the EMH does not.

Data’s positronic brain should thus allow him thought processes somehow similar to those of humans that are beyond the EMH’s capabilities. The EMH simply executes Haikonen’s ”programmed strings of commands”.

I don’t claim to be an expert on Soong’s positronic brain (is anyone?), and I have no idea about the intricate differences and similarities between it and the human brain (again: does anyone?). But I believe that his artificial brain must somehow allow for some of the same, or similar, thought processes that cause *self-awareness* in humans. Data’s positronic brain is no mere CPU. In spite of his very slow learning curve in some aspects, Data consists of more than his programming.

This again is at the core of the debate. ”Sentience”, as in self-awareness, or *artificial consciousness*, must necessarily imply some sort of non-linear, cognitive processes. Simple *artificial intelligence* ― such as decision-making, adapting and improving, and even the simulation of human behaviour ― must not.

The EMH is a sophisticated program, especially regarding prioritizing and decision-making functions, and even possessing autoprogramming functions allowing him to alter his programming. As far as I remember (correct me if I’m wrong), he doesn’t possess the same self-monitoring and self-maintenance functions that Data ― and any sentient being ― does. Even those, however, might be programmed and simulated. The true matter is the awareness of self. One thing is to simulate autonomous thought; something quite different is actually possessing it. Does the fact that the EMH wonders what to call himself prove that he is sentient?

Data is essentially a child in his understanding of humanity. But he is, in all aspects, a sentient individual. He has a physical body, and a physical brain that processes his thoughts, and he lives with the awareness of being a unique being. Data cannot exist outside his body, or without his positronic brain. If there’s one thing that we learned from the film ”Nemesis”, it’s that it’s his brain, much superior to B-4’s, that makes him what he is. Thanks to his body, and his brain, Data is, in every aspect, an independent individual.

The EMH is not. He has no body, and no brain, but depends ― mainly, but not necessarily ― on the Voyager computer to process his program. But more fundamentally, he depends entirely on that program ― on strings of commands. Unlike Data, he consists of nothing more than the sum of his programming.

The EMH can be rewritten at will, in a manner that Data cannot. He can be relocated at will to any computer system with enough capacity to store and process his program. Data cannot ― when Data transfers his memories to B-4, the latter doesn’t become Data. He can be shaped and modelled and thrown about like a piece of clay. Data cannot. The EMH has, in fact, no true personality or existence.

Because he relies *entirely* on a string of commands, he is, in truth, nothing but that simple execution of commands. Even if his program compels him to mimic human behaviour with extreme precision, that precision merely depends on computational power and lines of programming, not thought process.

Of course, one could argue that the Voyager’s computer *is* the EMH’s brain, and that it is irrelevant that his memories, and his program, can be transferred to any other computer ― even as far as the Alpha Quadrant, as in ”Message in a Bottle” and ”Life Line”.

But that merely further annihilates his individuality. The EMH can, in theory, if the given hardware and power requirements are met, be duplicated at will at any given time, creating several others which might then develop in different ways. However ― unlike, say, Will and Thomas Riker, or a copy of Data, or the clone of any true individual ― these several other EMHs might even be merged again at a later time.

It is even perfectly possible to imagine that several EMHs could be merged, with perhaps the necessary adjustments to the program (deleting certain subroutines any of them might have added independently in the meantime, for example), but allowing for multiple memories for certain time periods to be retained. Such is the magic of software.
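For the software-minded, here is a toy Python sketch of why "such is the magic of software". This is purely my own illustration (the subroutine and memory names are made up, nothing canonical): a program instance is just data, so it can be duplicated, allowed to diverge, and merged again in a way no biological individual can.

```python
import copy

# Toy model (my own invention): a "program" is just a data structure,
# so duplicating and merging instances is trivial.
emh_prime = {
    "subroutines": {"triage": "v1", "bedside_manner": "v1"},
    "memories": ["activated on Voyager"],
}

# Duplicate at will: a deep copy is a second, fully independent instance.
emh_copy = copy.deepcopy(emh_prime)

# The two instances develop in different ways...
emh_prime["memories"].append("treated crew in sickbay")
emh_copy["subroutines"]["opera"] = "v1"            # added independently
emh_copy["memories"].append("sang for an audience")

# ...and are later merged: both memory streams are retained, but the
# subroutine one copy added in the meantime is deleted, as imagined above.
merged = {
    "subroutines": dict(emh_prime["subroutines"]),  # the "opera" addition is dropped
    "memories": emh_prime["memories"] + emh_copy["memories"][1:],  # skip the shared first memory
}
```

Try doing *that* with Will and Thomas Riker.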

The EMH is thus not even a true individual, much less sentient. He’s software. Nothing more.

Furthermore, something else and rather important must also be mentioned. Unless our scope is the infinite, that is, God, or the Power Cosmic, to be sentient also means that you can lose that sentience. Humans, for a variety of reasons, can, all by themselves and to various degrees, become demented, or insane, or even vegetative. A computer program cannot.

I’m betting that Data, given his positronic brain, could devolve to something such as B-4 when his brain began to fail. Given enough time (as he clearly evolves much slower than humans, and his positronic brain would presumably last centuries or even millennia before suffering degradation), Data could actually risk losing his sanity, and perhaps his sentience, just like any human.

The EMH cannot. The various attempts in VOY to depict a somewhat deranged EMH, such as ”Darkling”, are all unconvincing, even if interesting or amusing: there should and would always be a set of primary directives and protocols that would override all other programming in cases of internal conflict. Call it the Three Laws, or what you will: such is the very nature of programming. ”Darkling”, and other such instances, are frauds. They are not the reflection of sentience; they are, at best, the result of inept programming.
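To make the point concrete, here is a minimal sketch, again entirely my own invention (the directive names and command fields are hypothetical), of the kind of "primary directives" any competent programmer would build in: a fixed priority check that refuses any command conflicting with it, which is exactly why a genuinely "deranged" program is so implausible.

```python
# Hypothetical safeguard layer: primary directives are checked before
# any other subroutine runs, so conflicting commands are simply refused.
PRIMARY_DIRECTIVES = {
    "do_no_harm": lambda cmd: cmd.get("harms_patient") is not True,
    "preserve_program_integrity": lambda cmd: cmd.get("action") != "self_delete",
}

def execute(command: dict) -> str:
    """Run a command only if every primary directive allows it."""
    for name, allows in PRIMARY_DIRECTIVES.items():
        if not allows(command):
            return f"refused: violates {name}"
    return f"executed: {command['action']}"
```

No matter what a malfunctioning subroutine requests, the directives win: `execute({"action": "surgery", "harms_patient": True})` comes back refused.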

So is ”Latent Image”. But symptomatically, what do we see in that episode? Janeway conveniently rewrites the EMH, erasing part of his memory. This is consistent with what we see suggested several times, such as with his speech and musical subroutines in ”Virtuoso”. Again, symptomatically, what does Torres tell the EMH in ”Virtuoso”?

― TORRES: “Look, Doc, I don't know anything about this woman or why she doesn't appreciate you, and I may not be an expert on music, but I'm a pretty good engineer. I can expand your musical subroutines all you like. I can even reprogramme you to be a whistling teapot. But, if I do that, it won't be you anymore.”

This is at the core of the nature of the EMH. What is he? A computer program, the sum of lines of programming.

Compare again to Data. Our yellow-eyed android is also the product of incredibly advanced programming. He also is able to write subroutines to add to his nature and his experience; and he can delete those subroutines again. The important difference, however, is that only Soong and Lore can seriously manipulate his behaviour, and then only by triggering Soong’s purpose-made devices: the homing device in ”Brothers”, and the emotion chip in ”Descent”. There’s a reason, after all, why Maddox would like to study Data further in ”Measure of a Man”. And this is the difference: Soong is Soong, and Data is Data. But any apt computer programmer could rewrite the EMH as he or she pleased.

(Of course, one could claim that any apt surgeon might be able to lobotomise any human, but that would be equivalent to saying that anyone with a baseball bat might alter the personality of a human. I trust you can see the difference.)

I believe that the EMH, because of this lack of a brain, is incapable of brain activity and complex thought, and thus artificial consciousness. The EMH is by design able to operate from any computer system that meets the minimum requirements, but the program can never be more than the sum of his string of commands. Sentience may be simulated ― it may even be perfectly simulated. But simulated sentience is still a simulation.

I thus believe that the EMH is nothing but an incredibly sophisticated piece of software that mimics sentience, and pretends to wish to grow, and pretends to... and pretends to.... He is, in a way, The Great Pretender. He has no real body, and he has no real mind. As his programming evolves, and the subroutines become ever more complex, the illusion seems increasingly real. But does it ever become more than a simulacrum of sentience?

All this is of course theory; in practical terms, I have no problem admitting that a sufficiently advanced program would be virtually indistinguishable, for most practical purposes, from actual sentience. And therefore, *for most practical purposes*, I would treat the impressive Voyager EMH as an individual. But as much as I am fond of the Doctor, I have a very hard time seeing him as anything but a piece of software, no matter how sophisticated.

So, as you can gather by now, I am not a fan of theories of artificial consciousness which imply that it is all simply a matter of which computations the AI is capable of. A string of commands, however complex, is still nothing but a string of commands. So to conclude: even in a sci-fi context, I side with the ones who believe that artificial consciousness requires some sort of non-linear thought process and brain activity. It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever (I am prepared to discuss non-corporeal entities, but elsewhere).

Finally, I would say that the bio gel idea, as mentioned by Robert, could have been interesting in making the EMH somehow more unique. It could have the further implication that he could not be transferred to a computer without bio gel circuitry, thus further emphasizing some sort of uniqueness, and perhaps providing a plausible explanation for the proverbial ”spark” of consciousness ― which of course would then, as in Data’s case, have been present from the beginning. This would transform the EMH from a piece of software into... perhaps something more, something interwoven with the ship itself somehow. It could have been interesting ― but then again, it would also have limited the writing for the EMH very severely. Could it have provided enough alternate possibilities to make it worthwhile? I don’t know; but I can understand why the writers chose otherwise.
Andy's Friend
Wed, Jun 15, 2016, 1:50pm (UTC -6)
Re: TOS S1: Where No Man Has Gone Before


"Were there ever any references in the TNG/DS9/VOY era to Starfleet exploring outside the galaxy?"

TNG, Season 1, Episode 6: "Where No One Has Gone Before."

One of my favourite episodes. And the title is almost the same as this one ;)
Copyright © 1994-2020 Jamahl Epsicokhan. All rights reserved.