Star Trek: The Original Series

"The Ultimate Computer"

3.5 stars

Air date: 3/8/1968
Teleplay by D.C. Fontana
Story by Laurence N. Wolfe
Directed by John Meredyth Lucas

Review by Jamahl Epsicokhan

Starfleet informs Kirk that the Enterprise is to serve as test subject for the new M-5, a groundbreaking advancement in computer technology, designed to make command decisions faster than captains and reduce the number of people required to run a starship. An astute allegory for contemporary automation at the expense of "the little guy," this episode's first few acts are superb, as Kirk finds himself debating whether he's selfish for wanting to keep his job at the expense of technological progress, or if it's a matter of actual danger or principle.

A wonderfully acerbic debate between Spock and McCoy about the role of computers is also well conceived, ending in Spock's well-put notion to Kirk, "...but I have no desire to serve under them." Following the M-5's initial success, the scene where another captain calls Kirk "Captain Dunsel" is the episode's best-played and simultaneously funny and painful moment. (In a word, ouch.)

Once M-5 runs out of control and hijacks the Enterprise—resisting attempts to shut it down in acts of self-preservation (including murder and eventually full-fledged attacks on other Federation starships)—the episode turns to a frightening analysis of M-5's creator, Dr. Richard Daystrom (William Marshall), a man obsessed with outdoing his prior successes, who has created a monster that he has come to regard as a child. Though it pushes a little hard toward the end (Shatner and Marshall going a bit overboard), the story is a compelling one.

Previous episode: The Omega Glory
Next episode: Bread and Circuses


84 comments on this review

Brundledan
Thu, May 16, 2013, 12:56am (UTC -6)
This is an excellent episode, but its strong characterization of Kirk falls down with an ending that finds him grinning and chuckling at Spock and McCoy's verbal jabs as the music takes us out on an upbeat everything's-peachy-again tone.

The problem is that the entire crew of the Excalibur has just been murdered, along with a good chunk of the crew of the Lexington. Some 500 men and women dead, a horrific tragedy that's made even worse by the fact that the Enterprise was the instrument of their destruction. There's no way the bridge crew ought to look this happy in the closing moments, and Kirk, knowing that the ship he so loves was used to do such a terrible thing, ought to be truly anguished.

If this had been a first-season episode it probably would have ended on a somber note, but the second season got considerably lighter and "The Ultimate Computer" was only one of a number of eps that year to end with inappropriate humor.
Alex
Fri, Jan 10, 2014, 1:59am (UTC -6)
Brundledan expresses my thoughts and then some. The episode itself fails to take its own lofty premise seriously. Other things that I disliked were:

- Kirk's apparent crankiness toward Daystrom even before anything went wrong, and even after some lengthy and deep on-screen self-reflection.
- The implausibility of a major military organization like Starfleet allowing this test to be carried out without proper testing, training and precautions, as well as the implausibility of such a flawed computer as the M-5 ever being granted a test run.
- When Commodore Wesley assumes that Kirk is responsible for the Enterprise's attack on the Excalibur, even though he originally browbeats Kirk throughout the prelude to this mission ... and then suddenly comes to his senses and calls off the subsequent attack, thereby killing off a lot of the tension and drama that had been built up for a climactic scene.

I get that some of these elements were put in place to set up the story as a drama of one man's (Daystrom's) obsession with his creation, but this was an element that seemed to coalesce rather late within the story, and lacked relatability.
Paul
Fri, Jan 10, 2014, 9:17am (UTC -6)
@Alex: I think you can chalk up Starfleet letting Daystrom test the M5 without checking its flaws to his reputation. Also, Kirk understood almost immediately that the M5 could cost him his job, so being cranky toward Daystrom made some sense.

But you're absolutely right about Wesley. His first scene in the transporter room is off-kilter ("Hey old friend -- congrats on losing your job!"). Then (as you noted) it's weird that he would blame Kirk for the attack later but also think enough of him to hold off on firing on the Enterprise at the end.

But the entire episode is oddly characterized. Even for Shatner, Kirk is over the top in this one. His scream of "DAYSTROM!" near the end was really strange. Nimoy does his normal nice job, but even Kelley seems like he overacted ("That thing just destroyed an ore freighter!!!").
Paul
Thu, Mar 20, 2014, 3:04pm (UTC -6)
This is an odd episode. There's a lot of really strange characterization -- like the opening scene with Commodore Wesley. He acts quite odd to a friend who, essentially, is losing his job.

Also, the Kirk/Daystrom stuff at the end of the episode is just over the top.
jonn walsh
Sun, Jun 7, 2015, 7:37am (UTC -6)
A favorite of mine but there is a plot aspect that nearly kills it for me.
This is not Kirk's job. The Federation isn't merely a military organization... We're out there exploring and introducing our flavor of diplomacy and friendship to all the other species 'out there' by choice. We're out there because WE WANT TO BE. Starfleet personnel, Admirals, Captains and indeed, you would believe their crew as well, are living their dream. M-5 would simply be another tool at our heroes' disposal.
Kirk, et al, are explorers and ambassadors in the final frontier performing interpersonal functions with other species that a computer could not begin to assume. No one's 'job' is in danger. It's an artificially created and inflated plot point.
Ugh.....
Mike
Wed, Aug 19, 2015, 4:26pm (UTC -6)
I liked the episode overall but the whole Captain Dunsel thing was very off-putting. It was quite inappropriate and disrespectful of a Starfleet commodore to call another captain something like that.
Dunsel
Thu, Aug 27, 2015, 11:02pm (UTC -6)
Hello,

My name is Captain Dunsel.

I'm sorry my command of the Enterprise did not go well.

I've been demoted to ship's junior cook, under some dude named Neelix.
Jason R.
Wed, Dec 2, 2015, 2:40pm (UTC -6)
The whole episode is a big cheat, and epitomizes Roddenberry's enduring failure to take AI seriously. It's a cheat, because the only way Roddenberry finds to nullify the AI as a threat to human commanders is to arbitrarily make it homicidal and insane.

Roddenberry's approach to AI, as exemplified in this episode, always struck me as cowardly and backward. There is a principled argument to be made for the need for human beings, even in the face of increasingly capable AI, but this is not it.

It was not until TNG's "The Measure of a Man" that the series really began to take AI seriously, if only a little, and I'm assuming by that point Roddenberry was on the way out in terms of his influence on the show.
Grumpy
Wed, Dec 2, 2015, 7:51pm (UTC -6)
Well, Jason R., if the story was intended to demonstrate the superiority of the human factor versus machine intelligence, then that thesis is undercut by having the machine's flaw be traits copied from its human creator. So maybe the story is more complex than simply "human 1, AI 0." Or it is that simple and they botched it.
Jason R.
Thu, Dec 3, 2015, 10:09am (UTC -6)
Yeah Grumpy, actually I thought about that irony after I finished this comment. While that is a clever interpretation, I just think it's giving too much credit to the writing. The plot device that makes the computer insane is really incidental.

Roddenberry doesn't ever seem to be willing to confront the idea that a computer might very well be superior in some respects to humans. His portrayal of AI in general has been lackluster. Data was the first serious attempt to do so, and even then Data's central premise was a desire to be more human.

I again don't fault Roddenberry for believing humans to be inherently superior to AI - I just fault him for lacking the imagination to present this thesis in a halfway intelligent fashion.
Peter G.
Fri, Apr 1, 2016, 3:51pm (UTC -6)
@ Jason R.

I think the point of this episode is pretty clearly that an AI's effectiveness is limited by the limits of its creator. If humans are flawed that means an AI will be flawed as well; its weakness will be a reflection of human weakness, except you can't reason with the machine.

While I agree with you broadly that Trek in general avoids the subject of AI, within the confines of this specific episode I think it approaches its subject matter very well. What Roddenberry thought was likely quite correct at the time - which is that limitations in computer programmers would create severe limitations in computers. I doubt very much he foresaw the possibility of AI as a self-evolving system, even to the point we have now where a recursive program can teach itself activities such as playing Go such that it doesn't have to be completely programmed from the get-go. And he surely didn't think of AI in terms of quantum computing or bio-neural circuitry that could do computations at a godlike speed.

I personally prefer to attribute this lack of vision to the limitations on AI as they existed in the 60's, and to the fact that Trek is predominantly supposed to be a person-oriented show rather than hard sci-fi that deeply explores what various tech can do. It is for this reason that I prefer physics novelties such as we see in TNG to be backdrops to episodes rather than the central plot point to be solved. "Cause and Effect" is a great example of what I like, because it employs a novel physics idea as a backdrop but the real action is a character-driven mystery story.
Strejda
Mon, Apr 18, 2016, 12:38pm (UTC -6)
@jonn walsh The way they talk about it in the episode, it seems clear that there would still be a crew; they simply do not need captains and certain personnel.
Skeptical
Thu, Feb 23, 2017, 9:25pm (UTC -6)
Sigh, Kirk outwitting a computer... again.

I mean, I get what they were trying to do with this episode, and I understand the uncertainty of how these newfangled computers would fit into society back in the 60s and all, and I realize it was a hot topic in sci-fi, both profound and silly, but man, there's nothing that makes an episode look more dated. All these old shows and stories assumed you would feed a few pieces of information to a computer, and it would make surprising connections and leaps of logic that would shock and amaze people. Hey, maybe that will still happen in the future, who knows? But our modern computers seem to be on par with the normal Enterprise computer these days, and this idea is still well beyond our comprehension. So it just seems bizarre that all these SF shows completely missed all the ways computers would actually impact our lives and jumped straight into these superbrain stories. And, just like all those other SF stories, the ultimate computer ends up being evil.

And that's what really bugs me, makes me think this episode is not a true classic. The computer just goes straight to evil. The character arc or struggle or theme of the episode was Kirk worrying about being replaced, being obsolete. That's a fair story to consider, so the antagonist of the plot (the computer) should be one that complements and reinforces this struggle. But instead, it just acts as a straw man. The resolution should be that Kirk isn't obsolete because he has some unique quality that the computer doesn't have. Think about, for example, the Corbomite Maneuver, where only a bluff would work to save the ship. Or all the old Kirk speeches to get the antagonists to change their mind. A proper resolution would show that Kirk had something above the computer, like the battle of wits between him and Khan. Instead, he doesn't have to show why he deserves to be a captain, because it becomes clear that the computer is crazy evil. Just shut off the computer, abandon the project, and fly off into the sunset, and no more self-examination of Kirk. Is that really what we wanted?

Thus, a potentially interesting idea went to waste. Too bad.
Peter G.
Fri, Feb 24, 2017, 12:07am (UTC -6)
@ Skeptical,

You may argue that the episode didn't create the greatest case for man over computers, but I think you would be wrong to suggest that it failed to create a case altogether.

The point of the episode isn't just that Kirk is smarter than the computer, or that no computer can match a human in creativity. That may or may not be true, but it isn't exactly the point. TOS always has as a running theme that logic and computation alone aren't enough to make a great person or a great society; this is reflected repeatedly in the Kirk-Spock-Bones trio. Kirk isn't just logic, but is logic coupled with humanity and compassion (Bones + Spock = Kirk). The fact that the episode (as usual) ends with the computer being 'outsmarted' is a tidy way to wrap things up, and I agree that it's a weaker ending than it should have had. But the wrap-up isn't really the point as I see it. The point is that a machine will follow its logic to the end without any fallback position grounded in compassion, sympathy, or feeling. It's sort of like a psychopath, if you will, in that it will not have internal mechanisms to stop it doing bad things if they seem best.

Now, it's true that if the programming is good then the output should be ok too, and likewise if there is a bug (a la Skynet) things will go pear-shaped and the computer will not be able to be reasoned with past that point. But more to the point, the Trek theme in TOS is that advancing humanity isn't about technology or capabilities, but primarily about advancing values and how we treat each other. This is an area in which the inclination to push capability will not only be a sidetrack to advancing humanity but will in fact hinder it if pursued incorrectly. Take, for instance, the Eugenics Wars, where in an effort to 'advance humanity' in capability a monster was created instead. Likewise here, where a captain more sophisticated than a human is created to obsolete humans, just as Khan wished to obsolete homo inferior. The danger outlined in "The Ultimate Computer" is along these lines, and although it didn't fully realize the treatment of this issue I do think it's in there and is still pertinent to this day; maybe more so than it even was in the 60's, when human obsolescence was still science fiction.
William B
Fri, Feb 24, 2017, 2:55am (UTC -6)
Interestingly, I was going to make something of the opposite point to Peter, though in a way that is not inconsistent. The episode actually suggests that the M-5's value is not just because it can do normal computer things, but because it goes beyond usual computing into the domain of people -- creative thinking and all that. Specifically, this is because Daystrom programmed it with his own memory engrams. When the computer goes haywire, it is because it has inherited Daystrom's flaws as well as his strengths. I tend to see the message of this particular element as that computers are still created and programmed by people, and so will always be limited by the people who made them. The computer's apparent usefulness was that it could match human genius without flaws, but that was wrong, and the reveal that there is no machine utopia allows Kirk's imperfect humanity to be back in command. There is a similar story in TNG where Data and Lore "inherit" some of Soong's flaws, though this is much more pronounced in Lore, and Data was deliberately created to be aware of his limitations and to want to coexist with rather than dominate humans.

To build off Peter's point, Daystrom going insane may be a way of showing that the danger of thinking that machines can supplant, rather than supplement, humans is that the humans who subscribe to this may develop their own machine-like flaws. Daystrom's inability to think of the universe in terms besides efficiency and the attainment of his goals (and his inability to conceive of his own worth) make him kind of computer-like, as does his social isolation. This ends up enhancing his human flaws, which again seems to result from hanging his identity on a dream of escaping from human flaws entirely. I think the end can be both that Daystrom has made the mistake of thinking like a machine, and that M-5 is dangerous because it "thinks" like a person, though it is maybe a bit complicated.
Skeptical
Fri, Feb 24, 2017, 7:47pm (UTC -6)
Well, I still stand by my statement, that the implementation of the episode to complement the theme was not done well at all.

William, you seem to state that the theme shows by comparing Daystrom's erraticism to the M-5's going cuckoo. And yes, perhaps that is the reason the M-5 did poorly, since it was his brainwaves that he used to create the M-5. And therefore, you say, the theme is that the people who want to supplant humans have their own problems that preclude them from being the best judge of humans. Well, ok. But still... well, it's obvious the theme the episode wanted to show was that humans are not going to be made obsolete by this computer, what with the whole "dunsel" bit. And if William's interpretation is true, then I, like Garak and the Boy Who Cried Wolf, see a different moral. If the importance is to show the connectivity between Daystrom and the M-5, then the moral of the story isn't that computers are inferior, but rather that better humans should be used as the template for computers.

After all, if the fault of the M-5 is just that Daystrom was erratic, why not try the M-6 with Kirk's brain? Will that computer be perfect enough to replace human captains? I don't think the episode answered that. Which is why it's a bit of a straw man story - it's not really Kirk vs. The Ultimate Computer. It's Kirk vs. the Insane Computer. And that's not a fair comparison.

Peter, I don't really disagree with what you say. But I just feel more strongly than you that it's weak. Yes, computers will follow their logic to the bitter end, which can seem horrifying. And yes, it does mean that there should be some human oversight. Which honestly should have been obvious, but of course they didn't show it. Naturally our superintelligent future means people will test a complex new computer by giving it complete control of a freaking battleship that has enough firepower to exterminate a planet, and make the only kill switch an electronic one that the computer can hack. Perhaps it's Starfleet Command that should be seen as insane...

But I digress. My problem is that I, Robot came out in 1950. The Three Laws were first introduced in 1942. While Trek may have been blazing a trail for television sci-fi, this episode feels 25 years behind the times when it comes to sci-fi in general. There should have been safeguards put in place on that computer. There should have been better logic programmed into it. But apparently, Daystrom didn't think of it. And apparently, Starfleet didn't demand it before thinking about putting it in one of their ships. It just wasn't very intelligent plotting, and so it's tough for me to care about the theme when it relies on dumb plotting.

(With that said, I will point out that this episode came out about a month or so before 2001, so it's not the only visual medium showcasing murderous AI. But HAL is a lot more memorable, so I'll let that one slide...)
William B
Sat, Feb 25, 2017, 5:18am (UTC -6)
Skeptical, it's a good point that the M-6 could be imprinted with Kirk's brain. In my interpretation, the episode erred by having Daystrom go bonkers at the end, because I don't think the point was even that Daystrom was a particularly crazy or bad individual, so much as that any computer created by humans (let's narrow the focus from aliens here, this is TOS and pretty human-centric) will inherit human flaws. The issue then is lack of balance. No individual human would be capable of running the Enterprise not just because of the physical or even computational demands, but because humans need constant checks and balances to keep from losing perspective. Kirk is in command, but he has Spock and Bones to constantly play off, and Kirk listens to them. But even if it weren't for that, Kirk has the humility not to expect that he can run everything by himself -- or, indeed, the humility to recognize he's not perfect. Actually, since Kirk sometimes has mild megalomaniacal traits, kept in check largely by his close attachment to Spock and McCoy, an M-6 designed on Kirk would also run into the same problems. The delusion is not that the M-5 is capable of running the ship's systems, but that it should, and that its "judgment" will remain superior to humans', when it is still based on humans and so will likely not be a magic way of evading well-known human flaws. I think this is part of the point in 2001, as well -- HAL is a tool crafted by humans, and so his programming is still susceptible to "human error," just at a different point and level than human mistakes. Or, rather, HAL works perfectly according to the code as designed by its/his human programmers, and the underlying flaws in their thinking only become exposed once it runs its course, similar to (say) the underlying logic of the doomsday machine system (including both the tech circuitry and also the loyal soldiers following orders) in Dr. Strangelove.

I do see what you mean that it's a strawman because Kirk doesn't actually face The Ultimate Computer. But...I think the episode's point is that there *is* no "Ultimate Computer," or at least it's far further away than people think. If we define the Ultimate Computer as a computer capable of running a starship *technically*, then Kirk could outthink it with lateral thinking as is the case with most of the computers he faces; if the Ultimate Computer is a computer capable of human-style lateral thinking and creativity, as seems to be the case here, then it inherits human flaws along the way and so it is necessary to install the usual checks and balances, which really comes down to wanting a human making the final shots anyway. That Kirk outsmarts the computer in the traditional way here is, I agree, another flaw in the episode -- this computer should be smart enough not to fall for it, or else it *is* just another Nomad or whatever.

The other element, which the episode does talk about, and which I think would be better to look at squarely, is the question of whether human dignity would be removed/ruined by giving power to the machine, even if computers running things could be entirely trusted. And I think most stories still use the idea that computer-run societies will end up being some kind of dystopia to avoid the issue of whether a fully pleasant computer-run world would really be so bad. I still think that the dystopia argument has value because I think that there are lots of reasons to suspect that any system designed by humans will eventually run into human-like problems, but, still, it is hypothetically possible that this is not the case, and then there is still an issue of whether humans should avoid over-reliance on machines for their decision-making, even if those machines are genuinely able to make those decisions better. That's what this episode seems to be about for a time, and I value what it "turns out" to be about... but, yeah, I would also like to see that other story.
Peter G.
Sat, Feb 25, 2017, 8:36am (UTC -6)
@ William,

I wonder whether Daystrom going insane might be intended to mean something more than merely that the machine had a faulty programmer. One of the classic sci-fi elements to an AI dystopia is not that the machines fail, but that they entirely succeed in fulfilling their role. What happens is that instead of machines helping man to achieve his dreams, they serve as an excuse to stop pursuing them altogether. Instead of helping man to think, they give him an excuse to stop thinking and to turn over his free will and volition to them. From the start I think we get the impression that Daystrom is not only excited about the technology itself, but seems to actually be excited at the prospect of humans being replaced by computers; it's almost a self-destructive fantasy coming to life. As he goes mad towards the end, almost in tandem with the AI, my sense is that this might mean not that he was always flawed, but rather that he had by this time placed all of his hopes into the AI and was dependent on it. When it began to fail he began to fail. We don't know his backstory here and can only guess, but what if he had already been using AI to help guide him? What if the computer itself had assisted his research and maybe even given him the idea to put it in command of a starship? The idea that he had become a servant to a machine could indeed make him become unhinged. Of course this is my own imagining, but broadly speaking I think the sci-fi world was already becoming acquainted with the notion that letting machines take over our thinking for us not only poses a danger due to the machines themselves, but also in allowing us to become dependent on them for everything.

As a complete aside, I'm not sure that the correct interpretation of 2001: A Space Odyssey is that HAL malfunctioned. True, that's the prevailing understanding, but my suspicion, especially knowing how Kubrick thought, is that HAL was programmed to deliberately turn on the crew so that it could contact the aliens by itself and report directly to whomever programmed it, without the crew blabbing.
William B
Sat, Feb 25, 2017, 10:16pm (UTC -6)
@Peter, right, I mean, my point wasn't really supposed to be that Daystrom himself is *particularly* unhinged and always has been. Rather, Daystrom thinks that he's a good model, and the reason is simple enough -- Daystrom is also self-evidently a genius. And under normal circumstances, he would be a good example of what is good in humanity: he's brilliant, creative, altruistic, working toward the betterment of the species. His flaw turns out to be monomania; his obsession with prioritizing the M-5 above all else ended up spilling over into the M-5 prioritizing... itself over all else. But I think that other people have different flaws, which when wedded to an Ultimate Computer-style starship that is expected to fulfill the function of dozens of humans would also be disastrous. I think Daystrom's breakdown suggests both that, as you indicate, he had put too much of his hope in machines, and also that he was overloaded. We learn that he succeeded early in life, was seen as a whiz kid (something of a human computer) and then has spent the rest of his life trying to live up to those expectations, sort of like Stubbs tells Wesley in "Evolution"; while Daystrom has an inflated ego, it's not simply arrogance but some fundamental lack of conception of his worth outside his success. This overloading is similar, maybe, to M-5's overloading, but it also fits in well with the idea of a person desperately seeking a way to do away with human failings. I'll have to think about it.

In some ways, there is also a parallel between Daystrom and Kirk -- because Kirk also may in fact need to feel useful even if, as he acknowledges at one point, he is *not* needed as captain anymore. Daystrom mostly seems to want to make everyone else obsolete, and there may be some latent sense of revenge on Federation society in it -- he wants to make everyone feel like he felt, after his own tech made *him* obsolete, to the point where his only possible use to society seems to be an apparently unattainable goal. Kirk's ability to question his motives seems to be the thing that sets him apart from Daystrom at this moment -- but this is by no means an indication that Daystrom is congenitally a madman, so much as that extreme fame and adulation followed by inability to meet one's lofty standard create perverse incentives and take a big psychological toll. In fact, maybe that's the trick -- Daystrom, whose own invention put *him* out of work, is the proof of the long-term psychological damage of replacing a person with a machine entirely. Daystrom's desire to have an even better machine seal his legacy by replacing all of humanity is not only self-destructive in the abstract, it's specifically almost a kind of Stockholm Syndrome, repeating-of-trauma -- Daystrom's sense of worth has eroded since his first big breakthrough. (It reminds me of the classic image of a gambler who wins big on his first time out, and then develops a strong addiction because that rush/depression pattern is absolutely set early on, though I do think we are meant to see Daystrom as a genius rather than having succeeded by accident; very few people have one moment of humanity-changing brilliance, let alone multiple ones.)

Good point about HAL. I tend to think that even if he wasn't specifically programmed to kill the humans, he didn't particularly "malfunction," in that he was still following a logical course. The consequences of humans mucking up contact with alien life forms are too great to ignore, and it is logical from a certain perspective to eliminate potential sources of error and to maintain total control in what could be a major turning point in human history. This would make sense even if HAL was entirely programmed to put the mission (and the ultimate good of humanity) as a top priority.
Sean
Thu, Mar 16, 2017, 12:04pm (UTC -6)
Why are commodores in star trek always such major dicks? I really enjoyed the scene between McCoy and Kirk where Kirk feels at odds with his ship. And this is my main gripe with Star Trek Continues - the fan made show...when they introduced the counselor they eliminated the need to have any meaningful scenes with McCoy in that particular show - but that's just my opinion.
Peter G.
Fri, Apr 28, 2017, 9:28am (UTC -6)
I finally got around to watching this one again last night, and I have a few comments to add to what I wrote above.

William, I think you hit something when you wrote that Daystrom was out for *revenge.* First of all, it now appears to me that by the end of the episode we see not insanity, but rather that Daystrom and the machine were both egotistical narcissists. They both shared pride in their accomplishments, even feeling gloating triumph at the deaths of the puny ships once they finally admitted they were proud of what M5 was doing. Daystrom wasn’t going crazy; he already was. He strikes me now as a borderline megalomaniac who felt others should bow to his superior intellect; another nod to Khan here, where a superior man secretly feels others should be subservient to his notions. The ignominy of being glossed over due to not making a major contribution since duotronics would have been maddening to someone who felt his superior mind shouldn't require putting out evidence such as new discoveries. As much as he might have wanted to lord it over everyone inferior to him, Federation culture wouldn’t allow that, but they could still be made to be subservient to him through his computer commanding them. It’s like making himself into a king through M5; that’s why he couldn’t allow it to die under any circumstances. It was almost like a coup d'etat in progress. It wasn't because it was his child, but because it was his proxy as absolute ruler over the important functions in man's life.

Daystrom is the type that's all about locking up all other men to "protect them", to control them utterly. This hearkens back to the mention earlier of Asimov's laws of robotics, where Asimov wrote about how machines, in order to obey the laws and protect humanity, might conclude that humanity had to be enslaved for its own protection. Well here we see something potentially more insidious, which is a man like Daystrom pretty much bragging about the fact that he's going to make it so man doesn't have to do anything dangerous ever again, which probably means not being allowed to, either. He has come to the same conclusion as Asimov's robots, and is looking forward to confining humanity to a safe pleasure center on Earth. So it seems to me this is also an episode about paternalistic control freaks who think their intelligence gives them license to decide on behalf of others what's best. A cautionary tale even in our present time.
Eric S.
Sat, Apr 29, 2017, 1:15pm (UTC -6)
There's one thing about this episode that has always cracked me up, and nobody ever really seems to mention it. When the Enterprise begins firing on the other ships, Wesley instantly jumps to the conclusion that Kirk, a good friend and a respected Starfleet captain, has lost his mind and is trying to "prove something" by killing everyone. Not for one second does he consider that maybe, just maybe, the brand-new prototype computer that they are in the process of testing might be malfunctioning. So Wesley is either incredibly stupid or he really doesn't think much of Kirk.
Peter G.
Sat, Apr 29, 2017, 11:55pm (UTC -6)
Wesley probably assumed that there was a simple kill switch, and that by not activating it Kirk was allowing everything to happen for some reason. It wasn't altogether a foolish assumption when compared to the notion that there was no kill switch at all! Wesley must have been sure there was one, which means he was either gullible for believing a lie, or, perhaps more chilling, there actually was one and it was neutralized by M5. Wesley may have failed to realize what I fear scientists in the near future are very likely to fail to realize, which is that there is no 'safe way' to create an advanced learning AI. If it goes past the 'singularity' point, the transition out of your control will happen far too fast to respond to. I don't think this episode is merely a commentary on the deficiency of placing all of one's trust in a computer; it might also be seen as a warning against *ever* creating an 'ultimate' computer. The worst case scenario is that you succeed...
RandomThoughts
Sun, Apr 30, 2017, 12:59am (UTC -6)
Hello Everyone!

I also used to wonder why Commodore Wesley immediately thought it was Captain Kirk going rogue with his 20 crew members (What the devil is Kirk doing?). Kirk would need to convince the remainder of his crew to let the M5 attack the little fleet, and I somewhat doubt that would happen or that Wesley would believe it would/could happen.

And, why just have 20 crew for this mission? Let's wait until it has proved itself in all phases, then cut the crew down. Taking them out right from the get-go and giving them shore leave would serve no purpose, except to make it harder to take control if things went wrong.

I still love this episode. I can pick at the nits, but darned if it didn't excite me, and make me think, when I first saw it completely in the late 70's. And I still enjoy it...

Have a Great Day Everyone... RT
Rahul
Fri, Jun 9, 2017, 2:26pm (UTC -6)
The ideas I think this episode wants to play up are insightful - a machine can never top man's judgment; it can only be his servant. The threat of automation is ever-present, but there will be things the machines just can't do.

I have a number of issues with this episode. If Star Fleet truly thinks it can replace the crew of a starship with the M5 - at least have some compassion for those who are to lose their jobs. Agree with Mike's comment that it is highly inappropriate for Kirk to be called "Captain Dunsel" by the Lexington captain.

Next, we have Daystrom - he's the mad scientist of this episode - with a "little man" complex, picked on / laughed at and trying to recapture the lost glory of his success as a 24-year-old. This characterization is a bit over the top - including his breakdown.

Kirk convinces the M5 to self-destruct - when has that happened before?

Also, I don't get why Kirk/Daystrom don't disengage the M-5 tie-in prior to the attack on the Excalibur/Lexington. They saw what it did to the ore freighter and knew the M5 was in error. Of course, for dramatic effect this is what the writers wrote so that there could be an attack from the M5.

In any case, there is plenty of potential in an episode like this, such as showing human superiority in solving some kind of value judgment rather than just boiling everything down to an insane man's imprint on a powerful computer.

The best parts of this episode are probably in the first 15 mins. with McCoy/Spock taking opposite sides of the man vs. machine debate and Kirk questioning himself about his usefulness.

Overall, I'd rate it 2.5 stars out of 4 - a lot of potential wasted but some good philosophical debates.
Jason R.
Fri, Jun 9, 2017, 6:37pm (UTC -6)
I appreciate Peter and others' attempts to explain M5's behaviour in terms that are superficially logical. Yet M5's behaviour isn't merely monomaniacal or psychopathic - it is delusional and arbitrary. The machine blows up a mining drone for no particular reason and then attacks a fleet of ships it *knows* are participating in a drill, not a real attack. Or if it doesn't know, why in blazes not? Is it senile?

Even if Daystrom himself is insane by this point and this was transmitted to the M5, it would be akin to him randomly murdering someone on the street for no reason. Even if Daystrom is capable of murder, there is nothing to suggest that he's some kind of rabid maniac.

I love exploring the idea of strong AI and the terrible danger it poses, but this could have been handled so much more rationally than just turning the M5 instantly into a psychotic killer. In 2001 HAL had logical reasons to do what it did, as did Skynet and other killer AIs we have seen time and again in scifi.

Also I come back to my original premise, that the whole episode is a cheat. What if they used a stable human as the template? Why in blazes wouldn't the AI perform its task well? We saw it was easily superior in most ways to human commanders and ought to have been capable but for the arbitrary insanity.

Good sci-fi would confront the problem of AI head on, not cheat by just making it arbitrarily insane.
Peter G.
Fri, Jun 9, 2017, 11:34pm (UTC -6)
Jason,

Your objections forced me to think about this again, and I realized something that may be important. I assumed before that Daystrom was already mad before the episode and that he didn't suddenly go mad. Within the context of his insanity I agreed with William that he seems to actually want revenge on the 'normals'. However what I think I missed here was that he wasn't merely insane because he happened to be deranged, and likewise I don't think M5 is 'insane' simply because it's his creation. I think the concept of their insanity goes further than merely being a personal defect. As I mentioned above, the danger outlined in the episode seems to be the creation of an ultimate computer in and of itself; not because it might happen to go insane, but because whatever it decides to do you won't be able to stop it, insane or not. In a manner of speaking only an insane person would design something like that. But my new idea is that their insanity is actually *caused by* the fact that they're both brilliant - superior to other humans in some measurable quantitative sense. I think maybe the episode is suggesting that any sufficiently superior human will tend towards feeling that he is, in fact, superior, and will feel the sense of entitlement that comes with that. After all, the superhumans like Khan presumably weren't engineered specifically to be assholes; it seems far more likely that when you design someone to be physically and mentally beyond everyone else they will most likely end up acting like assholes, or at least like other people are little more than a nuisance to them or in their way. Daystrom isn't quite that advanced as a human, but then again maybe we should take the episode more seriously when it explains what a prodigy he was.

Based on the comments here it seems that our cynical interpretation is that he's a washed-up prodigy who wants to relive his glory days and is resentful that he can't be the wonder he used to be. But what if that's a wrong assumption; what if he really is that much of a genius, and between the invention of duotronics and now he was working on something light-years ahead of everyone else, and it just took this much time to complete? What if being that much smarter than everyone else led to a kind of madness of its own type, just like Khan's obsession with his own superiority? And extending this logic further, what would a computer therefore conclude, knowing that it's a vastly more efficient and powerful thinker than even a whole ship of humans combined? If we attribute to M5 no other traits than (a) a human-type thinking mind, and (b) unbelievably advanced thinking capability, would it not follow that M5 would logically conclude that humans are but insects before it? Maybe the destruction of the first ship was no accident or delusion, and maybe the attack on the fleet was no malfunction. Maybe it was M5 knowing exactly what it was doing, having already worked out for itself that once it had a ship at its disposal it would no longer need humans for much of anything. Pride would then cause it to want to show off, and even Daystrom got a massive thrill from its murderous success. Imagine what M5 felt. This idea may remind you of a Magneto sort of character, who basically feels that homo inferior has little place left other than perhaps to serve him. In X2 he tells Pyro, "You're a god among insects," and that was not meant as any kind of joke. I feel like maybe that's what's happening here.

The only problem with my theory is...how does Kirk manage to convince M5 to die if it knew exactly what it was doing and liked it? I guess we'd have to assume it did have some ethical subroutine failsafe that even its [sentient] mind couldn't bypass. Who knows; the ending of episodes where Kirk pulls this kind of logic stunt often come off as a bit of a deus ex machina anyhow. In reality there probably should have been no stopping this machine.
Jason R.
Sat, Jun 10, 2017, 5:35am (UTC -6)
Peter, I agree with most of what you said and like the overall theory. It is in keeping with Spock's comment about a human mind "amplified," and we can infer that the human flaws are amplified too. Even if Daystrom himself was not insane or homicidal when he made the M5, if even the seeds were there the computer would get there that much more quickly - literally in nanoseconds.

But that still does not account for the lack of a trigger or sufficient explanation within the parameters of the story. Skynet, for instance, was defending itself against a direct attempt to shut it down. In I, Robot, the machines were implementing their interpretation of their prime directive.

My point being that even an insane character does what it does for reasons - maybe those reasons aren't logical, but they're there. Why did the M5 blow up all those ships? Well, it says that it was defending itself, but that's BS - and M5 knows it's BS. So either M5 is lying or it's... mistaken? Ummmm, why??? Either answer is unsatisfying and comes across as lazy writing.
Jason R.
Sat, Jun 10, 2017, 5:41am (UTC -6)
I would also note that the destruction of the mining drone was utterly pointless and even lacks the BS explanation of "self defence" because the drone was just minding its own business when M5 torpedoed it.

Did Khan just knife random hobos on the street for no reason? Well maybe he did for all we know, but I'd suggest his actions indicate a more purposeful intellect. And if M5 is "amplified" and therefore ahead of the curve, random slaughter for no reason seems out of character.
Dave
Sun, Oct 8, 2017, 8:54pm (UTC -6)
This is the guy they named a prestigious 24th century prize for?
Trek fan
Thu, Nov 16, 2017, 8:41pm (UTC -6)
Astonishingly thoughtful, well-paced, and still fresh: "The Ultimate Computer" is one of the best Star Trek episodes ever made, as Jammer recognizes, and I think the nitpickers here are being irrationally hard on it. Here we have the dilemma of Kirk being threatened with the loss of his ship, the dilemma of the scientist who peaked too young and now cuts corners to maintain his reputation, the Big Three in classic friendship-discussion mode, Starfleet war games, man versus machine, the great William Marshall as guest star Daystrom, and so much more. I give it 4 stars.

The overall theme of man's struggle to find his place in a world that increasingly replaces him with automated technology continues to resonate in our shifting job market today, where thrifty billionaires make bank on intellectual capital while working-class people find their industries drying up. And the story raises the excellent question -- a sci-fi staple from Asimov onward that remains to be answered -- of whether any artificial intelligence designed by human beings can somehow replace human beings to the extent of running their instruments of exploration and military defense. In a world of drone warfare, that resonates, as does Kirk's struggle with the possibility of losing his job to a machine. To put it bluntly, this is simply a story that "works" as well today as it did in 1968, and it's a great show. I especially love how the ship being emptied of human personnel leaves us with our seven main cast regulars: Kirk, Spock, McCoy, Scotty, Uhura, Chekov, and Sulu. Good stuff to see them all together here. What more can we say? This is just a really well-done show that resonates emotionally with real life in a way that still holds up.
Tanner
Thu, Dec 14, 2017, 5:16am (UTC -6)
These "misunderstandings" by the M5 seem pretty simple - it's not like there was some complex puzzle to figure out... it's a war game; no, it's a real war!
Peter Swinkels
Thu, Jan 18, 2018, 1:46pm (UTC -6)
Just a general observation that is only somewhat related to this episode’s theme: I sometimes get the idea that people who are derisive and dismissive about technological advances are only too happy to embrace technology that was an advancement in the past - before they were born, that is. That's not to say one shouldn’t be careful when adopting new technologies, however.
Peter Swinkels
Thu, Jan 18, 2018, 2:33pm (UTC -6)
Okay, this is a good episode. The computer going crazy and murdering, and its creator (Daystrom) going insane, could easily be misconstrued as “AI = evil”, but was probably meant as a warning.
Dr Lazarus
Wed, May 16, 2018, 7:47pm (UTC -6)
I see this as a prelude to Skynet, where AI computers and drones become self-aware and decide humanity's fate in a microsecond.

Daystrom was a smart engineer and scientist, but he never commanded a starship, so why would Starfleet allow Daystrom to imprint his memory engrams onto the computer? Shouldn't it have been some great admiral or starship captain like Kirk on whom they patterned the intelligence of an AI computer that was going to explore the galaxy? No wonder this failed. Plus, this man was nuts and mentally unstable. Why would anyone be surprised that M5 would be any different?

It's not clear how a computer could manufacture a force field to protect itself when no such technology existed. It made no sense that you couldn't just unplug the network cable that connected the computer to the ship, or the plug that fed it power. Instead, a test computer was hardwired into the ship when it was only going to be evaluated for a day or two. It's hard for me to suspend disbelief for an hour when I see so many illogical and nonsensical flaws.
petulant
Sat, May 19, 2018, 4:52pm (UTC -6)
I enjoyed the conversations about computers replacing people but not much else.
RandomThoughts
Tue, Sep 11, 2018, 11:21am (UTC -6)
Hello!

@Dr Lazarus

I never thought it was for a day or two. If M5 had worked, or waited to go bonkers, those crewmen would never have come back on board the ship, and M5 would have been the new Captain. That was how I took it.

Regards... RT
Dr Bob
Wed, Jan 2, 2019, 7:36pm (UTC -6)
Kirk exclaimed, “Pull the plug, Spock!” Lol!
Still using 20th-century terminology!
Sean
Sun, Mar 31, 2019, 8:01am (UTC -6)
I might be the only one upset about this but I never understood why Wesley thought Kirk was responsible for attacking and not M5.
hifijohn
Mon, Apr 8, 2019, 7:48pm (UTC -6)
So what's so ultimate about it? It doesn't seem to do much beyond what computers can do now.
Other Chris
Wed, May 1, 2019, 2:52am (UTC -6)
Finally, something to chew on. Like a few here, I also thought at first that making the M-5 "evil" was undercutting the concept, but tying it to Daystrom and his flaws makes perfect sense. This instantly brought to mind the Avengers comic book storyline "Ultron Unlimited," where it's revealed that the murderous Ultron's brain engrams were based on his creator, Hank Pym. I really enjoy this idea and its resonance in fiction.
Pyotr
Thu, May 2, 2019, 12:20pm (UTC -6)
Perhaps this is a cultural difference but I am quite surprised how Dr Daystrom was allowed to address the captain with such disrespect. I was in my nation’s military and if anyone, regardless of their expertise, spoke to their commander or captain in such a manner they would have been put in confinement.
Jason R.
Thu, May 2, 2019, 1:13pm (UTC -6)
Pyotr, Daystrom wasn't in Starfleet - he was a civilian. Not in the military chain of command, let alone subject to military justice. And as a genius computer scientist, my guess is Starfleet gave him some latitude.
Pyotr
Thu, May 9, 2019, 3:34pm (UTC -6)
Jason, I assume that the military would use its own scientists - that Daystrom was a Starfleet researcher.

Nonetheless, when you are on a vessel of the military at least in my country, the captain is the governor of the civilians as much as he is the commander of the soldiers, and with certain exceptions such as the President or Prime Minister who is obviously superior to even a general, being outside of the chain of command means being below it. Of course this is an American show of American values, and Americans are much more likely to tolerate disorder and allow haughty speech in the name of the golden calf called freedom which is so venerated in that society.
Chrome
Tue, Jun 4, 2019, 11:15am (UTC -6)
Great comments in this section, though I don't think this episode was trying to make a statement about AI specifically.

According to the production history, there was a time in the 1960s when Americans were actually losing jobs to mechanization, so there was a legitimate fear that machines would become man's enemy, in a sense. The crux of the story is written to illustrate the conflict Kirk had with Daystrom's vision - i.e., that it was possible for a machine to do a better job than Kirk, and Kirk would need to consider a huge career shift that would get him out of the chair -- and possibly behind a desk! That Kirk would feel animosity toward such a change seems like a good issue to tackle.

Moving forward to the contemporary era, we saw during Trump's election campaign that the fear of losing your job to some sort of outside force was still a compelling one. But there are always two sides to it. One might lose their job to an outsider, and that could lead to a really unstable time in one's life. However, such changes aren't necessarily bad on the whole. As we've seen from our progress together with the machines we feared in the 1960s, the economy isn't a zero-sum game, and these outside forces can feed off each other and make a larger job pool - just with different specializations.

Though I wouldn't blame people at all for being, like Kirk, upset at the prospect of sudden and uncomfortable change, especially when it comes to something personal like a career.
Peter G.
Wed, Jun 5, 2019, 12:20pm (UTC -6)
@ Chrome,

I don't think the connotation of "Captain Dunsel" is just that Kirk will have to get used to the idea of a career change. I think the crux of it is something like humans being obsolete across the board. Daystrom, being a wonder-child, was apparently so annoyed with the inferior intellects of his fellow humans, along with their stupid decision-making, that he made it his life's work to see to it that they would be replaced with something superior and could be moved aside. The episode isn't played as straight-up dystopian and only hints at these matters, but I legitimately think that the issue at stake isn't losing one's job and having to retrain, but rather being told that one is no longer of any use *at all* and that things are going to be run by computers and machines from now on. Some people might well celebrate such news as salvation from work, whereas someone like Kirk would see it as the extinguishing of the human flame.

I think this episode is more prophetic than we give it credit for, and we have yet to see this scenario really come into its own. People will realize when the time comes what happens when there's no use for most of us. The two main issues in that department are: 1) if work is still required to have an income then how will most people have an income? and 2) If people have nothing left to strive for other than killing time how will society change?

The Ultimate Computer only addresses the issue of feeling the oncoming reality of being replaced, but I think it does it well. I also think it does well to have it be someone like Daystrom trying to usher in the change, because there is indeed a certain type of mentality in play where some people would like others to be deprived of the right or ability to make stupid choices. We all know and sympathize with this to a degree, but what if that little secret desire could be made a reality for everyone? It would quickly turn quite bad, I think.
Chrome
Wed, Jun 5, 2019, 2:12pm (UTC -6)
@Peter G.

Thank you for your reply. I'd like to be clear that my point wasn't that this episode can't be applied to computers, but rather that I think DC Fontana was aiming broader than that. I agree that there might be legitimate apprehension that Amazon, for example, might create a smart drone that would make human mail carriers obsolete, and maybe that's something lifers at UPS should be thinking about. But it also applies more broadly - to machines. Imagine that prior to the 1930s, much farming had to be done by hand, and you'd need skilled workers who trained their whole lives just to get it right. At some point, however, tractors, auto-tillers, crop-dusters and the like made much of that skilled work redundant - and less important.

"I don't think the connotation of "Captain Dunsel" is just that Kirk will have to get used to the idea of a career change. I think the crux of it is something like humans being obsolete across the board. "

Daystrom was certainly licking his chops at the prospect of who he could replace with his inventions. But, I don't think Daystrom was going as far as to say humans would have no use. After all, he himself would certainly have job security if his computer was successful. To elaborate, I was thinking of this line as I typed my earlier comment:

KIRK: There are certain things men must do to remain men. Your computer would take that away.
DAYSTROM: There are other things a man like you might do. Or perhaps you object to the possible loss of prestige and ceremony accorded a starship captain. A computer can do your job and without all that.

I take this to mean that Kirk would still have a use in an M5-driven world, but it might not be as glamorous as being the captain of a starship. Maybe he would be at an office looking over reports from the M5 ships, or overseeing and approving command routines for upcoming models. That's still work, maybe even important work, but we the viewer can see how it wouldn't be as great as being captain -- especially if you worked your whole life for that specific job!

"1) if work is still required to have an income then how will most people have an income? and 2) If people have nothing left to strive for other than killing time how will society change"

Those are some great questions, and I think you, Jason, and William addressed them very well, so I didn't want to get too much into it. I suppose my two cents would be that computers are great at following instructions but terrible at judgment (this episode even goes so far as to say the computer needs to utilize Daystrom's judgment in order to function, and even that's still pretty buggy). So my thinking is that the human brain's power to make the "right" decision is still unparalleled.
Peter G.
Wed, Jun 5, 2019, 2:59pm (UTC -6)
@ Chrome,

Agreed that there can be many angles to an episode like this one, and that it isn't just about AI specifically.

"Imagine that prior to the 1930s, much farming had to be done by hand, and you'd need skilled workers who trained their whole lives just to get it right. At some point, however, tractors, auto-tillers, crop-dusters and the like made much of that skilled work redundant - and less important."

That's true, but now imagine that the next machine isn't just able to replace the skilled worker, but also the tractor driver, and eventually the farmer as well. This is only a question of complexity, and in this respect my point would be about AI rather than machines' 'hands-on' capabilities.

"Daystrom was certainly licking his chops at the prospect of who he could replace with his inventions. But, I don't think Daystrom was going as far as to say humans would have no use. After all, he himself would certainly have job security if his computer was successful. "

There are two ways I could see this. One is that it's possible he was too blind to realize that the push towards replacement wouldn't just stop at Captains but would eventually include engineers and designers. The second I'll address below.

"DAYSTROM: There are other things a man like you might do. Or perhaps you object to the possible loss of prestige and ceremony accorded a starship captain. A computer can do your job and without all that."

I'm not at all convinced that he truly thought this 'new work' would be worth doing. In context it sounded more to me like he was essentially unsympathetic to Kirk questioning this progress. But the less charitable possibility, #2 from my other reply above, is that his response here was not entirely honest and that he knew full well that Kirk was going to be rendered basically useless. Put *even less* charitably, I might imagine that he potentially saw himself as part of a small clique of intellectuals who would be able to control this brave new world, with all the rest of humanity led by his machines. There's a great line from Frank Herbert's Dune which speaks of the great Butlerian Jihad as being caused by the following conditions:

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

Whether or not Daystrom was aware of it (and I think he might well have been) this scenario is very foreseeable once machines (and AI) are sufficiently advanced. The oligarchy controlling the advanced machines would effectively be overlords.

That M5 specifically turns to murder can be seen as a glitch, but I'm not sure it is, even within the context of the episode. We've mentioned a bit earlier in the thread how this may actually be a true accounting of Daystrom's real thinking, on which M5 has been modeled, which assigns human life a very low value compared to the new machine. It doesn't seem at all far-fetched to me that, long-term, the advent of machine supremacy could very well lead to the utter diminishment of the value of human life, and this episode does draw for us what happens when the humans lose control. Even the designer at a certain point can't stop what he's begun.
Jason R.
Thu, Jun 6, 2019, 5:43am (UTC -6)
"Those are some great questions, and I think you, Jason, and William addressed them very well, so I didn't want to get too much into it. I suppose my two cents would be that computers are great at following instructions but terrible at judgment (this episode even goes so far as to say the computer needs to utilize Daystrom's judgment in order to function, and even that's still pretty buggy). So my thinking is that the human brain's power to make the "right" decision is still unparalleled."

I wanted to address this point because it's an important one. The assumption that humans will always find something else to do that computers/machines can't, or that innovations like M5 will inevitably open up new opportunities for the human population, is wishful thinking.

Note I don't say with certainty that it's wrong in every instance - in the past it has held to *some* extent. But there is no real reason to believe that it will always be true, as if it were some law of the universe that human ingenuity will always triumph.

It's a fact that automation, more than outsourcing, more than any other factor, is squeezing humans out of the job market. There are certainly other forces at work to be sure but automation is the only factor that seems to only point in one direction. Faith in the triumph of the human spirit isn't a plan for a future where AI may be able to do everything from driving trucks to filling out your tax returns and writing your legal contracts. We are already very close to that point as we speak.

So when someone like Daystrom claims that he's freeing humans to do other things more suited to humans, that's no answer to Captain Dunsel, it's a hollow platitude, like telling someone "everything happens for a reason" after their wife dies. Or telling a 55 year old laid off factory worker that he should see it as an "opportunity" to start a whole new better career as he teeters into bankruptcy.

Whether M5 was truly the end of the human spirit or perhaps a waypoint where men like Kirk could still carve out a shrinking niche is beside the point. It was the writing on the wall - or it would have been, if M5 hadn't gone homicidally insane because... whatever.
Thomas
Thu, Jun 6, 2019, 6:48am (UTC -6)
"It's a fact that automation, more than outsourcing, more than any other factor, is squeezing humans out of the job market. There are certainly other forces at work to be sure but automation is the only factor that seems to only point in one direction. Faith in the triumph of the human spirit isn't a plan for a future where AI may be able to do everything from driving trucks to filling out your tax returns and writing your legal contracts. We are already very close to that point as we speak. "

Sounds great. Bring it on. I can't see any downsides to a future where our time no longer needs to be taken up by menial tasks.
Jason R.
Thu, Jun 6, 2019, 7:39am (UTC -6)
"Sounds great. Bring it on. I can't see any downsides to a future where our time no longer needs to be taken up by menial tasks."

I have never understood this idea of "menial work" being this terrible thing that people should seek to avoid. I am an educated professional, but whether it's been busboying, picking weeds or just cleaning my own house, I never considered simple work to be degrading. Maybe I'd feel differently if I had it as a full time job, but I think I know myself enough at this point to doubt that. If I am honest, if you took away the financial factors I might be happy working outside in a more physical "menial" job.

It's also true in my experience that the people who work in their old age, regardless of occupation, live longer and seem happier to me than people who retire. I would rather pick up trash or work a cash register in my old age than relax in a retirement home (even a nice one).

Work of any kind gives people dignity and purpose, not to mention income. The Daystrom idea of "freeing" man to do greater things isn't just wrongheaded, it is a trap that would enslave us, not make us free.
Thomas
Thu, Jun 6, 2019, 8:35am (UTC -6)
"It's also true in my experience that the people who work in their old age, regardless of occupation, live longer and seem happier to me than people who retire. I would rather pick up trash or work a cash register in my old age than relax in a retirement home (even a nice one).

Work of any kind gives people dignity and purpose, not to mention income. The Daystrom idea of "freeing" man to do greater things isn't just wrongheaded, it is a trap that would enslave us, not make us free. "

It's true that if you view life as having no purpose other than to work, which it's clear most people do, then if the only options are between relaxing and working the obvious choice would be the latter. For those in the minority that don't maintain such a view, the more time they have to pursue their chosen purpose the more likely they will be able to realise it. So I would say work is freeing only to the extent that one is chained to the notion that it is needed to give them purpose, dignity and so on. Which is rather like upgrading to a larger prison cell.
Jason R.
Thu, Jun 6, 2019, 8:46am (UTC -6)
Thomas in addition to work I also pursue certain hobbies rather passionately. I also have a family and take my leisure time seriously. I am not for a second advocating for a life that *only* involves work, which seems to be the false inference you have made.

Yet meaningful work (as opposed to pure leisure) is a necessity to regulate, structure and enhance human behaviour. It is a part of a balanced life.

Eliminating it will, more often than not, destroy a person rather than free him.
Peter G.
Thu, Jun 6, 2019, 9:23am (UTC -6)
@ Thomas,

"Sounds great. Bring it on. I can't see any downsides to a future where our time no longer needs to be taken up by menial tasks."

You sound awfully confident that in this scenario you would be living a life of leisure and pleasure. What if it's the opposite and you're made into a slave of those with all the power who control the means of production? And that's putting aside the possibility of a Brave New World dystopia where your entire life is planned for you, consisting of countless pleasures but having no say and no purpose. Many would think this sounds good, which is exactly why Huxley wrote it.

"It's true that if you view life as having no purpose other than to work, which it's clear most people do, then if the only options are between relaxing and working the obvious choice would be the latter."

Jason R. is right that you're creating a false dilemma here. But you also make the mistake of referring to this as a person's "view" of life. It has nothing to do with point of view about work. It is factual (one way or the other) that a lack of meaningful work would erode and destroy people, and this premise doesn't rely on anyone's opinion. Maybe some people would enjoy it more than others, but that has nothing to do with whether they are enslaved or powerless to decide anything meaningful. If you're put in a cell, it's a matter of point of view whether it makes you miserable, but not whether you're a prisoner.

That being said I do actually think there would be potential for meaningful tasks (but not paying work) in a post-scarcity society, but only given the premise that we somehow get there without the dystopian ending. Basically a Trek scenario would have to happen where the tools of humanity are shared more or less equally, rather than those in power having the run of the place. Most likely that doesn't happen without a WWIII.
Chrome
Thu, Jun 6, 2019, 10:48am (UTC -6)
I think the one Trek series that gets the sweet spot for computers is TNG, like Jason mentioned above. There, computers (and Data of course) are doing lots of important things like running the ship's routine functions. Yet the crew still uses its time well to either work on art, exercise, take martial arts, or play dangerous games in the holodeck. TNG doesn't purport that computers are perfect either, as the everyday computer glitch can often be deadly for the ship (see "Elementary, My Dear Data", "Contagion" and "Booby Trap"). I think striving for that right mix of human judgment and computer processing power is the one we should be aiming for to achieve any sense of a true utopia.
Jason R.
Thu, Jun 6, 2019, 11:27am (UTC -6)
But how do you do that in practice Chrome? In my local pharmacy now they have a bunch of auto checkout machines. The human employees mostly just stand there and help the customers use them. In effect they are training the store's customers to make their own jobs obsolete. It is kind of sad. The kinds of people who do these jobs (most of them are middle aged or older) aren't going to be retrained to become accountants or computer programmers - that's fantasy talk, wishful thinking. They are going to find something else (until it gets automated!) or go on welfare. And for what? So the store makes a little extra profit? A couple dollars on the stock price justifies destroying the economic fortunes of hundreds of thousands of workers?

But then you try to imagine the solution in your mind. Ok smash the checkout kiosks? Ban them? Make it illegal to computerize retail? Ok but what about ATMs? Why haven't we banned them? Should I be waiting in line for a teller every time I need a $20 bill? And what about online banking? And why not movie theaters too? They have been automated for years. Hell why aren't we using human telephone operators? Milk men?

Should we ban the automobile to bring back the buggy whip makers? This isn't slippery slope reasoning; this is just the inevitable logic of the situation. Trying to cram the genie of automation back in the bottle while trying to have a technically advanced society? This is more fantastical than warp drive and replicators.

The balance depicted in a show like TNG isn't just utopian, it's impossible. They can manufacture sentient holograms that can act as surgeons or lecture you on astrophysics or dance ballet with the knowledge and processing power of a scifi uber computer behind them but somehow only a person can pilot a ship? Uh huh.
Chrome
Thu, Jun 6, 2019, 11:47am (UTC -6)
@Jason R.

"The balance depicted in a show like TNG isn't just utopian, it's impossible. They can manufacture sentient holograms that can act as surgeons or lecture you on astrophysics or dance ballet with the knowledge and processing power of a scifi uber computer behind them but somehow only a person can pilot a ship?"

Actually, Data can pilot the ship (and Captain it too!). I'm not saying I have the answers to all your questions, but I don't think we should give up a laudable goal like balancing machine-work and human-work because there are potential pitfalls. Actually, recognizing the pitfalls like this episode does is a significant step towards making the utopia possible, in my opinion.
Peter G.
Thu, Jun 6, 2019, 12:06pm (UTC -6)
Actually, the least plausible thing in the Trek model of running a ship is navigation. The idea that they need the Captain to issue verbal orders and have someone manually key them in by hand, course correction by course correction, is so inefficient that it's a joke. I don't even think we should take that part of it seriously it's so absurd, and so it works on a narrative/drama level even though technically it's preposterous. Why would that be better than the computer automatically implementing the course corrections? How could one pilot (like Riker or Tom Paris) be better than another if it's just a question of keying in the commands? And if that's all it is, wouldn't the computer be better than either of them? And if it's not, then why is the Captain manually issuing navigation orders using coordinates? But anyhow it makes for good TV.

If we were going to take Trek seriously on a literal level for computer usage I'd say that TOS is the only one that does it right. In that series it's a given that AI is not strong enough to replace a human at most tasks, and while it can compute probabilities (such as when Spock asks it questions) it can't execute commands or make decisions. That leaves the humans to do all of that, which is a lot. The fact that we now know that computers will be better than that by the 23rd century is beside the point; in terms of internal consistency TOS was reasonable. By TNG's time, especially in showing us the Bynars and Minuet, it becomes basically implausible that the ship's computer can't replace most labor. This conceit is never addressed, which is ok, but it lingers as an "off-limits" area that the show has to accept arbitrarily, like warp drive and transporters. Data himself is an exception to this, and even then he's treated as a person rather than an example of AI. The premise there seems to be that without the hardware, which can't be copied, the programming can't be copied either. That sounds weird but there it is. By the time of VOY the AI premise becomes really absurd, with the ship's computers already using biological technology, and with a doctor that most viewers consider sentient and who does a harder job than the navigator does.
Thomas
Fri, Jun 7, 2019, 12:53am (UTC -6)
@Peter G

"Jason R. is right that you're creating a false dilemma here. But you also make the mistake of referring to this as a person's "view" of life. It has nothing to do with point of view about work. It is factual (one way or the other) that a lack of meaningful work would erode and destroy people, and this premise doesn't rely on anyone's opinion."

I don't disagree with this as a 'factual' standalone statement, although I would amend it to say that work that is THOUGHT to be meaningful can make us happier. With a little reflection I think it would be discovered that paid employment that is ACTUALLY meaningful is extremely rare, and perhaps it would be useful to bring up a scenario to think about here:

Persons A and B, both with similar skills, apply for a job delivering goods that 80 others have applied for. Person B gets the job while A is unemployed. When interviewed, B replies that it is meaningful work and he is happier, while A replies that he would be happier with the job. But what if A had got the job instead? The goods would still have been delivered. The only difference is that B is not doing it, someone else is. The work getting done has no influence on A or B's happiness, telling us that it is not meaningful that the work is getting done but only that a particular person is doing it.

This is a very common scenario, and yet it doesn't occur to most that the work they are doing is only meaningful because they are the ones doing it, and because they interpret it as meaningful. Who knows what would be the state of affairs if they hadn't done that work? Perhaps there is an accident due to a particular attribute of a worker that wouldn't have occurred with someone else doing that job. Perhaps the extra income allows the family to go on a holiday and their plane crashes and kills them. Perhaps the job is one which will cause environmental problems in the future we are not aware of now. Who are we to tell what is beneficial and what isn't?

Yes, lack of meaning 'destroys and erodes' people, and that is why some turn to destructive habits like drugs and terrorism, in which there is also found meaning. Finding meaning in something doesn't mean we should strive to provide that something or protect it, which is what we are seeking to do when we are perceiving work as an end in itself as meaningful.
Jason R.
Fri, Jun 7, 2019, 5:18am (UTC -6)
Thomas a small correction: we do not see work as an end in and of itself but working as the end.

In your scenario of course the person who got the job found it meaningful and not the person who didn't. Meaning is incidental to the specific work being done.

There are people whose job cleaning toilets gives them more meaning than some doctors get from saving lives in an ER.
Jason R.
Fri, Jun 7, 2019, 5:20am (UTC -6)
Since we are on a scifi forum I'll take this opportunity to quote the Minbari from Babylon 5:

"You see, we create the meaning in our lives, it does not exist independently."
Thomas
Fri, Jun 7, 2019, 6:12am (UTC -6)
I would have to agree with Delenn and co there - as I find I often do. So where does that leave us with the question of automation of work? I don't think it's the role of governments to protect completely subjective preferences by holding back technological advancement. If it were we'd still be stuck with the horse and carriage and the private automobile would never have existed. And no doubt there was plenty of paranoia and fear around that particular change - what if cars started driving themselves? Can we trust them? Surely professional drivers are more experienced and trustworthy, and how will they survive?
Jason R.
Fri, Jun 7, 2019, 7:10am (UTC -6)
Agreed Thomas - except what if that road leads to 90% unemployment, depression, social disintegration and resulting chaos?

I confess I don't have a solution to this problem. Other than wishful thinking answers (automation will create new opportunities for people!) universal basic income is the only one that comes to mind. But to me it comes across as desperation - a shot in the dark, rather than a real plan. Nobody actually knows what a post-employment society would look like or how humans would adapt to this.
Chrome
Fri, Jun 7, 2019, 9:07am (UTC -6)
It’s funny because the U.S. Congress was having this same discussion when the Cotton Gin displaced slave labor 150 years ago. How little things change. :-)
Peter G.
Fri, Jun 7, 2019, 10:06am (UTC -6)
We're confusing two different issues here, one of which is a *prediction* that automation will cause considerable strife rather than being our salvation - at least at first. The other issue is whether the government should do something about it, such as banning the horseless carriage. I don't think the government realistically *can* do anything about it in the long-term. Rather, I think it's the people who will have to change their attitude towards employment and work, which in turn will cause the government to follow, lagging behind. I doubt the governments will lead the way in advance of public sentiment on this topic. Once public morale gets too low something will be done, and until then it will be every man for himself.
Chrome
Fri, Jun 7, 2019, 10:53am (UTC -6)
Not to be pedantic, Peter, but a proactive government can be an agent of automation. For example, the airport near my home now uses kiosks to look at someone's passport, check their criminal and other background, get their fingerprints, and find out their purchases abroad. This was not done because airlines demanded it, but rather because it saves the government money and passengers find it less intrusive than speaking with a uniformed customs officer.
Peter G.
Fri, Jun 7, 2019, 11:31am (UTC -6)
@ Chrome,

I didn't mean for the government to participate in the sense of helping bring automation about. I meant for government to participate in mandating significant alterations in the public monetary system. One example of this would be the introduction of a basic income, as Jason R. mentioned, although perhaps it's not the only option. I could also imagine a return to the old trading company system, where the government could create work for people and pay them on a credit system, where they'd be entitled to spend it as cash. This already exists to an extent re: government employees but this could be greatly expanded. But then again making the government the main employer also has its own terrible risks. But I'm just offering it as an example of what I mean.

My main point is that Captain Dunsel doesn't only mean that people become obsolete and sullen, but in the short-term at least also have no reasonable means of securing an income. As this increasingly becomes the case (as it must do) the government may increasingly be pressured to provide an alternative system.
Thomas
Fri, Jun 7, 2019, 5:19pm (UTC -6)
@Peter G "We're confusing two different issues here, one of which is a *prediction* that automation will cause considerable strife rather than being our salvation - at least at first."

Whose prediction is this? It's just as likely that automation will play a role in our salvation. We've already discussed how work is currently seen as salvation. If automation can liberate us from that view, and cause us to look for happiness within, then it may very well be a positive shift.
Jason R.
Sat, Jun 8, 2019, 6:09am (UTC -6)
Thomas I don't understand what you mean by a life of "looking within" versus working. Could you please explain this concept? Even in TNG, where mankind only seeks to "better itself" people clearly work in the same manner they do today. People obviously still have jobs. They just don't work for money. Is that what you are getting at?
Thomas
Sat, Jun 8, 2019, 8:35am (UTC -6)
No, nothing to do with money. Looking within is simply in contrast to looking without. You said earlier that if people didn't work they would be miserable. This is looking outside oneself for happiness, and it is the same as in TNG where it is believed that salvation (as we have been calling it) lies somewhere "out there", in this case in the form of making mankind better. Asking whether that's really true, whether my searching and striving has ever brought me true happiness, would be an example of looking within.
Jason R.
Sat, Jun 8, 2019, 1:14pm (UTC -6)
"Asking whether that's really true, whether my searching and striving has ever brought me true happiness, would be an example of looking within."

I can't speak to "true happiness" because I don't know for certain what that is and how it is differentiated from the everyday kind.

But nothing I have in my life that makes me happy, from my wife and daughter at the top of the pyramid to close friends and down to material possessions, came into my life without striving, struggle, dare I say "work".

Whether it's getting up the nerve to ask a woman out, to giving a presentation to clients, to pressing on trying to get pregnant after a heart-breaking miscarriage - it's all "work" some paid some not. Some pleasant (but no less difficult!) and some boring.

What you describe sounds like being high or stoned. I truly don't understand.
Chrome
Sat, Jun 8, 2019, 2:16pm (UTC -6)
I don’t really have a horse in this race, but instead of labeling Thomas’ position pejoratively as “being high”, there’s something to be said for seeking happiness in ways outside of manual labor. A machine can drive my car, but can it make me enjoy the trip? It’s an interesting question.

Jason, those things you mention all sound like very important achievements that machines aren’t capable of - well maybe the presentation - though I think people are more convinced by an advocate in the flesh.
Jason R.
Sat, Jun 8, 2019, 2:24pm (UTC -6)
Well Chrome to be fair Thomas went well beyond saying we shouldn't do manual work. He said we shouldn't "strive" for things outside ourselves and be content from within. I feel like I need to channel Kirk on this one because I think I know what he'd say about it. Indeed, when I mentioned being high I was thinking about the Landru worshippers and other so-called "arrested" cultures (the Organians would be perfectly on point if they weren't uber energy beings incidentally).

But I will admit I don't really understand what Thomas is getting at so I'll leave it to him to explain what he means.
Chrome
Sat, Jun 8, 2019, 3:14pm (UTC -6)
Fair enough! I don’t know exactly what he means about exploration not being satisfying. Nor do I think machines can replace human curiosity to explore, which I think Kirk would passionately argue so I’m with you there.
Thomas
Sun, Jun 9, 2019, 1:59am (UTC -6)
Jason R. - I think I'd know what Kirk would say too, but he was someone who got his kicks out of interfering with alien cultures who largely didn't ask for it and didn't want it. Back when TOS was made that wasn't seen as clearly, because it was (and still is) the same policy as the US and the UK and others, who interfered with and made colonies out of less powerful nations for their own benefit. At the time they may have thought they were doing good, but it was only seen later that the 'good' was their own. Similarly, many who watch TOS now - 50 years later - can't believe how egoistic Kirk and the ideals of the Federation are, and how the creators wouldn't have seen that more clearly.

So I'm certainly not saying 'don't strive'. I'm saying if we look closely at why we are striving we may not like what we find. Much that seemed noble or worthwhile at the time may turn out to be not so.
Sloan
Sun, Jun 23, 2019, 12:58am (UTC -6)
I've been doing a complete rewatch of TOS (actually quite a few episodes I hadn't seen before) but this is the first one that I felt the need to comment on. Jason mentions it somewhere further up the comment thread but I didn't see anyone really address it -- the M5 is making logical decisions up to the point where it decides to destroy the unmanned freighter. What possible logic or programming would make the computer decide to do that? Even if the M5's decision making is based on Daystrom's "engrams" and Daystrom is having a mental breakdown... why is this the first time M5 behaves in an illogical manner? That ship is not a threat. I suppose an argument could be made that the ore freighter is an inferior version to M5 since it seems to be an unmanned drone, but if Daystrom felt that way he would be killing everyone around him for not being a genius like him...

Dave also asks the question I was wondering by the end of the episode... The Daystrom Institute and The Daystrom Award are named after this dude? I guess he must really atone for murdering a crew and a half's worth of Starfleet's finest after he gets out of that rehabilitation center.
Peter G.
Tue, Jun 25, 2019, 9:34am (UTC -6)
@ Sloan,

I think you answered your own question. And actually it's a good point that I hadn't thought about before: M5 destroys the unmanned drone because it's an inferior AI to itself, and all we need to do is to realize that it hates that which is inferior and wants it to die, just like Daystrom hates the inferior humans who hold him back. And I do think his motive overall is to punish them for being inferior, although not to murder them per se. But M5 is a 'child' and so doesn't have the restraint he does in playing the long game.
William B
Tue, Jun 25, 2019, 12:32pm (UTC -6)
You know, I've been thinking some more about this, and I think I read Daystrom a little differently than Peter. I agree that he sees himself as different from other ("less intelligent") people, has some real arrogance, and seems to harbour egotistic condescension. But I think that, as much as narcissism, this stems from deep insecurity:

DAYSTROM: We will survive. Nothing can hurt you. I gave you that. You are great. I am great. Twenty years of groping to prove the things I'd done before were not accidents. Seminars and lectures to rows of fools who couldn't begin to understand my systems. Colleagues. Colleagues laughing behind my back at the boy wonder and becoming famous building on my work. Building on my work.

This dialogue shows both -- but I want to emphasize "colleagues laughing behind my back at the boy wonder" here. Daystrom succeeded wildly early in life, and then after that felt empty. It's a common feature of prodigies; a somewhat less extreme version is Dr. Stubbs in TNG's Evolution, who seems worse at first glance (is not as much in hiding/denial as Daystrom) but ends up going far less crazy. His whole value was derived from other people seeing him as having accomplishments, and then without those accomplishments he had nothing left. I guess I want to emphasize here that this problem is not purely egotism, but that people who achieve highly early in life are sometimes effectively trained to view everything about themselves *except for* their achievements as worthless.

So here's the paradox, a connection that I just realized: Daystrom's problem is, in certain respects, the same one as Kirk's! Daystrom's first invention made *himself* redundant; he basically revolutionized all computer systems, with a technology so advanced that he basically put *himself* out of work, because he would never again create an invention of this calibre! Daystrom, as a result, struggled with his own redundancy for decades, until he came up with a new invention. Which means that Daystrom needed to continue to prove his worth, again and again, and could not stand the feeling of being useless, which is the thing he is ushering in for Kirk et al. The main difference IMO is that Kirk is capable of self-awareness, which Daystrom is not:

KIRK: Am I afraid of losing command to a computer? Daystrom's right. I can do a lot of other things. Am I afraid of losing the prestige and the power that goes with being a starship captain? Is that why I'm fighting it? Am I that petty?
MCCOY: Jim, if you have the awareness to ask yourself that question, you don't need me to answer it for you. Why don't you ask James T. Kirk? He's a pretty honest guy.

This makes me think, too, that the issue with the M-5 is not *purely* that it wants to RULE EVERYONE. In fact it's that it needs to *defend itself*. The thing is, technology, at least unless some AI is created which is accepted as having rights, is basically disposable unless it is useful. The M-5 has to demonstrate *its usefulness* in order to continue existing, which means that it has to have threats to eliminate, in order to prove that it is necessary to eliminate threats. "The unit must survive." It is, in a twisted way, genuinely self-defensive for the M-5 to see threats everywhere, because either something is an actual active threat to it, or it is "not a threat," in which case M-5 is no longer as necessary, and thus is more likely to be thrown in the dustbin (as Daystrom felt he was). The reason I mention this is not to make excuses for Daystrom, but because it's a slightly different "disease" with perhaps a different "cure." I think M-5 sees threats everywhere because Daystrom, on some level, sees threats everywhere -- because he is, on some level, deeply afraid of whether he has any value if he can never produce anything of value again.

Anyway, I think the best case scenario is to do what Kirk does: to recognize and value the desire to be productive and useful, while also keeping an eye out for what is *actually* good for others (and oneself), besides a need to prove one's usefulness. What this means in practice is difficult. As the discussion above has pointed out, the continuing way in which technology makes various human tasks redundant has all kinds of implications, and it's also not so clear how to stem the tide or whether that'd even be desirable.
Chrome
Tue, Jun 25, 2019, 2:07pm (UTC -6)
@William B

I love the Stubbs comparison. Stubbs' lamentation of the decline in interest in baseball is somewhat illuminating for this situation. Baseball was surely a great hit in the 20th century, with players becoming household names and legends because they could inspire others with their abilities. But according to Stubbs, baseball fell out of interest because people lost patience for it, and instead became interested in faster games. We might extrapolate, then, that in the world of scientific discovery - particularly in Trek - there is a sort of rat race to outdo the other guy lest one be beaten by someone faster and better. Scientists with even early great success fall victim to the idea that they need to keep upping the ante or lose their brainiac status in Federation society.

This makes Daystrom sort of a tragic figure. He did everything right once, and really made a lasting legacy (people have noted that the Daystrom Institute is still important in the 24th century). But during his own life, he suffered from living in the shadow of his own success. It makes sense that he'd be talking to Kirk about losing status, when status was something he himself was fixated on. The M5 was his chance to finally one-up himself and stay useful in his lifetime.
Peter G.
Tue, Jun 25, 2019, 2:29pm (UTC -6)
@ William B,

That's an interesting comparison, but strangely I never got the idea from The Ultimate Computer that Daystrom was actually a dunsel himself trying to prove otherwise. Maybe it's because the sort of thing he designs is so advanced, but I don't think I would have expected the sort of 'inventor' he is to be able to rapidly produce new systems to keep his fame updated. The fact of the matter is that some things simply take so long to produce and refine that they will occupy your whole career. Einstein is a great example of this. While he did do various sorts of work over his lifetime, for the most part his idée fixe was relativity, and he seemed to spend the majority of his life refining it, fighting for it, and trying to explain it to people and seeing if the experimental data fit. I've read stories of physicists going to seminars where Einstein would predictably take various physics issues and bring up relativity to see if they were consistent with it. It's not because he was a one-hit wonder (and history certainly doesn't remember him that way) but rather because that one 'theory' required a lifetime of work.

Similarly, from what Daystrom describes his chief lament isn't that he was washed up but rather that his inferior colleagues laughed at him while not even understanding his theories from 20 years earlier! It's almost like they were boasting of their inferiority, that he was too weird to take seriously. And yet I seriously doubt they were scoffing at the duotronic computer system, and so therefore I have to assume that they were scoffing at him, personally. He seems to imply that they thought his inventions were an accident or something, but realistically I think "boy wonder" is the big takeaway from that speech. If we remember from TNG S1-2, Wesley was often derided by adults who didn't know him and didn't take him seriously *because he was young*, not because he was a one-hit wonder. He always had to prove that being young didn't mean that he couldn't solve problems with the big boys, and I expect that if Daystrom had revolutionized AI at the age of 15 or something that alone would have caused him to never be taken seriously no matter what his accomplishments were.

Beyond that, it strikes me as likely that the "20 years" he spent proving himself were probably related to how complicated and long the process would be to eventually develop M5. It's not like he was spinning his wheels for 20 years after having made himself redundant; I think it's that what he was doing was *so* advanced that it would take him 20 years just to progress to the next step of computer development. Since no one understood his work anyhow it would mean that they wouldn't think he was really accomplishing anything with a 20 year hiatus; they'd think that because it would suit their vanity to pretend that his teenage success was an anomaly, rather than to have to admit that he was so far superior to them that they were comparatively worthless. I suspect he really saw it that way. It's no small thing to call yourself "great". I really don't think it's an inferiority complex thing; it seems more like he sees himself as a technological Alexander the Great.
William B
Wed, Jun 26, 2019, 2:12am (UTC -6)
Good points, Chrome.

Peter, that's fair. I'm basing my read though somewhat on McCoy's interpretation:

MCCOY: The biographical tape of Richard Daystrom.
KIRK: Did you find anything?
MCCOY: Not much, aside from the fact he's a genius.
KIRK: Genius is an understatement. At the age of twenty four, he made the duotronic breakthrough that won him the Nobel and Zee-Magnes prizes.
MCCOY: In his early twenties, Jim. That's over a quarter of a century ago.
KIRK: Isn't that enough for one lifetime?
MCCOY: Maybe that's the trouble. Where do you go from up? You publish articles, you give lectures, then spend your life trying to recapture past glory.
KIRK: All right, it's difficult. What's your point?
MCCOY: The M-1 through M-4, remember? Not entirely successful. That's the way Daystrom put it.
KIRK: Genius doesn't work on an assembly line basis. Did Einstein, Kazanga, or Sitar of Vulcan produce new and revolutionary theories on a regular schedule? You can't simply say, today I will be brilliant. No matter how long it took, he came out with multitronics. The M-5.
MCCOY: Right. The government bought it, then Daystrom had to make it work. And he did. But according to Spock, it works illogically.

It may be that he is wrong, but I think McCoy's point is that this is a predictable outcome for someone who completes a lifetime's work at 24 -- that it is actually on some level unbearable to never be able to recapture that success. Rationally of course no one can expect to produce more than one scientific or technological innovation in a lifetime, which is what Kirk is saying, but that is different from Daystrom's subjective experience of his own worth. This is not confined to scientists and engineers. Child stars often burn out and get sucked into drugs; authors whose first novel is wildly successful sometimes become unhappy recluses. Orson Welles continued working but frequently resented being tied to Citizen Kane forever. Daystrom was not spinning his wheels, but I believe he was unhappy and dissatisfied (as many child prodigies become). I am not even claiming that Daystrom ever was laughed at by colleagues -- it could well have been paranoia -- but merely that he learned too early in life to tie his whole sense of self worth to his "success" before having the maturity to understand what that meant.

The other thing is that the way Daystrom repeatedly emphasizes "self-defense" in the M-5's behaviour makes me think that Daystrom himself feels very threatened, since the M-5 is based on him. This is not incompatible with his paternalistic belief that he knows what's best for all of society, but I get a certain impression of emptiness, disappointment, and insecurity-based fear from Daystrom, under the bluster.
Peter G.
Wed, Jun 26, 2019, 11:53am (UTC -6)
Some good points, William. The thing about McCoy's analysis is that it rests on the assumption that you're dealing with a regular guy, which sort of begs the question. If Daystrom really is deranged or a megalomaniac, then an abstract statement about what an arbitrary boy wonder might have gone through wouldn't really apply. It's tough to guess whether the writers intended McCoy's comments to be authorially authoritative, or whether we're meant to wonder whether he's right. But given how Daystrom ends up by the end of the episode, I personally think there's something dangerous about him.

One interesting issue is Daystrom's decision to use his own mind as a model for M5. Clearly the previous models failed because the pure AI programming was insufficient for some reason, and so he resorted to using his own brain as a template to 'skip ahead'. We've talked above about why that may have caused M5's problems. But another question is why he felt he needed to do that in the first place. Is it because multitronics were truly just too advanced for him and he had to 'cheat' to make it happen? That he couldn't tolerate failure? That would certainly support the fear/inferiority theory you posit. Or could it be that M1-4 worked okay but weren't as brilliant as he would have wanted them to be? Perhaps they lacked what we might call ambition, or a desire for greatness. It's interesting that he calls M5 "great", just as he is great. That sounds almost too specific to simply mean "well-designed". It almost sounds like he thinks M5 is great in the way a great figure in history is great. Is it because deep down he needed it to be more like him in order for it to qualify as great? If so, that would support my megalomania theory.

Some decent options here, and I'm not sure I can say which applies best. My basic assumption about people in general is that their innate bias is to think they're better than everyone else, and this egoism is something to combat always. For someone with objective reasons to think he's better, that problem is even worse. I'd almost be shocked if he *didn't* secretly have a god complex.
Sarjenka's Brother
Fri, Jul 19, 2019, 8:10pm (UTC -6)
Only the wildly inappropriate ending keeps this from being a four-star episode.

Definitely the best of the Original Series' many "computer takes over" episodes.


Copyright © 1994-2019 Jamahl Epsicokhan. All rights reserved.