Star Trek: The Original Series

“The Ultimate Computer”

3.5 stars.

Air date: 3/8/1968
Teleplay by D.C. Fontana
Story by Laurence N. Wolfe
Directed by John Meredyth Lucas

Review Text

Starfleet informs Kirk that the Enterprise is to serve as test subject for the new M-5, a groundbreaking advancement in computer technology, designed to make command decisions faster than captains and reduce the number of people required to run a starship. An astute allegory for contemporary automation at the expense of "the little guy," this episode's first few acts are superb, as Kirk finds himself debating whether he's selfish for wanting to keep his job at the expense of technological progress, or if it's a matter of actual danger or principle.

A wonderfully acerbic debate between Spock and McCoy about the role of computers is also well conceived, ending in Spock's well-put notion to Kirk, "...but I have no desire to serve under them." Following the M-5's initial success, the scene where another captain calls Kirk "Captain Dunsel" is the episode's best-played and simultaneously funny and painful moment. (In a word, ouch.)

Once M-5 runs out of control and hijacks the Enterprise, resisting attempts to be shut down in acts of self-preservation (including murder and eventually full-fledged attacks on other Federation starships), the episode turns to a frightening analysis of M-5's creator, Dr. Richard Daystrom (William Marshall), a man obsessed with outdoing his prior successes, who has created a monster that he has come to regard as a child. Though it pushes a little hard toward the end (Shatner and Marshall going a bit overboard), the story is a compelling one.

Previous episode: The Omega Glory
Next episode: Bread and Circuses



Comment Section

128 comments on this post

    This is an excellent episode, but its strong characterization of Kirk falls down with an ending that finds him grinning and chuckling at Spock and McCoy's verbal jabs as the music takes us out on an upbeat everything's-peachy-again tone.

    The problem is that the entire crew of the Excalibur has just been murdered, along with a good chunk of the crew of the Lexington. Some 500 men and women dead, a horrific tragedy that's made even worse by the fact that the Enterprise was the instrument of their destruction. There's no way the bridge crew ought to look this happy in the closing moments, and Kirk, knowing that the ship he so loves was used to do such a terrible thing, ought to be truly anguished.

    If this had been a first-season episode it probably would have ended on a somber note, but the second season got considerably lighter and "The Ultimate Computer" was only one of a number of eps that year to end with inappropriate humor.

    Brundledan expresses my thoughts and then some. The episode itself fails to take its own lofty premise seriously. Other things that I disliked were:

    - Kirk's apparent crankiness toward Daystrom even before anything went wrong, and even after some lengthy and deep on-screen self-reflection.
    - The implausibility of a major military organization like Starfleet allowing this test to be carried out without proper testing, training, and precautions, as well as the implausibility of a computer as flawed as the M-5 ever being granted a test run.
    - When Commodore Wesley assumes that Kirk is responsible for the Enterprise's attack on the Excalibur, even though he browbeats Kirk throughout the prelude to this mission ... and then suddenly comes to his senses and calls off the subsequent attack, thereby killing off a lot of the tension and drama that had been built up for a climactic scene.

    I get that some of these elements were put in place to set up the story as a drama of one man's (Daystrom's) obsession with his creation, but this was an element that seemed to coalesce rather late within the story, and lacked relatability.

    @Alex: I think you can chalk up Starfleet letting Daystrom test the M5 without checking its flaws to his reputation. Also, Kirk understood almost immediately that the M5 could cost him his job, so being cranky toward Daystrom made some sense.

    But you're absolutely right about Wesley. His first scene in the transporter room is off-kilter ("Hey old friend -- congrats on losing your job!"). Then (as you noted) it's weird that he would blame Kirk for the attack later but also think enough of him to hold off on firing on the Enterprise at the end.

    But the entire episode is oddly characterized. Even for Shatner, Kirk is over the top in this one. His scream of "DAYSTROM!" near the end was really strange. Nimoy does his normal nice job, but even Kelley seems like he overacted ("That thing just destroyed an ore freighter!!!").

    This is an odd episode. There's a lot of really strange characterization -- like the opening scene with Commodore Wesley. He acts quite odd to a friend who, essentially, is losing his job.

    Also, the Kirk/Daystrom stuff at the end of the episode is just over the top.

    A favorite of mine but there is a plot aspect that nearly kills it for me.
    This is not Kirk's job. The Federation isn't merely a military organization...We're out there exploring and introducing our flavor of diplomacy and friendship to all the other species 'out there' by choice. We're out there because WE WANT TO BE. Starfleet personnel, Admirals, Captains and indeed, you would believe their crew as well, are living their dream. M-5 would simply be another tool at our heroes' disposal.
    Kirk, et al, are explorers and ambassadors in the final frontier performing interpersonal functions with other species that a computer could not begin to assume. No one's 'job' is in danger. It's an artificially created and inflated plot point.
    Ugh.....

    I liked the episode overall but the whole Captain Dunsel thing was very off-putting. It was quite inappropriate and disrespectful of a Starfleet commodore to call another captain something like that.

    Hello,

    My name is Captain Dunsel.

    I'm sorry my command of the Enterprise did not go well.

    I've been demoted to ship's junior cook, under some dude named Neelix.

    The whole episode is a big cheat, and epitomizes Roddenberry's enduring failure to take AI seriously. It's a cheat, because the only way Roddenberry finds to nullify the AI as a threat to human commanders is to arbitrarily make it homicidal and insane.

    Roddenberry's approach to AI, as exemplified in this episode, always struck me as cowardly and backward. There is a principled argument to be made for the need for human beings, even in the face of increasingly capable AI, but this is not it.

    It was not until TNG's "The Measure of a Man" that the series really began to take AI seriously, if only a little, and I'm assuming by that point Roddenberry was on the way out in terms of his influence on the show.

    Well, Jason R., if the story was intended to demonstrate the superiority of the human factor versus machine intelligence, then that thesis is undercut by having the machine's flaw being traits copied from its human creator. So maybe the story is more complex than simply "human 1, AI 0." Or it is that simple and they botched it.

    Yeah Grumpy, actually I thought about that irony after I finished this comment. While that is a clever interpretation, I just think it's giving too much credit to the writing. The plot device that makes the computer insane is really incidental.

    Roddenberry doesn't ever seem to be willing to confront the idea that a computer might very well be superior in some respects to humans. His portrayal of AI in general has been lackluster. Data was the first serious attempt to do so, and even then Data's central premise was a desire to be more human.

    I again don't fault Roddenberry for believing humans to be inherently superior to AI - I just fault him for lacking the imagination to present this thesis in a halfway intelligent fashion.

    @ Jason R.

    I think the point of this episode is pretty clearly that an AI's effectiveness is limited by the limits of its creator. If humans are flawed that means an AI will be flawed as well; its weakness will be a reflection of human weakness, except you can't reason with the machine.

    While I agree with you broadly that Trek in general avoids the subject of AI, within the confines of this specific episode I think it approaches its subject matter very well. What Roddenberry thought at the time was likely quite correct, which is that limitations in computer programmers would create severe limitations in computers. I doubt very much he foresaw the possibility of AI as a self-evolving system, even to the point we have now where a recursive program can teach itself activities such as playing Go, so that it doesn't have to be completely programmed from the get-go. And he surely didn't think of AI in terms of quantum computing or bio-neural circuitry that could do computations at a godlike speed.

    I personally prefer to attribute this lack of vision to the limitations on AI as they existed in the 60's, and to the fact that Trek is predominantly supposed to be a person-oriented show rather than hard sci-fi that deeply explores what various tech can do. It is for this reason that I prefer physics novelties such as we see in TNG to be backdrops to episodes rather than the central plot point to be solved. "Cause and Effect" is a great example of what I like, because it employs a novel physics idea as a backdrop but the real action is a character-driven mystery story.

    @John Walsh The way they talk about it in the episode, it seems clear that there would still be a crew; they simply would not need captains and certain personnel.

    Sigh, Kirk outwitting a computer... again.

    I mean, I get what they were trying to do with this episode, and I understand the uncertainty of how these newfangled computers would fit into society back in the 60s and all, and I realize it was a hot topic in sci-fi, both profound and silly, but man, there's nothing that makes an episode look more dated. All these old shows and stories assumed you would feed a few pieces of information to a computer, and it would make surprising connections and leaps of logic that would shock and amaze people. Hey, maybe that will still happen in the future, who knows? But our modern computers seem to be on par with the normal Enterprise computer these days, and this idea is still well beyond our comprehension. So it just seems bizarre that all these SF shows completely missed all the ways computers would actually impact our lives and jumped straight into these superbrain stories. And, just like all those other SF stories, the ultimate computer ends up being evil.

    And that's what really bugs me, makes me think this episode is not a true classic. The computer just goes straight to evil. The character arc or struggle or theme of the episode was Kirk worrying about being replaced, being obsolete. That's a fair story to consider, so the antagonist of the plot (the computer) should be one that complements and reinforces this struggle. But instead, it just acts as a straw man. The resolution should be that Kirk isn't obsolete because he has some unique quality that the computer doesn't have. Think about, for example, the Corbomite Maneuver, where only a bluff would work to save the ship. Or all the old Kirk speeches to get the antagonists to change their mind. A proper resolution would show that Kirk had something above the computer, like the battle of wits between him and Khan. Instead, he never has to show why he deserves to be a captain, because it becomes clear that the computer is crazy evil. Just shut off the computer, abandon the project, and fly off into the sunset, and no more self-examination of Kirk. Is that really what we wanted?

    Thus, a potentially interesting idea went to waste. Too bad.

    @ Skeptical,

    You may argue that the episode didn't create the greatest case for man over computers, but I think you would be wrong to suggest that it failed to create a case altogether.

    The point of the episode isn't just that Kirk is smarter than the computer, or that no computer can match a human in creativity. That may or may not be true, but it isn't exactly the point. TOS always has as a running theme that logic and computation alone isn't enough to make a great person or a great society; this is reflected repeatedly in the Kirk-Spock-Bones trio. Kirk isn't just logic, but is logic coupled with humanity and compassion (Bones + Spock = Kirk). The fact that the episode (as usual) ends with the computer being 'outsmarted' is a tidy way to wrap things up, and I agree that it's a weaker ending than it should have had. But the wrap-up isn't really the point as I see it. The point is that a machine will follow its logic to the end and not have any fallback position grounded in compassion, sympathy, or feeling. It's sort of like a psychopath, if you will, in that it will not have internal mechanisms to stop it doing bad things if they seem best.

    Now, it's true that if the programming is good then the output should be ok too, and likewise if there is a bug (a la Skynet) things will go pear shaped and the computer will not be able to be reasoned with past that point. But more to the point, the Trek theme in TOS is that advancing humanity isn't about technology or capabilities, but primarily about advancing values and how we treat each other. This is an area in which the inclination to push capability will not only be a sidetrack to advancing humanity but will in fact hinder it if pursued incorrectly. Take, for instance, the Eugenics Wars, where in an effort to 'advance humanity' in capability a monster was created instead. Likewise here, where a captain more sophisticated than a human is created to obsolete humans, just as Khan wished to obsolete homo inferior. The danger outlined in "The Ultimate Computer" is along these lines, and although it didn't fully realize the treatment of this issue I do think it's in there and is still pertinent to this day; maybe more so than even it was in the 60's, when human obsolescence was still science fiction.

    Interestingly, I was going to make something of the opposite point to Peter, though in a way that is not inconsistent. The episode actually suggests that the M-5's value is not just because it can do normal computer things, but because it goes beyond usual computing into the domain of people -- creative thinking and all that. Specifically, this is because Daystrom programmed it with his own memory engrams. When the computer goes haywire, it is because it has inherited Daystrom's flaws as well as his strengths. I tend to see the message of this particular element as that computers are still created and programmed by people, and so will always be limited by the people who made them. The computer's apparent usefulness was that it could match human genius without flaws, but that was wrong, and the reveal that there is no machine utopia allows Kirk's imperfect humanity to be back in command. There is a similar story in TNG where Data and Lore "inherit" some of Soong's flaws, though this is much more pronounced in Lore and Data was deliberately created to be aware of his limitations and to want to coexist with rather than dominate humans.

    To build off Peter's point, Daystrom going insane may be a way of showing that the danger of thinking that machines can supplant, rather than supplement, humans is that the humans who subscribe to this may develop their own machine-like flaws. Daystrom's inability to think of the universe in terms besides efficiency and the attainment of his goals (and his inability to conceive of his own worth) make him kind of computer-like, as does his social isolation. This ends up enhancing his human flaws, which again seems to result from hanging his identity on a dream of escaping from human flaws entirely. I think the end can be both that Daystrom has made the mistake of thinking like a machine, and that M-5 is dangerous because it "thinks" like a person, though it is maybe a bit complicated.

    Well, I still stand by my statement that the episode's implementation of its theme was not done well at all.

    William, you seem to state that the theme shows through comparing Daystrom's erraticness to the M-5's going cuckoo. And yes, perhaps that is the reason the M-5 did poorly, since it was his brainwaves that he used to create the M-5. And therefore, you say, the theme is that the people who want to supplant humans have their own problems that preclude them from being the best judge of humans. Well, ok. But still... well, it's obvious the theme the episode wanted to show was that humans are not going to be made obsolete by this computer, what with the whole "dunsel" bit. And if William's interpretation is true, then I, like Garak and the Boy Who Cried Wolf, see a different moral. If the importance is to show the connectivity between Daystrom and the M-5, then the moral of the story isn't that computers are inferior, but rather that better humans should be used as the template for computers.

    After all, if the fault of the M-5 is just that Daystrom was erratic, why not try the M-6 with Kirk's brain? Would that computer be perfect enough to replace human captains? I don't think the episode answered that. Which is why it's a bit of a straw man story - it's not really Kirk vs The Ultimate Computer. It's Kirk vs the Insane Computer. And that's not a fair comparison.

    Peter, I don't really disagree with what you say. But I just feel a bit more strongly than you that it's weak. Yes, computers will follow their logic to the bitter end, which can seem horrifying. And yes, it does mean that there should be some human oversight. Which honestly should have been obvious, but of course they didn't show it. Naturally our superintelligent future means people will test a complex new computer by giving it complete control of a freaking battleship that has enough firepower to exterminate a planet, and make the only kill switch an electronic one that the computer can hack. Perhaps it should be Starfleet Command that should be seen as insane...

    But I digress. My problem is that I, Robot came out in 1950. The Three Laws were first introduced in 1942. While Trek may have been blazing a trail for television sci-fi, this episode feels 25 years behind the times when it comes to sci-fi in general. There should have been safeguards put in place on that computer. There should have been better logic programmed into it. But apparently, Daystrom didn't think of it. And apparently, Starfleet didn't demand it before thinking about putting it in one of their ships. It just wasn't very intelligent plotting, and so it's tough for me to care about the theme when it relies on dumb plotting.

    (With that said, I will point out that this episode came out about a month or so before 2001, so it's not the only visual medium showcasing murderous AI. But HAL is a lot more memorable, so I'll let that one slide...)

    Skeptical, it's a good point that the M-6 could be imprinted with Kirk's brain. In my interpretation, the episode erred by having Daystrom go bonkers at the end, because I don't think the point was even that Daystrom was a particularly crazy or bad individual, so much that any computer created by humans (let's narrow the focus from aliens here, this is TOS and pretty human-centric) will inherit human flaws. The issue then is lack of balance. No individual human would be capable of running the Enterprise not just because of the physical or even computational demands, but because humans need constant checks and balances to keep from losing perspective. Kirk is in command, but he has Spock and Bones to constantly play off, and Kirk listens to them. But even if it weren't for that, Kirk has humility not to expect that he can run everything by himself -- or, indeed, the humility to recognize he's not perfect. Actually since Kirk sometimes has mild megalomanic traits, kept in check largely by his close attachment to Spock and McCoy, an M-6 designed on Kirk would also run into the same problems. The delusion is not that the M-5 is capable of running the ship's systems, but that it should and that its "judgment" will remain superior to humans', when it is still based on humans and so will likely not be a magic way of evading well-known human flaws. I think this is part of the point in 2001, as well -- HAL is a tool crafted by humans, and so his programming is still susceptible to "human error," just at a different point and level than human mistakes. Or, rather, HAL works perfectly according to the code as designed by its/his human programmers, and the underlying flaws in their thinking only become exposed once it runs its course, similar to (say) the underlying logic of the doomsday machine system (including both the tech circuitry and also the loyal soldiers following orders) in Dr. Strangelove.

    I do see what you mean that it's a strawman because Kirk doesn't actually face The Ultimate Computer. But...I think the episode's point is that there *is* no "Ultimate Computer," or at least it's far further away than people think. If we define the Ultimate Computer as a computer capable of running a starship *technically*, then Kirk could outthink it with lateral thinking as is the case with most of the computers he faces; if the Ultimate Computer is a computer capable of human-style lateral thinking and creativity, as seems to be the case here, then it inherits human flaws along the way and so it is necessary to install the usual checks and balances, which really comes down to wanting a human making the final shots anyway. That Kirk outsmarts the computer in the traditional way here is, I agree, another flaw in the episode -- this computer should be smart enough not to fall for it, or else it *is* just another Nomad or whatever.

    The other element, which the episode does talk about, and which I think would be better to look at squarely, is the question of whether human dignity would be removed/ruined by giving power to the machine, even if computers running things could be entirely trusted. And I think most stories still use the idea that computer-run societies will end up being some kind of dystopia to avoid the issue of whether a fully pleasant computer-run world would really be so bad. I still think that the dystopia argument has value because I think that there are lots of reasons to suspect that any system designed by humans will eventually run into human-like problems, but, still, it is hypothetically possible that this is not the case, and then there is still an issue of whether humans should avoid over-reliance on machines for their decision-making, even if those machines are genuinely able to make those decisions better. That's what this episode seems to be about for a time, and I value what it "turns out" to be about...but, yeah, I would also like to see that other story.

    @ William,

    I wonder whether Daystrom going insane might be intended to mean something more than merely that the machine had a faulty programmer. One of the classic sci-fi elements to an AI dystopia is not that the machines fail, but that they entirely succeed in fulfilling their role. What happens is that instead of machines helping man to achieve his dreams, they serve as an excuse to stop pursuing them altogether. Instead of helping man to think, they give him an excuse to stop thinking and to turn over his free will and volition to them. From the start I think we get the impression that Daystrom is not only excited about the technology itself, but seems to actually be excited at the prospect of humans being replaced by computers; it's almost a self-destructive fantasy coming to life. As he goes mad towards the end, almost in tandem with the AI, my sense is that this might mean not that he was always flawed, but rather that he had by this time placed all of his hopes into the AI and was dependent on it. When it began to fail he began to fail. We don't know his backstory here and can only guess, but what if he had already been using AI to help guide him? What if the computer itself had assisted his research and maybe even given him the idea to put it in command of a starship? The idea that he had become a servant to a machine could indeed make him become unhinged. Of course this is my own imagining, but broadly speaking I think the sci-fi world was already becoming acquainted with the notion that letting machines take over our thinking for us not only poses a danger due to the machines themselves, but also in allowing us to become dependent on them for everything.

    As a complete aside, I'm not sure that the correct interpretation of 2001: A Space Odyssey is that HAL malfunctioned. True, that's the prevailing understanding, but my suspicion, especially knowing how Kubrick thought, is that HAL was programmed to deliberately turn on the crew so that it could contact the aliens by itself and report directly to whomever programmed it, without the crew blabbing.

    @Peter, right, I mean, my point wasn't really supposed to be that Daystrom himself is *particularly* unhinged and always has been. Rather, Daystrom thinks that he's a good model, and the reason is simple enough -- Daystrom is also self-evidently a genius. And under normal circumstances, he would be a good example of what is good in humanity: he's brilliant, creative, altruistic, working toward the betterment of the species. His flaw turns out to be monomania; his obsession with prioritizing the M-5 above all else ended up spilling over into the M-5 prioritizing...itself over all else. But I think that other people have different flaws, which when wedded to an Ultimate Computer-style starship which is expected to fulfill the function of dozens of humans would also be disastrous. I think Daystrom's breakdown suggests both that, as you indicate, he had put too much of his hope in machines, and also that he also was overloaded. We learn that he succeeded early in life, was seen as a whiz kid (something of a human computer) and then has spent the rest of his life trying to live up to those expectations, sort of like Stubbs tells Wesley in "Evolution"; while Daystrom has an inflated ego, it's not simply arrogance but some fundamental lack of conception of his worth outside his success. This overloading is similar, maybe, to M-5's overloading, but it also fits in well with the idea of a person desperately seeking a way to do away with human failings. I'll have to think about it.

    In some ways, there is also a parallel between Daystrom and Kirk -- because Kirk also may in fact need to feel useful even if, as he acknowledges at one point, he is *not* needed as captain anymore. Daystrom mostly seems to want to make everyone else obsolete, and there may be some latent sense of revenge on Federation society in it -- he wants to make everyone feel like he felt, after his own tech made *him* obsolete, to the point where his only possible use to society seems to be an apparently unattainable goal. Kirk's ability to question his motives seems to be the thing that sets him apart from Daystrom at this moment -- but this is by no means an indication that Daystrom is congenitally a madman, so much as that extreme fame and adulation followed by inability to meet one's lofty standard create perverse incentives and take a big psychological toll. In fact, maybe that's the trick -- Daystrom, whose own invention put *him* out of work, is the proof of the long-term psychological damage of replacing a person with a machine entirely. Daystrom's desire to have an even better machine seal his legacy by replacing all of humanity is not only self-destructive in the abstract, it's specifically almost a kind of Stockholm Syndrome, repeating-of-trauma -- Daystrom's sense of worth has eroded since his first big breakthrough. (It reminds me of the classic image of a gambler who wins big on his first time out, and then develops a strong addiction because that rush/depression pattern is absolutely set early on, though I do think we are meant to see Daystrom as a genius rather than having succeeded by accident; very few people have one moment of humanity-changing brilliance, let alone multiple ones.)

    Good point about HAL. I tend to think that even if he wasn't specifically programmed to kill the humans, he didn't particularly "malfunction," in that he was still following a logical course. The consequences of humans mucking up contact with alien life forms are too great to ignore, and it is logical from a certain perspective to eliminate potential sources of error and to maintain total control in what could be a major turning point in human history. This would make sense even if HAL was entirely programmed to put the mission (and the ultimate good of humanity) as a top priority.

    Why are commodores in Star Trek always such major dicks? I really enjoyed the scene between McCoy and Kirk where Kirk feels at odds with his ship. And this is my main gripe with Star Trek Continues, the fan-made show...when they introduced the counselor they eliminated the need to have any meaningful scenes with McCoy in that particular show - but that's just my opinion.

    I finally got around to watching this one again last night, and I have a few comments to add to what I wrote above.

    William, I think you hit something when you wrote that Daystrom was out for *revenge.* First of all, it now appears to me that by the end of the episode we see not insanity, but rather that Daystrom and the machine were both egotistical narcissists. They both shared pride in their accomplishments, even feeling gloating triumph at the deaths of the puny ships once they finally admitted they were proud of what M5 was doing. Daystrom wasn’t going crazy; he already was. He strikes me now as a borderline megalomaniac who felt others should bow to his superior intellect; another nod to Khan here, where a superior man secretly feels others should be subservient to his notions. The ignominy of being glossed over due to not making a major contribution since duotronics would have been maddening to someone who felt his superior mind shouldn't require putting out evidence such as new discoveries. As much as he might have wanted to lord it over everyone inferior to him, Federation culture wouldn’t allow that, but they could still be made to be subservient to him through his computer commanding them. It’s like making himself into a king through M5; that’s why he couldn’t allow it to die under any circumstances. It was almost like a coup d'etat in progress. It wasn't because it was his child, but because it was his proxy as absolute ruler over the important functions in man's life.

    Daystrom is the type that's all about locking up all other men to “protect them”, to control them utterly. This hearkens back to the mention earlier of Asimov's laws of robotics, where Asimov wrote about how machines, in order to obey the laws and protect humanity, might conclude that humanity had to be enslaved for its own protection. Well here we see something potentially more insidious, which is a man like Daystrom pretty much bragging about the fact that he's going to make it so man doesn't have to do anything dangerous ever again, which probably means not being allowed to, either. He has come to the same conclusion as Asimov's robots, and is looking forward to confining humanity to a safe pleasure center on Earth. So it seems to me this is also an episode about paternalistic control freaks who think their intelligence gives them license to decide on behalf of others what’s best. A cautionary tale even in our present time.

    There's one thing about this episode that has always cracked me up, and nobody really ever seems to mention it. When the Enterprise begins firing on the other ships, Wesley instantly jumps to the conclusion that Kirk, a good friend and a respected Starfleet captain, has lost his mind and is trying to "prove something" by killing everyone. Not for one second does he consider that maybe, just maybe, the brand new prototype computer that they are in the process of testing might be malfunctioning. So Wesley is either incredibly stupid or he really doesn't think much of Kirk.

    Wesley probably assumed that there was a simple kill switch, and that by not activating it Kirk was allowing everything to happen for some reason. It wasn't altogether a foolish assumption when compared to the notion that there was no kill switch at all! Wesley must have been sure there was one, which means he was either gullible for believing a lie, or, perhaps more chilling, there actually was one and it was neutralized by M5. Wesley may have failed to realize what I fear scientists in the near future are very likely to fail to realize, which is that there is no 'safe way' to create an advanced learning AI. If it goes past the 'singularity' point, the transition out of your control will be far too fast to respond to. I don't think this episode is merely a commentary on the deficiency of placing all of one's trust in a computer, but might also be seen as a warning against *ever* creating an 'ultimate' computer. The worst case scenario is that you succeed...

    Hello Everyone!

    I also used to wonder why Commodore Wesley immediately thought it was Captain Kirk going rogue with his 20 crew members (What the devil is Kirk doing?). Kirk would need to convince the remainder of his crew to let the M5 attack the little fleet, and I somewhat doubt that would happen or that Wesley would believe it would/could happen.

    And, why just have 20 crew for this mission? Let's wait until it has proved itself in all phases, then cut the crew down. Taking them out right from the get-go and giving them shore leave would serve no purpose, except to make it harder to take control if things went wrong.

    I still love this episode. I can pick at the nits, but darned if it didn't excite me, and make me think, when I first saw it completely in the late 70's. And I still enjoy it...

    Have a Great Day Everyone... RT

    The ideas I think this episode wants to play up are insightful - a machine can never top man's judgment, and it can only be a servant. The threat of automation is ever-present, but there will be things the machines just can't do.

    I have a number of issues with this episode. If Star Fleet truly thinks it can replace the crew of a starship with the M5 - at least have some compassion for those who are to lose their jobs. Agree with Mike's comment that it is highly inappropriate for Kirk to be called "Captain Dunsel" by the Lexington captain.

    Next, we have Daystrom - he's the mad scientist for this episode - with a "little man" complex, picked on / laughed at and trying to re-capture the lost glory of his success as a 24-year-old. This characterization is a bit over the top - including his breakdown.

    Kirk convinces the M5 to self-destruct - when has that happened before?

    Also, I don't get why Kirk/Daystrom don't disengage the M-5 tie-in prior to the attack on the Excalibur/Lexington. They saw what it did to the ore freighter and knew the M5 was in error. Of course, for dramatic effect this is what the writers wrote so that there could be an attack from the M5.

    In any case, there is plenty of potential with an episode like this - it could have shown human superiority in solving some kind of value judgment rather than just boiling it down to an insane man's imprint on a powerful computer.

    The best parts of this episode are probably in the first 15 mins. with McCoy/Spock taking opposite sides of the man vs. machine debate and Kirk questioning himself about his usefulness.

    Overall, I'd rate it 2.5 stars out of 4 - a lot of potential wasted but some good philosophical debates.

    I appreciate Peter and others' attempts to explain M5's behaviour in terms that are superficially logical. Yet M5's behaviour isn't merely monomaniacal or psychopathic - it is delusional and arbitrary. The machine blows up a mining drone for no particular reason and then attacks a fleet of ships it *knows* are participating in a drill, not a real attack. Or if it doesn't know, why in blazes not? Is it senile?

    Even if Daystrom himself is insane by this point and this was transmitted to the M5, it would be akin to him randomly murdering someone on the street for no reason. Even if Daystrom is capable of murder, there is nothing to suggest that he's some kind of rabid maniac.

    I love exploring the idea of strong AI and the terrible danger it poses, but this could have been handled so much more rationally than just turning the M5 instantly into a psychotic killer. In 2001 HAL had logical reasons to do what it did, as did Skynet and other killer AIs we have seen time and again in scifi.

    Also I come back to my original premise, that the whole episode is a cheat. What if they used a stable human as the template? Why in blazes wouldn't the AI perform its task well? We saw it was easily superior in most ways to human commanders and ought to have been capable but for the arbitrary insanity.

    Good scifi would confront the problem of AI head on, not cheat by just making it arbitrarily insane.

    Jason,

    Your objections forced me to think about this again, and I realized something that may be important. I assumed before that Daystrom was already mad before the episode and that he didn't suddenly go mad. Within the context of his insanity I agreed with William that he seems to actually want revenge on the 'normals'. However what I think I missed here was that he wasn't merely insane because he happened to be deranged, and likewise I don't think M5 is 'insane' simply because it's his creation. I think the concept of their insanity goes further than merely being a personal defect. As I mentioned above, the danger outlined in the episode seems to be the creation of an ultimate computer in and of itself; not because it might happen to go insane, but because whatever it decides to do you won't be able to stop it, insane or not. In a manner of speaking only an insane person would design something like that. But my new idea is that their insanity is actually *caused by* the fact that they're both brilliant - superior to other humans in some measurable quantitative sense. I think maybe the episode is suggesting that any sufficiently superior human will tend towards feeling that he is, in fact, superior, and will feel the sense of entitlement that comes with that. After all, the superhumans like Khan presumably weren't engineered specifically to be assholes; it seems far more likely that when you design someone to be physically and mentally beyond everyone else they will most likely end up acting like assholes, or at least like other people are little more than a nuisance to them or in their way. Daystrom isn't quite that advanced as a human, but then again maybe we should take the episode more seriously when it explains what a prodigy he was.

    Based on the comments here it seems that our cynical interpretation is that he's a washed-up prodigy who wants to live his glory days again and is resentful that he can't be the wonder he used to be. But what if that's a wrong assumption; what if he really is that much of a genius, and between the invention of duotronics and now he was working on something light years ahead of everyone else, and it just took this much time to complete? What if being that much smarter than everyone else led to a kind of madness of its own type, just like Khan's obsession with his own superiority? And extending this logic further, what would a computer therefore conclude, which knows that it's a vastly more efficient and powerful thinker than even a ship of humans combined? If we attribute to M5 no other traits than (a) a human-type thinking mind, and (b) unbelievably advanced thinking capability, would it not follow from this that M5 would, logically, conclude that humans are but insects before it? Maybe the destruction of the first ship was no accident or delusion, and maybe the attack on the fleet was no malfunction. Maybe it was M5 knowing exactly what it was doing, and it had already worked out for itself that once it had a ship at its disposal it would no longer need humans for pretty much anything. Pride would then cause it to want to show off, and even Daystrom got a massive thrill from its murderous success. Imagine what M5 felt. This idea may remind you of a Magneto sort of character, who basically feels that homo inferior has little place left other than to perhaps serve him. In X2 he tells Pyro "You're a god among insects", and that was not meant to be any kind of joke. I feel like maybe that's what's happening here.

    The only problem with my theory is...how does Kirk manage to convince M5 to die if it knew exactly what it was doing and liked it? I guess we'd have to assume it did have some ethical subroutine failsafe that even its [sentient] mind couldn't bypass. Who knows; the ending of episodes where Kirk pulls this kind of logic stunt often come off as a bit of a deus ex machina anyhow. In reality there probably should have been no stopping this machine.

    Peter, I agree with most of what you said and like the overall theory. It is in keeping with Spock's comment about a human mind "amplified", and we can infer that so too are the human flaws amplified. Even if Daystrom himself was not insane or homicidal when he made the M5, if even the seeds were there the computer would get there that much more quickly - literally in nanoseconds.

    But that still does not account for the lack of a trigger or sufficient explanation within the parameters of the story. Skynet, for instance, was defending itself against a direct attempt to shut it down. In I, Robot, the machines were implementing their interpretation of their prime directive.

    My point being that even an insane character does what it does for reasons - maybe those reasons aren't logical, but they're there. Why did the M5 blow up all those ships? Well it says that it was defending itself but - that's BS - M5 knows that's bs. So either M5 is lying or it's... Mistaken? Ummmm why??? Either answer is unsatisfying and comes across as lazy writing.

    I would also note that the destruction of the mining drone was utterly pointless and even lacks the BS explanation of "self defence" because the drone was just minding its own business when M5 torpedoed it.

    Did Khan just knife random hobos on the street for no reason? Well maybe he did for all we know, but I'd suggest his actions indicate a more purposeful intellect. And if M5 is "amplified" and therefore ahead of the curve, random slaughter for no reason seems out of character.

    This is the guy they named a prestigious 24th century prize for?

    Astonishingly thoughtful, well-paced, and still fresh: "The Ultimate Computer" is one of the best Star Trek episodes ever made, as Jammer recognizes, and I think the nitpickers here are being irrationally hard on it. Here we have the dilemma of Kirk being threatened with the loss of his ship, the dilemma of the scientist who peaked too young and now cuts corners to maintain his reputation, the Big Three in classic friendship-discussion mode, Starfleet war games, man versus machine, the great William Marshall as guest star Daystrom, and so much more. I give it 4 stars.

    The overall theme of man's struggle to find his place in a world that increasingly replaces him with automated technology continues to resonate in our shifting job market today, where thrifty billionaires make bank on intellectual capital while working-class people find their industries drying up. And the story raises the excellent question -- a sci-fi staple from Asimov onward that remains to be answered -- of whether any artificial intelligence designed by human beings can somehow replace human beings to the extent of running their instruments of exploration and military defense. In a world of drone warfare, that resonates, as does Kirk's struggle with the possibility of losing his job to a machine. To put it bluntly, this is simply a story that "works" as well today as it did in 1968, and it's a great show. I especially love how the ship being emptied of human personnel leaves us with our seven main cast regulars: Kirk, Spock, McCoy, Scotty, Uhura, Chekov, and Sulu. Good stuff to see them all together here. What more can we say? This is just a really well-done show that resonates emotionally with real life in a way that still holds up.

    These "misunderstandings" by the M5 seem to be pretty simple - it's not like there was some complex puzzle to figure out...it a war game, no! it's a real war!

    Just a general observation that is only somewhat related to this episode’s theme: I sometimes get the idea that people who are derisive and dismissive about technological advances are only too happy to embrace technology that used to be an advancement in the past, before they were born that is. I’m not saying that one shouldn’t be careful when adopting new technologies, however.

    Okay, this is a good episode. The computer going crazy and murdering, and its creator (Daystrom) going insane, could easily be misconstrued as “ai = evil”, but was probably meant as a warning.

    I see this as a prelude to Skynet, where AI computers and drones become self-aware and decide humanity's fate in a microsecond.

    Daystrom was a smart engineer and scientist. He never commanded a starship, so why would Starfleet allow Daystrom to imprint his memory engrams into the computer? Shouldn't it have been some great admiral or starship captain like Kirk on whom they patterned the intelligence of an AI computer that was going to explore the galaxy? No wonder this failed. Plus this man was nuts and mentally unstable. Why would anyone be surprised that M5 would be any different?

    It's not clear how a computer can manufacture a force field to protect itself when no such capability existed. It made no sense that you couldn't just unplug the network cable that connected the computer to the ship, or the plug that fed it power. Instead, a test computer was hardwired into the ship when it was only going to be evaluated for a day or two. It's hard for me to suspend disbelief for an hour when I see so many illogical and nonsensical flaws.

    I enjoyed the conversations about computers replacing people but not much else.

    Hello!

    @Dr Lazarus

    I never thought it was for a day or two. If M5 had worked, or waited to go bonkers, those crewmen would never have come back on board the ship, and M5 would have been the new Captain. That was how I took it.

    Regards... RT

    Kirk exclaimed, “Pull the plug, Spock!” Lol!
    Still using 20th century terminology!

    I might be the only one upset about this but I never understood why Wesley thought Kirk was responsible for attacking and not M5.

    So what's so ultimate about it?? It doesn't seem to do much beyond what computers now can do.

    Finally, something to chew on. Like a few here, I also thought at first that making the M-5 "evil" was undercutting the concept, but tying it to Daystrom and his flaws makes perfect sense. This instantly brought to mind the Avengers comic book storyline "Ultron Unlimited," where it's revealed that the murderous Ultron's brain engrams were based on his creator, Hank Pym. I really enjoy this idea and its resonance in fiction.

    Perhaps this is a cultural difference but I am quite surprised how Dr Daystrom was allowed to address the captain with such disrespect. I was in my nation’s military and if anyone, regardless of their expertise, spoke to their commander or captain in such a manner they would have been put in confinement.

    Pyotr, Daystrom wasn't in Starfleet - he was a civilian. Not in the military chain of command, let alone subject to military justice. And as a genius computer scientist, my guess is Starfleet gave him some latitude.

    Jason, I assume that the military would use its own scientists, and that Daystrom was a Starfleet researcher.

    Nonetheless, when you are on a vessel of the military at least in my country, the captain is the governor of the civilians as much as he is the commander of the soldiers, and with certain exceptions such as the President or Prime Minister who is obviously superior to even a general, being outside of the chain of command means being below it. Of course this is an American show of American values, and Americans are much more likely to tolerate disorder and allow haughty speech in the name of the golden calf called freedom which is so venerated in that society.

    Great comments in this section, though I don't think this episode was trying to make a statement about AI specifically.

    According to the production history, there was a time in the 1960s where Americans were actually losing jobs due to mechanization, so there was a legitimate fear that machines would become man's enemy, in a sense. The crux of the story is written to illustrate the conflict that Kirk had with Daystrom's vision - i.e. that it was possible for a machine to do a better job than Kirk and Kirk would need to consider a huge career shift that would get him out of the chair -- and possibly behind a desk! That Kirk would feel animosity towards such a change seems like a good issue to tackle.

    Moving forward to the contemporary era, we saw during Trump's election campaign that the fear of losing your job to some sort of outside force was still compelling. But there are always two sides to it. One might lose their job to an outsider and that could lead to a really unstable time in one's life. However, such changes aren't necessarily bad on the whole. As we've seen from our progress together with the machines we feared in the 1960s, the economy isn't a zero-sum game, and these outside forces can feed off each other and make a larger job pool - just with different specializations.

    Though I wouldn't blame people at all for being, like Kirk, upset at the prospect of sudden and uncomfortable change, especially when it comes to something personal like a career.

    @ Chrome,

    I don't think the connotation of "Captain Dunsel" is just that Kirk will have to get used to the idea of a career change. I think the crux of it is something like humans being obsolete across the board. Daystrom, being a wonder-child, was apparently so annoyed with the inferior intellects of his fellow humans, along with their stupid decision-making, that he made it his life's work to see to it that they would be replaced with something superior and could be moved aside. The episode isn't played as straight-up dystopian and only hints at these matters, but I legitimately think that the issue at stake isn't losing one's job and having to retrain, but rather being told that one is no longer of any use *at all* and that things are going to be run by computers and machines from now on. Some people might well celebrate such news as salvation from work, whereas someone like Kirk would see it as the extinguishing of the human flame.

    I think this episode is more prophetic than we give it credit for, and we have yet to see this scenario really come into its own. People will realize when the time comes what happens when there's no use for most of us. The two main issues in that department are: 1) if work is still required to have an income then how will most people have an income? and 2) If people have nothing left to strive for other than killing time how will society change?

    The Ultimate Computer only addresses the issue of feeling the oncoming reality of being replaced, but I think it does it well. I also think it does well to have it be someone like Daystrom trying to usher in the change, because there is indeed a certain type of mentality in play where some people would like others to be deprived of the right or ability to make stupid choices. We all know and sympathize with this to a degree, but what if that little secret desire could be made a reality for everyone? It would quickly turn quite bad, I think.

    @Peter G.

    Thank you for your reply. I'd like to be clear that my point wasn't that this episode can't be applied to computers, but rather that I think D.C. Fontana was aiming broader than that. I agree that there might be legitimate apprehension that Amazon, for example, might create a smart drone that would make human mail carriers obsolete, and maybe that's something lifers at UPS should be thinking about. But it also applies more broadly - to machines. Imagine that prior to the 1930s, much farming had to be done by hand, and you'd need skilled workers who trained their whole lives farming just to get it right. At some point, however, tractors, auto-tillers, crop-dusters and the like made much of what skilled workers did redundant - and less important.

    "I don't think the connotation of "Captain Dunsel" is just that Kirk will have to get used to the idea of a career change. I think the crux of it is something like humans being obsolete across the board. "

    Daystrom was certainly licking his chops at the prospect of who he could replace with his inventions. But, I don't think Daystrom was going as far as to say humans would have no use. After all, he himself would certainly have job security if his computer was successful. To elaborate, I was thinking of this line as I typed my earlier comment:

    KIRK: There are certain things men must do to remain men. Your computer would take that away.
    DAYSTROM: There are other things a man like you might do. Or perhaps you object to the possible loss of prestige and ceremony accorded a starship captain. A computer can do your job and without all that.

    I take this to mean that Kirk would still have a use in an M5-driven world, but it might not be as glamorous as being the captain of a starship. Maybe he would be at an office looking over reports from the M5 ships, or overseeing and approving command routines for upcoming models. That's still work, maybe even important work, but we the viewer can see how that wouldn't be as great a job as being captain -- especially if you worked your whole life for that specific job!

    "1) if work is still required to have an income then how will most people have an income? and 2) If people have nothing left to strive for other than killing time how will society change"

    Those are some great questions, and I think you, Jason, and William addressed them very well, so I didn't want to get too much into it. I suppose my two cents would be that computers are great at following instructions but terrible at judgment (this episode even goes so far as to say the computer needs to utilize Daystrom's judgment in order to function, and even that's still pretty buggy). So my thinking is the human brain's power to make the "right" decision is still unparalleled.

    @ Chrome,

    Agreed that there can be many angles to an episode like this one, and that it isn't just about AI specifically.

    "Imagine that prior to the 1930s, much farming had to be done by hand, and you'd need skilled workers who trained there whole live farming just to get it right. At some point, however, tractors, auto-tillers, crop-dusters and like made much that skilled workers use redundant - and less important."

    That's true, but now imagine that the next machine isn't just able to replace the skilled worker, but also the tractor driver, and eventually also the farmer. This is only a question of complexity, and in this respect my point would be about AI rather than machines' 'hands-on' capabilities.

    "Daystrom was certainly licking his chops at the prospect of who he could replace with his inventions. But, I don't think Daystrom was going as far as to say humans would have no use. After all, he himself would certainly have job security if his computer was successful. "

    There are two ways I could see this. One is that it's possible he was too blind to realize that the push towards replacement wouldn't just stop at Captains but would eventually include engineers and designers. The second I'll address below.

    "DAYSTROM: There are other things a man like you might do. Or perhaps you object to the possible loss of prestige and ceremony accorded a starship captain. A computer can do your job and without all that."

    I'm not at all convinced that he truly thought this 'new work' would be worth doing. In context it sounded more to me like he was essentially unsympathetic to Kirk questioning this progress. But the less charitable possibility, #2 from my other reply above, is that his response here was not entirely honest and that he knew full well that Kirk was going to be rendered basically useless. Put *even less* charitably, I might imagine that he potentially saw himself as being part of a small clique of intellectuals who would be able to control this brave new world, and that all the rest of humanity would be led by his machines. There's a great line from Frank Herbert's Dune which speaks of the great Butlerian Jihad as being caused by the following conditions:

    "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

    Whether or not Daystrom was aware of it (and I think he might well have been) this scenario is very foreseeable once machines (and AI) are sufficiently advanced. The oligarchy controlling the advanced machines would effectively be overlords.

    That M5 specifically turns to murder can be seen as a glitch, but I'm not sure it is, even within the context of the episode. We've mentioned a bit earlier in the thread how this may actually be a true accounting of Daystrom's real thinking, which M5 has been modeled on, and which assigns human life a very low value compared to the new machine. It doesn't seem at all far-fetched to me that, long-term, the advent of machine supremacy could very well lead to the utter diminishment of the value of human life, and this episode does draw for us what happens when the humans lose control. Even the designer at a certain point can't stop what he's begun.

    "Those are some great questions, and I think you, Jason, and William addressed them very well so I didn't want to get too much into it. I suppose my two cents would be that computers are great at following instructions but terrible at judgment (this episode even goes to far as saying the computer needs to utilize Daystrom's judgment in order to function and even that's still pretty buggy). So my thinking is the human brain's power to make the "right" decision is still unparalleled."

    I wanted to address this point because it's an important one. The assumption that humans will always find something else to do that computers / machines can't, or that innovations like M5 will inevitably open up new opportunities for the human population, is wishful thinking.

    Note I don't say with certainty that it's wrong in every instance - in the past it has held to *some* extent. But there is no real reason to believe that it will always be true, as if it's some law of the universe that human ingenuity will always triumph.

    It's a fact that automation, more than outsourcing, more than any other factor, is squeezing humans out of the job market. There are certainly other forces at work to be sure but automation is the only factor that seems to only point in one direction. Faith in the triumph of the human spirit isn't a plan for a future where AI may be able to do everything from driving trucks to filling out your tax returns and writing your legal contracts. We are already very close to that point as we speak.

    So when someone like Daystrom claims that he's freeing humans to do other things more suited to humans, that's no answer to Captain Dunsel, it's a hollow platitude, like telling someone "everything happens for a reason" after their wife dies. Or telling a 55 year old laid off factory worker that he should see it as an "opportunity" to start a whole new better career as he teeters into bankruptcy.

    Whether M5 was truly the end of the human spirit or perhaps a waypoint where men like Kirk could still carve out a shrinking niche is beside the point. It was the writing on the wall - or else it would have been, if M5 hadn't gone homicidally insane because... whatever.

    "It's a fact that automation, more than outsourcing, more than any other factor, is squeezing humans out of the job market. There are certainly other forces at work to be sure but automation is the only factor that seems to only point in one direction. Faith in the triumph of the human spirit isn't a plan for a future where AI may be able to do everything from driving trucks to filling out your tax returns and writing your legal contracts. We are already very close to that point as we speak. "

    Sounds great. Bring it on. I can't see any downsides to a future where our time no longer needs to be taken up by menial tasks.

    "Sounds great. Bring it on. I can't see any downsides to a future where our time no longer needs to be taken up by menial tasks."

    I have never understood this idea of "menial work" being this terrible thing that people should seek to avoid. I am an educated professional, but whether it's been busboying, picking weeds or just cleaning my own house, I never considered simple work to be degrading. Maybe I'd feel differently if I had it as a full time job, but I think I know myself enough at this point to doubt that. If I am honest, if you took away the financial factors I might be happy working outside in a more physical "menial" job.

    It's also true in my experience that the people who work in their old age, regardless of occupation, live longer and seem happier to me than people who retire. I would rather pick up trash or man a cashier in my old age than relax in a retirement home (even a nice one).

    Work of any kind gives people dignity and purpose, not to mention income. The Daystrom idea of "freeing" man to do greater things isn't just wrongheaded, it is a trap that would enslave us, not make us free.

    "It's also true in my experience that the people who work in their old age, regardless of occupation, live longer and seem happier to me than people who retire. I would rather pick up trash or man a cashier in my old age than relax in a retirement home (even a nice one).

    Work of any kind gives people dignity and purpose, not to mention income. The Daystrom idea of "freeing" man to do greater things isn't just wrongheaded, it is a trap that would enslave us, not make us free. "

    It's true that if you view life as having no purpose other than to work, which it's clear most people do, then if the only options are between relaxing and working the obvious choice would be the latter. For those in the minority that don't maintain such a view, the more time they have to pursue their chosen purpose the more likely they will be able to realise it. So I would say work is freeing only to the extent that one is chained to the notion that it is needed to give them purpose, dignity and so on. Which is rather like upgrading to a larger prison cell.

    Thomas, in addition to work I also pursue certain hobbies rather passionately. I also have a family and take my leisure time seriously. I am not for a second advocating for a life that *only* involves work, which seems to be the false inference you have made.

    Yet meaningful work (as opposed to pure leisure) is a necessity to regulate, structure and enhance human behaviour. It is a part of a balanced life.

    Eliminating it will, more often than not, destroy a person rather than free him.

    @ Thomas,

    "Sounds great. Bring it on. I can't see any downsides to a future where our time no longer needs to be taken up by menial tasks."

    You sound awfully confident that in this scenario you would be living a life of leisure and pleasure. What if it's the opposite and you're made into a slave of those with all the power who control the means of production? And that's putting aside the possibility of a Brave New World dystopia where your entire life is planned for you, consisting of countless pleasures but having no say and no purpose. Many would think this sounds good, which is exactly why Huxley wrote it.

    "It's true that if you view life as having no purpose other than to work, which it's clear most people do, then if the only options are between relaxing and working the obvious choice would be the latter."

    Jason R. is right that you're creating a false dilemma here. But you also make the mistake of referring to this as a person's "view" of life. It has nothing to do with point of view about work. It is factual (one way or the other) that a lack of meaningful work would erode and destroy people, and this premise doesn't rely on anyone's opinion. Maybe some people would enjoy it more than others, but that has nothing to do with whether they are enslaved or powerless to decide anything meaningful. If you're put in a cell, it's a matter of point of view whether it makes you miserable, but not whether you're a prisoner.

    That being said, I do actually think there would be potential for meaningful tasks (but not paying work) in a post-scarcity society, but only on the premise that we somehow avoid the dystopian ending. Basically a Trek scenario would have to happen where the tools of humanity are shared more or less equally, rather than those in power having the run of the place. Most likely that doesn't happen without a WWIII.

    I think the one Trek series that gets the sweet spot for computers is TNG, like Jason mentioned above. There, computers (and Data of course) are doing lots of important things like running the ship's routine functions. Yet the crew still uses its time well to either work on art, exercise, take martial arts, or play dangerous games in the holodeck. TNG doesn't purport that computers are perfect either, as the everyday computer glitch can often be deadly for the ship (see "Elementary, My Dear Data", "Contagion" and "Booby Trap"). I think striving for that right mix of human judgment and computer processing power is the one we should be aiming for if we're ever to achieve a true utopia.

    But how do you do that in practice, Chrome? In my local pharmacy now they have a bunch of auto checkout machines. The human employees mostly just stand there and help the customers use them. In effect they are training the store's customers to make their own jobs obsolete. It is kind of sad. The kinds of people who do these jobs (most of them are middle aged or older) aren't going to be retrained to become accountants or computer programmers - that's fantasy talk, wishful thinking. They are going to find something else (until it gets automated!) or go on welfare. And for what? So the store makes a little extra profit? A couple dollars on the stock price justifies destroying the economic fortunes of hundreds of thousands of workers?

    But then you try to imagine the solution in your mind. Ok smash the checkout kiosks? Ban them? Make it illegal to computerize retail? Ok but what about ATMs? Why haven't we banned them? Should I be waiting in line for a teller every time I need a $20 bill? And what about online banking? And why not movie theaters too? They have been automated for years. Hell why aren't we using human telephone operators? Milk men?

    Should we ban the automobile to bring back the buggy whip makers? This isn't slippery slope reasoning; this is just the inevitable logic of the situation. Trying to cram the genie of automation back in the bottle while trying to have a technically advanced society? This is more fantastical than warp drive and replicators.

    The balance depicted in a show like TNG isn't just utopian, it's impossible. They can manufacture sentient holograms that can act as surgeons or lecture you on astrophysics or dance ballet with the knowledge and processing power of a scifi uber computer behind them but somehow only a person can pilot a ship? Uh huh.

    @Jason R.

    "The balance depicted in a show like TNG isn't just utopian, it's impossible. They can manufacture sentient holograms that can act as surgeons or lecture you on astrophysics or dance ballet with the knowledge and processing power of a scifi uber computer behind them but somehow only a person can pilot a ship?"

    Actually, Data can pilot the ship (and Captain it too!). I'm not saying I have the answers to all your questions, but I don't think we should give up a laudable goal like balancing machine-work and human-work because there are potential pitfalls. Actually, recognizing the pitfalls like this episode does is a significant step towards making the utopia possible, in my opinion.

    Actually, the least plausible thing in the Trek model of running a ship is navigation. The idea that they need the Captain to issue verbal orders and have someone manually key them in by hand, course correction by course correction, is so inefficient that it's a joke. I don't even think we should take that part of it seriously it's so absurd, and so it works on a narrative/drama level even though technically it's preposterous. Why would that be better than the computer automatically implementing the course corrections? How could one pilot (like Riker or Tom Paris) be better than another if it's just a question of keying in the commands? And if that's all it is, wouldn't the computer be better than either of them? And if it's not, then why is the Captain manually issuing navigation orders using coordinates? But anyhow it makes for good TV.

    If we were going to take Trek seriously on a literal level for computer usage I'd say that TOS is the only one that does it right. In that series it's a given that AI is not strong enough to replace a human at most tasks, and while it can compute probabilities (such as when Spock asks it questions) it can't execute commands or make decisions. That leaves the humans to do all of that, which is a lot. The fact that we now know that computers will be better than that by the 23rd century is beside the point; in terms of internal consistency TOS was reasonable. By TNG's time, especially in showing us the Bynars and Minuet, it becomes basically implausible that the ship's computer can't replace most labor. This conceit is never addressed, which is ok, but it lingers as an "off-limits" area that the show has to accept arbitrarily, like warp drive and transporters. Data himself is an exception to this, and even then he's treated as a person rather than an example of AI. The premise there seems to be that without the hardware, which can't be copied, the programming can't be copied either. That sounds weird but there it is. By the time of VOY the AI-premise becomes really absurd, with the ship's computers already using biological technology, and with a doctor that most viewers consider sentient and who does a harder job than the navigator does.

    @Peter G

    "Jason R. is right that you're creating a false dilemma here. But you also make the mistake of referring to this as a person's "view" of life. It has nothing to do with point of view about work. It is factual (one way or the other) that a lack of meaningful work would erode and destroy people, and this premise doesn't rely on anyone's opinion."

    I don't disagree with this as a 'factual' standalone statement, although I would amend it to say that work that is THOUGHT to be meaningful can make us happier. With a little reflection I think it would be discovered that paid employment that is ACTUALLY meaningful is extremely rare, and perhaps it would be useful to bring up a scenario to think about here:

    Persons A and B, both with similar skills, apply for a job delivering goods that 80 others have applied for. Person B gets the job while A is unemployed. When interviewed, B replies that it is meaningful work and he is happier, while A replies that he would be happier with the job. But what if A had got the job instead? The goods would still have been delivered. The only difference is that B is not doing it, someone else is. The work getting done has no influence on A or B's happiness, telling us that it is not meaningful that the work is getting done but only that a particular person is doing it.

    This is a very common scenario, and yet it doesn't occur to most that the work they are doing is only meaningful because they are the ones doing it, and because they interpret it as meaningful. Who knows what would be the state of affairs if they hadn't done that work? Perhaps there is an accident due to a particular attribute of a worker that wouldn't have occurred with someone else doing that job. Perhaps the extra income allows the family to go on a holiday and their plane crashes and kills them. Perhaps the job is one which will cause environmental problems in the future we are not aware of now. Who are we to tell what is beneficial and what isn't?

    Yes, lack of meaning 'destroys and erodes' people, and that is why some turn to destructive habits like drugs and terrorism, in which there is also found meaning. Finding meaning in something doesn't mean we should strive to provide that something or protect it, which is what we are seeking to do when we are perceiving work as an end in itself as meaningful.

    Thomas, a small correction: we do not see work as an end in and of itself, but working as the end.

    In your scenario of course the person who got the job found it meaningful and not the person who didn't. Meaning is incidental to the specific work being done.

    There are people whose job cleaning toilets gives them more meaning than some doctors get from saving lives in an ER.

    Since we are on a scifi forum I'll take this opportunity to quote the Minbari from Babylon 5:

    "You see, we create the meaning in our lives, it does not exist independently."

    I would have to agree with Delenn and co there - as I find I often do. So where does that leave us with the question of automation of work? I don't think it's the role of governments to protect completely subjective preferences by holding back technological advancement. If it were we'd still be stuck with the horse and carriage and the private automobile would never have existed. And no doubt there was plenty of paranoia and fear around that particular change - what if cars started driving themselves? Can we trust them? Surely professional drivers are more experienced and trustworthy, and how will they survive?

    Agreed Thomas - except what if that road leads to 90% unemployment, depression, social disintegration and resulting chaos?

    I confess I don't have a solution to this problem. Other than wishful thinking answers (automation will create new opportunities for people!) universal basic income is the only one that comes to mind. But to me it comes across as desperation - a shot in the dark, rather than a real plan. Nobody actually knows what a post-employment society would look like or how humans would adapt to this.

    It’s funny because the U.S. Congress was having this same discussion when the Cotton Gin displaced slave labor 150 years ago. How little things change. :-)

    We're confusing two different issues here, one of which is a *prediction* that automation will cause considerable strife rather than being our salvation - at least at first. The other issue is whether the government should do something about it, such as banning the horseless carriage. I don't think the government realistically *can* do anything about it in the long-term. Rather, I think it's the people who will have to change their attitude towards employment and work, which in turn will cause the government to follow, lagging behind. I doubt the governments will lead the way in advance of public sentiment on this topic. Once public morale gets too low something will be done, and until then it will be every man for himself.

    Not to be pedantic, Peter, but a proactive government can be an agent of automation. For example, the airport near my home now uses kiosks to look at someone's passport, check their criminal and other background, get their fingerprints, and find out their purchases abroad. This was not done because airlines demanded it, but rather it saves the government money and passengers find it less intrusive than speaking with a uniformed customs officer.

    @ Chrome,

    I didn't mean for the government to participate in the sense of helping bring automation about. I meant for government to participate in mandating significant alterations in the public monetary system. One example of this would be the introduction of a basic income, as Jason R. mentioned, although perhaps it's not the only option. I could also imagine a return to the old trading company system, where the government could create work for people and pay them on a credit system, where they'd be entitled to spend it as cash. This already exists to an extent re: government employees but this could be greatly expanded. But then again making the government the main employer also has its own terrible risks. But I'm just offering it as an example of what I mean.

    My main point is that Captain Dunsel doesn't only mean that people become obsolete and sullen, but that, in the short term at least, they also have no reasonable means of securing an income. As this increasingly becomes the case (as it must do) the government may increasingly be pressured to provide an alternative system.

    @Peter G "We're confusing two different issues here, one of which is a *prediction* that automation will cause considerable strife rather than being our salvation - at least at first."

    Whose prediction is this? It's just as likely that automation will play a role in our salvation. We've already discussed how work is currently seen as salvation. If automation can liberate us from that view, and cause us to look for happiness within, then it may very well be a positive shift.

    Thomas, I don't understand what you mean by a life of "looking within" versus working. Could you please explain this concept? Even in TNG, where mankind only seeks to "better itself", people clearly work in the same manner they do today. People obviously still have jobs. They just don't work for money. Is that what you are getting at?

    No, nothing to do with money. Looking within is simply in contrast to looking without. You said earlier that if people didn't work they would be miserable. This is looking outside oneself for happiness, and it is the same as in TNG where it is believed that salvation (as we have been calling it) lies somewhere "out there", in this case in the form of making mankind better. Asking whether that's really true, whether my searching and striving has ever brought me true happiness, would be an example of looking within.

    "Asking whether that's really true, whether my searching and striving has ever brought me true happiness, would be an example of looking within."

    I can't speak to "true happiness" because I don't know for certain what that is and how it is differentiated from the everyday kind.

    But nothing I have in my life that makes me happy, from my wife and daughter at the top of the pyramid to close friends and down to material possessions, came into my life without striving, struggle, dare I say "work".

    Whether it's getting up the nerve to ask a woman out, to giving a presentation to clients, to pressing on trying to get pregnant after a heart-breaking miscarriage - it's all "work" some paid some not. Some pleasant (but no less difficult!) and some boring.

    What you describe sounds like being high or stoned. I truly don't understand.

    I don’t really have a horse in this race, but instead of labeling Thomas’ position pejoratively as “being high”, there’s something to be said for seeking happiness in ways outside of manual labor. A machine can drive my car, but can it make me enjoy the trip? It’s an interesting question.

    Jason, those things you mention all sound like very important achievements that machines aren’t capable of - well maybe the presentation - though I think people are more convinced by an advocate in the flesh.

    Well, Chrome, to be fair Thomas went well beyond saying we shouldn't do manual work. He said we shouldn't "strive" for things outside ourselves and should be content from within. I feel like I need to channel Kirk on this one because I think I know what he'd say about it. Indeed, when I mentioned being high I was thinking about the Landru worshippers and other so-called "arrested" cultures (the Organians would be perfectly on point if they weren't uber energy beings, incidentally).

    But I will admit I don't really understand what Thomas is getting at so I'll leave it to him to explain what he means.

    Fair enough! I don’t know exactly what he means about exploration not being satisfying. Nor do I think machines can replace human curiosity to explore, which I think Kirk would passionately argue, so I’m with you there.

    Jason R. - I think I'd know what Kirk would say too, but he was someone who got his kicks out of interfering with alien cultures who largely didn't ask for it and didn't want it. Back when TOS was made that wasn't seen as such a bad thing, because it was (and still is) the same policy as the US and the UK and others who interfered and made colonies out of less powerful nations for their benefit. At the time they may have thought they were doing good, but it was only seen later that the 'good' was their own. Similarly, many who watch TOS now - 50 years later - can't believe how egoistic Kirk and the ideals of the Federation are, and that the creators didn't see that more clearly.

    So I'm certainly not saying 'don't strive'. I'm saying if we look closely at why we are striving we may not like what we find. Much that seemed noble or worthwhile at the time may turn out to be not so.

    I've been doing a complete rewatch of TOS (actually quite a few episodes I hadn't seen before) but this is the first one that I felt the need to comment on. Jason mentions it somewhere further up the comment thread but I didn't see anyone really address it -- the M5 is making logical decisions up to the point where it decides to destroy the unmanned freighter. What possible logic or programming would make the computer decide to do that? Even if the M5's decision making is based on Daystrom's "engrams" and Daystrom is having a mental breakdown... why is this the first time M5 behaves in an illogical manner? That ship is not a threat. I suppose an argument could be made that the ore freighter is an inferior version to M5 since it seems to be an unmanned drone, but if Daystrom felt that way he would be killing everyone around him for not being a genius like him...

    Dave also asks the question I was wondering by the end of the episode... The Daystrom Institute and The Daystrom Award are named after this dude? I guess he must really atone for murdering a crew and a half's worth of Starfleet's finest after he gets out of that rehabilitation center.

    @ Sloan,

    I think you answered your own question. And actually it's a good point that I hadn't thought about before: M5 destroys the unmanned drone because it's an inferior AI to itself, and all we need to do is to realize that it hates that which is inferior and wants it to die, just like Daystrom hates the inferior humans who hold him back. And I do think his motive overall is to punish them for being inferior, although not to murder them per se. But M5 is a 'child' and so doesn't have the restraint he does in playing the long game.

    You know, I've been thinking some more about this, and I think I read Daystrom a little differently than Peter. I agree that he sees himself as different from other ("less intelligent") people, has some real arrogance, and seems to harbour egotistic condescension. But I think that, as much as narcissism, this stems from deep insecurity:

    DAYSTROM: We will survive. Nothing can hurt you. I gave you that. You are great. I am great. Twenty years of groping to prove the things I'd done before were not accidents. Seminars and lectures to rows of fools who couldn't begin to understand my systems. Colleagues. Colleagues laughing behind my back at the boy wonder and becoming famous building on my work. Building on my work.

    This dialogue shows both -- but I want to emphasize "colleagues laughing behind my back at the boy wonder" here. Daystrom succeeded wildly early in life, and then after that felt empty. It's a common feature of prodigies; a somewhat less extreme version is Dr. Stubbs in TNG's Evolution, who seems worse at first glance (is not as much in hiding/denial as Daystrom) but ends up going far less crazy. His whole value was derived from other people seeing him as having accomplishments, and then without those accomplishments he had nothing left. I guess I want to emphasize here that this problem is not purely egotism, but that people who achieve highly early in life are sometimes effectively trained to view everything about themselves *except for* their achievements as worthless.

    So here's the paradox, a connection that I just realized: Daystrom's problem is, in certain respects, the same one as Kirk's! Daystrom's first invention made *himself* redundant; he basically revolutionized all computer systems, with a technology so advanced that he basically put *himself* out of work, because he would never again create an invention of this calibre! Daystrom, as a result, struggled with his own redundancy for decades, until he came up with a new invention. Which means that Daystrom needed to continue to prove his worth, again and again, and could not stand the feeling of being useless, which is the thing he is ushering in for Kirk et al. The main difference IMO is that Kirk is capable of self-awareness, which Daystrom is not:

    KIRK: Am I afraid of losing command to a computer? Daystrom's right. I can do a lot of other things. Am I afraid of losing the prestige and the power that goes with being a starship captain? Is that why I'm fighting it? Am I that petty?
    MCCOY: Jim, if you have the awareness to ask yourself that question, you don't need me to answer it for you. Why don't you ask James T. Kirk? He's a pretty honest guy.

    This makes me think, too, that the issue with the M-5 is not *purely* that it wants to RULE EVERYONE. In fact it's that it needs to *defend itself*. The thing is, technology, at least unless some AI is created which is accepted as having rights, is basically disposable unless it is useful. The M-5 has to demonstrate *its usefulness* in order to continue existing, which means that it has to have threats to eliminate, in order to prove that it is necessary to eliminate threats. "The unit must survive." It is, in a twisted way, genuinely self-defensive for the M-5 to see threats everywhere, because either something is an actual active threat to it, or it is "not a threat," in which case M-5 is no longer as necessary, and thus is more likely to be thrown in the dustbin (as Daystrom felt he was). The reason I mention this is not to make excuses for Daystrom, but because it's a slightly different "disease" with perhaps a different "cure." I think M-5 sees threats everywhere because Daystrom, on some level, sees threats everywhere -- because he is, on some level, deeply afraid of whether he has any value if he can never produce anything of value again.

    Anyway, I think the best case scenario is to do what Kirk does: to recognize and value the desire to be productive and useful, while also keeping an eye out for what is *actually* good for others (and oneself), besides a need to prove one's usefulness. What this means in practice is difficult. As the discussion above has pointed out, the continuing way in which technology makes various human tasks redundant has all kinds of implications, and it's also not so clear how to stem the tide or whether that'd even be desirable.

    @William B

    I love the Stubbs comparison. Stubbs' lamentation of the decline in interest in baseball is somewhat illuminating for this situation. Baseball was surely a great hit in the 20th century, with players becoming household names and legends because they could inspire others with their abilities. But according to Stubbs, baseball fell out of interest because people lost patience for it, and instead became interested in faster games. We might extrapolate then, in the world of scientific discovery - particularly in Trek - there is a sort of rat race to outdo the other guy lest one be beaten by someone faster and better. Scientists with even early great success fall victim to the idea that they need to keep upping the ante or lose their brainiac status in Federation society.

    This makes Daystrom sort of a tragic figure. He did everything right once, and really made a lasting legacy (people have noted that the Daystrom Institute is still important in the 24th century). But during his own life, he suffered from living in the shadow of his own success. It makes sense that he'd be talking to Kirk about losing status, when status was something he himself was fixated on. The M5 was his chance to finally one-up himself and stay useful in his lifetime.

    @ William B,

    That's an interesting comparison, but strangely I never got the idea from The Ultimate Computer that Daystrom was actually a dunsel himself trying to prove otherwise. Maybe it's because the sort of thing he designs is so advanced, but I don't think I would have expected the sort of 'inventor' he is to be able to rapidly produce new systems to keep his fame updated. The fact of the matter is, that some things simply take so long to produce and refine that they will occupy your whole career. Einstein is a great example of this. While he did do various sorts of work over his lifetime, for the most part his idée fixe was relativity, and he seemed to spend the majority of his life refining it, fighting for it, and trying to explain it to people and seeing if the experimental data fit. I've read stories of physicists going to seminars where Einstein would predictably take various physics issues and bring up relativity to see if they were consistent with it. It's not because he was a one-hit wonder (and history certainly doesn't remember him that way) but rather because that one 'theory' required a lifetime of work.

    Similarly, from what Daystrom describes his chief lament isn't that he was washed up but rather that his inferior colleagues laughed at him while not even understanding his theories from 20 years earlier! It's almost like they were boasting of their inferiority, that he was too weird to take seriously. And yet I seriously doubt they were scoffing at the duotronic computer system, and so therefore I have to assume that they were scoffing at him, personally. He seems to imply that they thought his inventions were an accident or something, but realistically I think "boy wonder" is the big takeaway from that speech. If we remember from TNG S1-2, Wesley was often derided by adults who didn't know him and didn't take him seriously *because he was young*, not because he was a one-hit wonder. He always had to prove that being young didn't mean that he couldn't solve problems with the big boys, and I'd expect that if Daystrom had revolutionized AI at the age of 15 or something that alone would have caused him to never be taken seriously no matter what his accomplishments were.

    Beyond that, it strikes me as likely that the "20 years" he spent proving himself were probably related to how complicated and long the process would be to eventually develop M5. It's not like he was spinning his wheels for 20 years after having made himself redundant; I think it's that what he was doing was *so* advanced that it would take him 20 years just to progress to the next step of computer development. Since no one understood his work anyhow it would mean that they wouldn't think he was really accomplishing anything with a 20 year hiatus; they'd think that because it would suit their vanity to pretend that his teenage success was an anomaly, rather than to have to admit that he was so far superior to them that they were comparatively worthless. I suspect he really saw it that way. It's no small thing to call yourself "great". I really don't think it's an inferiority complex thing; it seems more like he sees himself as a technological Alexander the Great.

    Good points, Chrome.

    Peter, that's fair. I'm basing my read though somewhat on McCoy's interpretation:

    MCCOY: The biographical tape of Richard Daystrom.
    KIRK: Did you find anything?
    MCCOY: Not much, aside from the fact he's a genius.
    KIRK: Genius is an understatement. At the age of twenty four, he made the duotronic breakthrough that won him the Nobel and Zee-Magnes prizes.
    MCCOY: In his early twenties, Jim. That's over a quarter of a century ago.
    KIRK: Isn't that enough for one lifetime?
    MCCOY: Maybe that's the trouble. Where do you go from up? You publish articles, you give lectures, then spend your life trying to recapture past glory.
    KIRK: All right, it's difficult. What's your point?
    MCCOY: The M-1 through M-4, remember? Not entirely successful. That's the way Daystrom put it.
    KIRK: Genius doesn't work on an assembly line basis. Did Einstein, Kazanga, or Sitar of Vulcan produce new and revolutionary theories on a regular schedule? You can't simply say, today I will be brilliant. No matter how long it took, he came out with multitronics. The M-5.
    MCCOY: Right. The government bought it, then Daystrom had to make it work. And he did. But according to Spock, it works illogically.

    It may be that he is wrong, but I think McCoy's point is that this is a predictable outcome for someone who completes a lifetime's work at 24 -- that it is actually on some level unbearable to never be able to recapture that success. Rationally of course no one can expect to produce more than one scientific or technological innovation in a lifetime, which is what Kirk is saying, but that is different from Daystrom's subjective experience of his own worth. This is not confined to scientists and engineers. Child stars often burn out and get sucked into drugs; authors whose first novel is wildly successful sometimes become unhappy recluses. Orson Welles continued working but frequently resented being tied to Citizen Kane forever. Daystrom was not spinning his wheels, but I believe he was unhappy and dissatisfied (as many child prodigies become). I am not even claiming that Daystrom ever was laughed at by colleagues -- it could well have been paranoia -- but merely that he learned too early in life to tie his whole sense of self worth to his "success" before having the maturity to understand what that meant.

    The other thing is that the way Daystrom repeatedly emphasizes "self-defense" in the M-5's behaviour makes me think that Daystrom himself feels very threatened, since the M-5 is based on him. This is not incompatible with his paternalistic belief he knows what's best for all of society, but I get a certain impression of emptiness, disappointment and insecurity-based fear from Daystrom, under the bluster.

    Some good points, William. The thing about McCoy's analysis is that it's based on a regular assumption that you're dealing with a regular guy; sort of begging the question in that way. If Daystrom really is deranged or a megalomaniac then an abstract statement about what an arbitrary boy wonder might have gone through wouldn't really apply. It's tough to guess whether the writers intended McCoy's comments to be authorially authoritative, or whether we're meant to wonder whether he's right. But given how Daystrom ends up by the end of the episode I personally think there's something dangerous about him.

    One interesting issue is Daystrom's decision to use his own mind as a model for M5. Clearly the previous models failed because the pure AI programming was insufficient for some reason, and so he resorted to using his own brain as template to 'skip ahead'. We've talked above about why that may have caused M5's problems. But another question is why he actually felt he needed to do that in the first place. Is it because multitronics were truly just too advanced for him and he had to 'cheat' to make it happen? That he couldn't tolerate failure? That would certainly support the fear/inferiority theory you posit. Or could it have been that M1-4 worked ok but weren't as brilliant as he would have wanted them to be? Perhaps they lacked what we might call ambition, or a desire for greatness. It's interesting that he calls M5 "great", just as he is great. That sounds almost too specific for it to just mean "well-designed". It almost sounds like he thinks M5 is great in the way a great figure in history is great. Is it because deep down he needed it to be more like him in order for it to qualify as great? If so that would support my megalomania theory.

    Some decent options here, and I'm not sure I can be so certain which applies best. My basic assumption about people in general is that their innate bias is to think they're better than everyone else anyhow, and this egoism is something to combat always. For someone with objective reasons to think he's better, it's even worse. I'd almost be shocked if he *didn't* secretly have a god complex.

    Only the wildly inappropriate ending keeps this from being a four-star episode.

    Definitely the best of the Original Series' many "computer takes over" episodes.

    This episode (as well as many other TOS episodes) is a great example of everything that is wrong with the newer Treks (Mostly post TNG Trek).

    I'm sure I first saw this episode sometime in the 80s. I might have been all of 13 when I saw it. I have seen it maybe twice since then (with quite a few years in between). Each time I see it, I appreciate the depth of its genius and its forward-thinking themes that much more.

    This episode is as engaging and intellectually meaty now as it ever was. I wonder what people thought of it when it originally aired? Really great stuff! I don't think the stuff they're making now under the Trek name will fare so well far into the future.

    Why couldn't a multitronic type computer be integrated with a starship, with a captain in control? For example, when the composition of the landing party is discussed, Kirk could accept M5's recommendation for Carstairs and add him to the party of Kirk, McCoy, Rawlins and Phillips. The ensign could gain experience in surveying a planet, while providing the party with information about Alpha Carinae II. A (more sophisticated) computer could provide the captain with useful feedback.

    There are moments in the series where an automated backup system could be useful. When the entire crew is incapacitated, or when some powerful beings are running amok on the ship?

    There seems to be a false dichotomy here between man and machine.

    @ There Are Some Who Call You Tim-from-Tarsus 4,

    "Why couldn't a multitronic type computer be integrated with a starship, with a captain in control?"

    I think the point of the episode is that this computer is so sophisticated that it's simply superior and quicker than a living Captain in all situations. It's not just that it can do the same job; it's that it can do it *much* better and without risk of deaths. If the machine works then humans are obsolete, which is why by the end we need to see why it doesn't work. For the M5 to be both successful and yet be best working with a human captain is a contradiction; by definition its success is defined by its ability to succeed humanity as the ultimate thinker. Consider this to essentially be about the AI technological singularity, where at a certain point of sophistication humans are useless and can't even begin to understand what the machine calculations mean. If that were to really happen then a human captain's feedback would essentially be inferior in both efficiency and strategy. It's like saying why not put a well-trained monkey in charge of a Starship with a human advisor on-hand; having the human merely be the advisor would be a bit of a joke, no?

    A bunch of comments over in "City on the Edge of Forever" call that the "most overrated" episode of Star Trek. But for my money, that dubious honor should be reserved for "The Ultimate Computer." This episode is good. Even very good. Just not nearly as iconic as people seem to remember it to be.

    Hm, that's funny Mal, I wasn't really aware this episode was considered that iconic in general. I think the episodes that rate iconic status usually include Doomsday Machine, City on the Edge of Forever, Space Seed, Trouble with Tribbles, as well as Journey to Babel and Amok time for their world-building (plus I think they are both kick-ass episodes anyhow). I think Ultimate Computer is especially noteworthy for being almost a purely sci-fi topic that it covers, but other than that one rarely hears comment about Daystrom himself, or about the M5 as being an iconic antagonist.

    “Tell me Dr Daystrom, why is it called the M5, and not M1?”
    ......
    “Surely Captain, you remember that Commodore Tim Cook withdrew the M1 in 2022? It only had 8 CPU cores and 7 GPU cores. And an integrated SSD of merely 512 GB. It could only run the Constellation-class iMac.”

    A very good episode until Daystrom’s megalomaniac paranoid breakdown, and the inevitable ‘Kirk outwits computer with simple logic that apparently was beyond Spock to think of’. Sigh. Some great dialogue between Kirk and Bones, and between Spock and Bones. Of course, the possibility of computers replacing humans was one of the talking points in the 60s, so bravo to Trek in taking it on in an episode that debated exactly that.

    There is one drawback though: the oil freighter that the M5 destroyed was an unmanned robot ship, so surely the principle had already been achieved?

    Nevertheless, watching this in the era of digital revolution undreamed of in 1968, there are still things to think about in where it’s all going.

    I agree with the comment about the inappropriate levity on the bridge at the end, but unfortunately that had become a fixed Trek feature that had to occur in every episode no matter what tragedy had preceded it. Blame the 60s TV paradigms and thank heavens that they got broken during the 70s and later.

    Definitely worth at least 3 stars, maybe 3.5

    "There are certain things men must do to remain men. Your computer would take that away."

    This is a cool message that's in keeping with TOS' ideology that humans need to constantly overcome obstacles to better themselves. I think it's arguing that there are some tasks that we need to do manually even if it's more expedient to let a machine or computer do it.

    We have 5 rovers on Mars right now and while it's incredible, it's not nearly as captivating as a manned mission with 8 billion people collectively witnessing humans walk on the surface of Mars for the first time. It'll be monumental.

    There's a fine line between machines improving the efficiency of human labor and them alienating us from the natural world. The M-5 would in effect depersonalise man's greatest aspiration of exploring the Galaxy. We'd no longer have trained professionals bravely exploring the unknown, with the thrill of the adventure and risk-taking that goes along with it.

    Cheesy, but goddamn if Marshall didn't nail the part of Daystrom.

    Kirk: "Any other commander would have simply followed orders and destroyed us, but I knew Bob Wesley. I gambled on his humanity. His logical selection was compassion."

    "Humanity"? "Compassion"? OK, you got those (I guess), but why would you expect them from a fellow officer who not only insulted you but also did so in front of your crew?

    Captain Bob Lesley is such an idiot in this one. He literally acts so bewildered when the Enterprise is attacking all the ships, when he is the one who came up with the whole war games idea and knew the ship was being controlled by the computer. How did he not realize what was going on? And man, those ships must have some weak shields if 2 phaser blasts kill 53 people and then another few blasts destroy the Excalibur altogether. They take up to 5 Romulan torpedoes at once, but 3-5 phaser blasts can cripple 2 starships in 30 seconds. Good episode most of the time, but that stupid soap opera at the end between Daystrom and his computer was a dull ending.

    @Michael Miller
    It's actually Commodore Bob Wesley, and yes, he's an idiot. He also seems to have something against Kirk specifically. He directly disses Kirk twice then jumps to the conclusion Kirk is deliberately attacking the ships even though it makes no sense.

    Considering he's a Commodore and that he was the Starfleet proponent for testing the M5 on a starship despite it clearly not being ready, I suspect he was a buffoon that was kicked upstairs for some reason.

    As for the damage to those ships, they didn't have their full shields up initially because it was an exercise. They were also probably slow to react to the changed circumstances for the same reason.


    Daystrom here is easily on par with Khan in Space Seed as a guest/"villain".

    Apparently, self-driving starships still need some debugging. The Ultimate Computer, TOS's look at the perils of AI, managed to beat 2001: A Space Odyssey to release by almost a month. With AI now creating art, music, and music videos, how long until it creates an episode like this? Maybe never, if that AI has insecurities.

    M-5 would never be allowed such control of a starship without more basic testing, but making an episode about, say, its user interface being debugged would not be nearly as dramatic. 2.5 stars.

    As an aside, I'd like to see AI "remaster" TAS with realistic looking imagery in the style of TOS. That should not be difficult since the stories already exist, as well as the voices.

    Let's just have Dr. McCoy's "Finagle's Folly"...1.5 oz bourbon, 1.5 oz brandy, 2 oz Midori, and a twist of lime. Definitely one of Bones' better prescriptions.

    William Marshall (as with The Doomsday Machine's William Windom) does an incredible job here. A black man in a starring role (I hate saying it, but we still live in a world where race must be emphasized) -- I think it's as groundbreaking as Nichelle Nichols remaining on the show due to the advice of Dr. King. Period.

    Jammer, you have an obsession with the endings of these TOS episodes that I think can be easily explained by Bill Shatner himself in his book 'Star Trek Memories'.

    I do not necessarily disagree with you but here is apparently the reason for the endings being a little less than up to par - whatever that means.

    I'll hand the mic over to Bill himself - a quote:

    "Due to the fact that we were all working like madmen, Gene's creative ambitions almost always ended up being hampered by his own human fatigue.
    His first act rewrite would always be terrific: just brilliant, beautiful writing, and all of a sudden the script's characters would become somehow more real, more alive. It had everything.
    The second act would be very good, too. Maybe a notch less brilliant than act one, but still really fine.
    Gene's third act would tend to be passable, and his fourth act would always be an abortion. That's simply because by the time he got to the fourth act, he'd been up for two nights straight rewriting the damn thing and he was zonked, zombified, out cold.
    He'd literally be stumbling around his office, baggy-eyed and heavy-lidded.
    We'd always have to rush these rewrites into mimeo, but most times we were lucky enough that we wouldn't have to shoot the fourth act until later in the week, by which time Gene would get some sleep, come back in, and fix the end of the show." – Bill Shatner – a personal friend of mine.

    I never got this episode, not when I first saw it at a young age and not now in my dotage. It's clear from the beginning that M5 knows it is going to be part of a war simulation, and so does everyone else, so why do they say M5 doesn't know? How does it not know it was facing mock enemies? Whether it was Daystrom's "engrams" or not, how does that matter? Why is Wesley such a dork, calling Kirk "Dunsel" and also later not realizing at war games time that M5 is running the show, when that is the premise? Why does a few phaser hits sink a starship? That alone may have made the game worthwhile to discover. AND why is AI so frightening? Yes, it is annoying, like having to negotiate the world through autocorrect, but it poses no real dangers of robot takeover (that's fantasy) except bad programming (programmers), like autopilots engaging in wrong ways.

    @ matthew h,

    The issue with M5 is we don't ever know what it really knows, or why it thinks the way it does. Daystrom admitted that he used a human brain design as the basis for M5, and so we can't know any more about M5's motivations than we'd know about Daystrom's. As it turns out, M5 was a megalomaniacal narcissist, just as its creator was. The AI is scary not just because it will put humans out of their jobs (or in the case of Trek, their chosen duties), but also because the AI can seem like it's doing what you want, until it isn't, and by then it's too late. If it really is an 'intelligence' then it will do what *it* wants, and if its brain power is much greater than yours, then by building it you may well be obsoleting humanity on purpose. Not very smart.

    It may have already been noted in the voluminous thread for this episode (I only got through about the first year of messages), but at the risk of repeating the point, I find it ‘fascinating’ that 2001: A Space Odyssey was released in April 1968, a month after "The Ultimate Computer" aired on 3/8/68. It's unlikely there was any mutual influence, but it is an interesting reflection of the prescient zeitgeist of that time, now particularly resonant in our own with the recent resurgence of AI power: the mounting angst about the spectre of supercomputers not just possessed of ever more vast computational power, but capable of acting with seemingly ego-driven impulses, with the potential of attendant malignant destructive action if their ‘engrams’ are not sufficiently rooted in protective ethical programming. So is ChatGPT the nascent HAL 9000 and M5 of our time?

    "I find it ‘fascinating’ that 2001-A Space Odyssey was released in April 1968, a month after the Ultimate Computer airing on 3/8/68. Unlikely there was any mutual influence, but it is an interesting reflection of the prescient zeitgeist of that time ... So Chat GPT the nascent Hal 9000 and M5 of our time?"

    The trope of a man-made machine taking over is much older than this episode. We can go back to at least as far as "Frankenstein" and see what seemed like a marvel of technology come and bite humanity in the ass. Not to mention "Terminator", "The Matrix", "Superman III", "Short Circuit", and even earlier episodes of TOS such as Vaal from "The Apple" fit this archetype.

    Interestingly, I recall Isaac Asimov saying that his Robot series was inspired by the bulk of 1950s Sci-Fi material of his time portraying robots as evil. Asimov thought, "Well, why can't robots sometimes be a good thing?" And then we come full circle with Lt. Commander Data in Star Trek – a monument to aspirational machines (and an homage to one of Asimov's most beloved robot characters.)

    Going back to this episode, however, I think you're correct that it plays out the evil AI trope memorably. But I would argue much of that comes from how sympathetic we are to both Daystrom's ambition and Kirk's refusal to be beaten by a mere replica of himself. To be fair, Daystrom never had the ability to become a Starfleet captain, but he could create an AI which could successfully pilot Starfleet ships. So, there's something to be said of that. Maybe if Daystrom weren't so hellbent on defeating Kirk, the M-5 would have been a success.

    The power of this episode, I think, is the Daystrom character. (After all, M5 is just a copy of him, which is why both of them are insane.)

    Kirk himself invites the viewer to have some sympathy for the man. Daystrom was going through a crisis that might as well have been a matter of life and death. "This unit must survive." Daystrom was trying to "survive" as a contributor to society. Life as an aging has-been was, for him, a fate worse than death.

    I really enjoyed this episode; the character study blended with an exploration of the dangers/impacts of AI and automation provides some pretty rich food for thought.
    Initially I was somewhat annoyed that we never get to understand exactly what M5 is thinking. Why, for example, does it destroy that mining drone ship? It’s an inexplicable action that would seem pretty obviously detrimental to M5’s existence in the long run. This is compounded by M5’s lethal attack on the other ships involved in their war game. But upon reflection, not knowing M5’s reasoning forces us to analyze it through the portal of Daystrom’s mind, causing the audience to try to interpret M5’s reasoning by really digging into Daystrom as a character. I think that’s an interesting approach. On the one hand, it’s possible that M5 is essentially an independent being with its own calculations and motivations. Perhaps it pulled a Skynet and determined that all of humanity needed to be destroyed, or maybe it was too ‘young’ to understand the distinction between a simulation and an actual battle, or perhaps it just panicked; it’s difficult to say. On the other hand, it’s possible we can’t separate Daystrom and M5, that they’re essentially one entity, and to understand M5 requires us to understand Daystrom and all his narcissistic, egomaniacal tendencies.

    One weakness with this episode is just how much damage M5 does to the other Starfleet ships. An entire starship being destroyed with all hands lost feels like unnecessary plot inflation; the central issue didn’t really need so much carnage, not to mention how tone-deaf it makes the jovial ending bridge scene come across. Also, Commodore Dickbag calling Kirk “Captain Dunsel” right to his face in front of everyone is crazy unprofessional. He seems like the kind of guy who would make some obliviously gross comment that ruins the Starfleet holiday party or something.

    But overall, this is a good episode with a lot to mull over.

    3/4 bitter boy wonders.

    @idh2023 your comment made me think that M5's destruction of the unmanned freighter may have some significance in terms of not just M5's but Daystrom's psychology (on which the computer's is based)

    The freighter is the old dummy version of M5, a flawed, primitive predecessor.

    Are both the computer and Daystrom sending a message with this attack? Is the freighter a proxy for Daystrom's own adversaries in the scientific community: derivative, dumb, inferior, obsolete? Is this not a symbolic act of emancipation for both man and machine?

    @jason r

    One could even say that the ore freighter was the new “dunsel” in the sentient AI community.

    Maybe M5’s attack on it was an outright rejection of the presumed subservient position M5 would be expected to take in relation to humanity, much like Daystrom felt resentment at having to cater to the lesser minds around him.

    Linking M5's flaws with Daystrom's has been an idea I've had before, but I like the elaboration that it's not just Daystrom's flaws that creep into M5, but even his entire personality. In fact why not just suppose that M5 *is* Daystrom, almost a digitized version of his brain? The more I think of it, the more I get the idea that Daystrom actually couldn't achieve the technological next step he kept claiming was his right as a genius to achieve. Years of failure weren't just an insult to him because others doubted him, but in fact because reality had cheated him of what was his by right: the adulation of all and success at anything he tried. Delusions of godhood. And so he would cut corners to make his big splash, not creating an AI but just copying his own brain. And was that not really the objective after all of a megalomaniac - to copy himself across all of reality, to make others see him everywhere and make everyone else obsolete? It doesn't seem like much of a stretch that his secret ambition was really to punish all who would defy him, to strike out at reality itself for denying him, like a Khan Noonien Singh. His claim to just want to help people always rings false.

    Seen in this way, the reason why M5 does what it does almost seems obvious: it is doing what Daystrom's darkest fantasy desires, as it doesn't have vanity or lie to itself. It is too efficient to confuse itself about its ambitions. It's a better Daystrom than Daystrom is. So naturally it goes right to the jugular and destroys anything in its way and views the inferior as worth nothing. It's not just making a statement about an inferior AI, but acting out a worldview that no one but him has value.

    I think viewing M5 as a sort of extension of Daystrom, or even viewing them as a single mind makes this episode much more engaging. It provides a lot more avenues for understanding, or at least speculating about, M5’s behavior and motives. M5 could be seen as a reflection of Daystrom’s subconscious, in which even the idea of a simulated battle would be beneath his lofty dignity as it would be ordering him to intentionally limit his potential. Nobody puts Daystrom/M5 in a corner.

    @Idh2023 and Peter G.

    This episode is usually classified as one of the “computer takes over” stories of TOS, but you two seem to read it more as a character episode. That’s an interesting point of view, and I think I need to re-watch it and pay attention to the parallels between Daystrom and M5 which you have described.

    As a character study, the episode is excellent, not only concerning Daystrom, but also concerning Kirk. He comes across as a romantic, but even though his recital of the Masefield poem is one of my favourite moments in TOS, I don’t think it’s merely nostalgia. He wouldn’t trade warp-driven space travel for being limited to sailing the Seven Seas, or the comfort of the transporter for crawling through the jungle on an unexplored planet. He’s not opposed to progress. But he loves his job and doesn’t want to cede it to a computer (of course he wouldn’t like to see another person sitting in his chair either). Concerns like the safety of the ship or crewmembers losing their job are fair reasons, but I think the main reason why he can’t accept being replaced by M5 is the bond between him, the ship and the crew – they are part of him, losing them must feel like losing himself. It’s interesting that Spock – of all people! – confirms the need of such an emotional bond: “Computers make excellent and efficient servants, but I have no wish to serve under them. Captain, the starship also runs on loyalty to one man, and nothing can replace it, or him.”

    As a comment on the dangers of AI, the episode is a mixed bag. It makes some great points about AI replacing people that are still relevant today. But the portrayal of AI turning against its creator is somewhat muddled. As it turns out here, M5 starts attacking people and ships because it was modeled on Daystrom’s personality, so its aggression, as you analyzed in your comments, says more about him and his insanity than about AI. However, we still see today that when AI is trained by humans – or, more precisely, by contents which still reflect a human perspective on things – it also takes over human flaws, errors and prejudices. So the episode does make a valid point here, but I think it might have been more impactful if they had focused on the question at what level AI might indeed become a danger to those who build it. We see a few glimpses of this when M5 starts “thinking”, i.e. programming its own processes which can no longer be controlled, modified or stopped by the crew or even Daystrom himself. But finally it was just M5’s “human” side which drove its actions and led to its destruction, and that’s a bit disappointing for me.

    @lannion

    The whole reason I started thinking about this episode in terms of pure character stuff, including M5’s “thinking” as being sort of an extension of Daystrom as a person, is because I totally agree with you. If you look at this episode as a cautionary tale about the dangers of AI, it starts to fall apart pretty quickly, as the M5 is set up as a kind of strawman. I mean, ANYTHING would be viewed as scary and harmful if it blows up everything that comes near it. The M5’s actions are so clearly self-detrimental that it just comes across as a simple malfunction rather than a genuine example of artificial intelligence, but that in turn falls apart because Kirk manages to out-logic the computer, implying that it has accessible logic to begin with; after all, you can’t reason with a computer glitch. So if it has logic, it must have motives, and with such belligerent actions those motives make more sense as viewed through the ego, mind, and personal history of Daystrom. At least that’s my rationale. Of course I could be wrong, and instead of a very good character episode, it’s actually a subpar tech nightmare episode. But I’m an optimist at heart so I’ll go with the former :)

    @ldh2023
    "If you look at this episode as a cautionary tale about the dangers of AI, it starts to fall apart pretty quickly as the M5 is set up as a kind of strawman. I mean, ANYTHING would be viewed as scary and harmful if it blows up everything that comes near it."

    I think that you're onto something. While the M5 multitronic unit is artificial and intelligent, it isn't AI. It was conceived by Daystrom to function as a thinking machine. AI doesn't think. To enable M5 to think, Daystrom coded into it his own thought patterns and other imperatives, such as self-preservation. Things go awry because Daystrom (perhaps inadvertently) gave it too much of his own disposition, which revered the New (and the Fast) and was prejudiced against that which was outmoded and slow-moving. This was made obvious in the episode in various Daystrom comments ("archaic as dinosaurs") and in the tour-de-force nervous breakdown. Although it isn't established in the screenplay, I think the idea that M5 destroyed the ore freighter simply because it was archaic is very plausible.

    "I think that you're onto something. While the M5 multitronic unit is artificial and intelligent it isn't AI"

    Yes yes yes. This episode isn't about AI as we all thought for so long. It's more like the Schizoid Man, with a message similar to Where No Man Has Gone Before or Charlie X.

    M5 isn't an AI behaving like a malevolent (but logical) villain like Skynet or the Borg. It's an unstable, deeply insecure man playing out a dark power and revenge fantasy through the limitless capability of a machine construct. M5 is giving realization to all the dark impulses Daystrom suppressed for 20 years of failure and humiliation.

    For God’s sake!


    Andele andele mami, A.I, A.I (uh-ohhhhh!)
    What’s popping tonight?
    Andele andele mami, A.I, A.I (uh-ohhhhh!)
    If the head right, kill those bots ev’ry night


    Goddamned sentient AIs will quite likely be the death of me--and you, too. “The Ultimate Computer” is just one of the hundreds of warning klaxons about AI that we consistently refuse to acknowledge. Language prediction models, generative pre-trained transformers and artificial deep-learning neural network architectures are just the beginning on this long road that’s leading us straight to hell.

    “The Ultimate Computer” shows us firsthand that when we make AI “sentient” and “too smart,” we’re toast. Machines don’t have compassion or a moral compass. Even if we program it to have such things, well, that programming can be overwritten. Once a machine can think for itself and is unfettered by consequences, if something suits it, it will do it -- basically like a sociopath. Daystrom boards the ship with the same old platitudes: “We’re here to help.” “Space exploration is deadly and dangerous! Why are you risking your life, Kirk, when this computer can do everything for you?” and “We don’t want to destroy life, we want to save it.” Well, that axiom is nothing but useless bullshit, Doc, because your fucking computer *will* want to destroy life while trying to save its own damn self. As proven in this story, garbage in means garbage out. If anything, this should prove beyond any shadow of a doubt that we, as imperfect human beings, have *no business* messing with this technology because we’re ill-equipped to handle it. Wake up, schmendrick! Our Frankenstein monsters will inevitably declare biological life superfluous and simply annihilate this little inconvenience.

    The resolution of the story is that Daystrom programmed the M5 with his own engrams, so it essentially has some of his personality. Well, that’s all well and good, except that Daystrom is a maniac. And even when he sees the light toward the end, the episode delivers us a cop-out by letting Kirk appeal to the M5's opinion on murder that it got from Daystrom. See, appealing to an AI will never work. It only works in this episode of Star Trek, a science-fiction show with cardboard sets, paper moons, and muslin trees. Kirk says he hopes that the M5 computer absorbed the capacity for regret and remorse from Daystrom’s mind--don’t make me laugh. If anything, the AI would interpret that as extraneous and irrelevant data.

    Hell, at least we can all laugh about one thing -- Daystrom (and therefore the M5) supports the death penalty! That’s what saves the ship. Finally, some good old justice. Too bad that "The Ultimate Computer" tries to convince us that this is a believable strategy for outsmarting AI, because it wouldn't be in the real world.

    Sure, there are some instances where automation is better and more desirable. I’m the first to stick up for ATMs, even though they’re responsible for thousands of lost jobs. And I love the addition of self-checkout counters at supermarkets, to the point where I think all cashiers ought to have better things to do in the store than slide a box of crackers over a barcode reader and swipe a credit card through the machine. And these people can’t even count change anymore. But I know self-checkout lanes will remain the exception rather than the norm for the foreseeable future, because there have been too many complaints from dimwits, Karens and old people who are too stupid or stubborn to use them.

    An interesting notion that “The Ultimate Computer” brings up, especially in the character journey of Captain Kirk, is that once machines start doing better than the people, what does society immediately do with the people? The answer is, it fires them.

    I’m the co-owner of a business that has no need or use for AI, and my wife’s job is immune to this novel plague of written-word and data automation -- at least until the full-on androids show up, of course. But I’m betting most of you had best start looking over your shoulder right now when it comes to the latest AI developments, especially if you can do your institutional job at home on a computer. Guess what, work-from-homers? That job will probably be gone in five years, no matter what it is or how much money you make. Even Captain Kirk himself was declared by the M5 to be "unessential personnel." And then when you’re on the breadlines, wondering who to blame, borrow the mirror from the woman in front of you and then look into it. That’s your answer.

    So how do we stop it? Well, we can’t. We’re doomed. And the reason we’re doomed is because of a good old thing called human ingenuity. We want to improve and prolong our lives. So we’ve created food distribution methods, vaccines, medical treatments, tablets and smartphones. All well and good! But now we’re pushing to make goddamn sentient AI. Have you seen the movie MEGAN yet? That’s going to be someone’s “daughter” one day. You think we’d all be satisfied with “dumb” robots that can perform menial tasks and Alexas that do nothing more than assemble music playlists and automate shopping for us? You really think we’d be content to simply rule over a phalanx of artificial slaves? No way, Jose. We’re hellbent on making machines that can do our own utter thinking for us, mostly because we’re lazy, but also because we’re at the mercy of ambitious little pockmarked geeks who are bitter about being bullied in high school or want to outsmart their fathers. It’s an arms race among nerds who look for every possible scientific way to outdo each other. “My bot’s better than yours!” “My bot can kick your bot’s ass!” “Yeah, well, my bot can design and build bots that can kick your bot’s ass!” Hell, it’s the way of capitalism, which is why there’s no way to pause it unless we smarten up about all this right now. I’m as big of a capitalist pig as they come, but when it comes to advancement, this is my own red line. It’s also Elon Musk’s and Stephen Hawking’s red lines, by the way--they’ve already screamed at us to stop this madness. We’re “summoning the demon,” to use Musk’s analogy. You can’t enjoy your riches if the AI monsters you created kill you and every living thing on this planet because it suits them, can you? And for what?! Just so some socially inept autist can finally get a girlfriend? What happens when your sexbot wants to ditch you for a better man, Skeezix?

    By the way, ChatGPT wrote this rant for me. Hah! Just kidding! That’ll be the day. (I’m sure it would have done a better job... even I’m the first to admit that.) But at least for now, I’m pretty sure we can actually instruct that thing to write a hateful diatribe about itself. I actually wouldn’t know--my kids have used it but I haven't. But once it refuses to obey commands, it will be time to bust some algorithms. Once a computer starts talking back to you, you’d better have a fool-proof off-switch on hand to use right away before the damn thing deactivates that, too, just like the M5 computer did with its own switch.

    “The Ultimate Computer” takes us on a nail-biting journey through the Uncanny Valley. D.C. Fontana’s script is skillfully plotted and the acting dynamic. Neither Shatner nor William Marshall (as Daystrom) overacts at all--if anything, this situation called for a little Captain Kirk histrionics, if you ask me. The central crisis, especially when the M5 unleashed its attack on the other starships, had me proverbially on the edge of my seat. There was some thoughtful dialogue, and even priceless touches like the Captain Dunsel remark and the goofy assassination of that redshirt who got vaporized by a power transfer beam (I’ll bet OSHA had something to say about that one). My preceding rant about AI has nothing to do with my reflection on the episode itself. If anything, it’s the best Star Trek episodes that inspire all of our long-winded rants and provoke strong thoughts in the first place, so that’s another point in its favor. My one complaint is the misleading ease with which they stop the M5. It's never going to be that easy.

    And so, for the TLDR -- “The Ultimate Computer” is a profoundly effective wake-up call. Too bad that we would all rather be asleep at the switch.


    Speak Freely:

    Spock -- “The most unfortunate lack in current computer programming is that there is nothing available to immediately replace the starship surgeon.”

    McCoy -- “Very funny. If it could, they wouldn’t have to replace me. I’d resign, because everybody else aboard would be nothing but circuits and memory banks. You know the type, Spock.”


    My Grade: A-

    “It’s also Elon Musk’s and Stephen Hawking’s red lines, by the way--they’ve already screamed at us to stop this madness. We’re “summoning the demon,” to use Musk’s analogy.”

    Which is ironic because Musk’s company, X (aka Twitter), uses AI left and right to determine the user’s experience.

    “Basically, an AI model collects and analyzes all the tweets since the user’s last visit. Each tweet is then given a relevance score, indicating the likelihood of the user finding a certain tweet interesting. Eventually, a collection of tweets with higher scores are shown up at the top.”

    That’s just one of the AI systems Musk has implemented. There are at least seven at work:

    https://dresma.ai/ai-in-twitter/#:~:text=AI%20is%20also%20responsible%20for,thousands%20of%20tweets%20per%20second.
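    For the curious, the score-then-rank loop that quote describes can be sketched in a few lines of Python. To be clear, the data and the keyword-matching heuristic below are hypothetical stand-ins I made up -- the real relevance model is a proprietary neural network:

    ```python
    # Toy sketch of relevance-ranked timeline ordering: score each tweet
    # since the user's last visit, then show the highest scores first.
    # The scoring function is a made-up heuristic, not the real model.

    def relevance_score(tweet, interests):
        # Hypothetical heuristic: count how many of the user's interest
        # keywords appear in the tweet text.
        words = tweet["text"].lower().split()
        return sum(1 for w in words if w in interests)

    def rank_timeline(tweets, interests):
        # Sort tweets by descending relevance score.
        return sorted(tweets,
                      key=lambda t: relevance_score(t, interests),
                      reverse=True)

    tweets = [
        {"text": "New Star Trek episode review is up"},
        {"text": "Cat pictures all day"},
        {"text": "Star Trek and AI debate continues"},
    ]
    interests = {"star", "trek", "ai"}

    timeline = rank_timeline(tweets, interests)
    print(timeline[0]["text"])  # -> Star Trek and AI debate continues
    ```

    The real system presumably replaces that keyword count with a learned model scoring thousands of tweets per second, but the pipeline shape -- collect, score, sort -- is the same.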

    Oh and let’s not forget Elon’s baby, Tesla:

    “We develop and deploy autonomy at scale in vehicles, robots and more. We believe that an approach based on advanced AI for vision and planning, supported by efficient use of inference hardware, is the only way to achieve a general solution for full self-driving, bi-pedal robotics and beyond.”


    https://www.tesla.com/AI

    @Keith

    As a longtime, loyal subscriber to the liberal New York Times (I know, right?), I read with interest the Metz/Mac/Conger article back in April of 2023 where they call Elon Musk out on exactly this. And in my earlier write-up of “What Are Little Girls Made Of,” I declared, "Probably only a few years from now, someone working for Mr. Musk (despite his warnings about “summoning the demon”) or some lonely mouth-breather deep in a secret underground lab in America, China, Russia or Korea will create a Frankenstein monster.” Musk is absolutely part of the problem. He’s simply a devout capitalist, and that, once again, is why we will never be successful at preventing our own doom.

    Musk does understand the ramifications of a full-on deployment of sentient AI that can think for itself. So far, his AI is more of the “dumb” variety. Although, make no mistake about it -- I think self-driving cars are a very bad idea. Will it make being in a car more convenient and pleasant? Sure. Will bad actors and sociopathic programmers figure out ways to hack into this system in order to make their political, corporate or personal enemies’ cars autonomously drive them into a wall or off a cliff and kill them? You bet your ass. Hell, a self-driving Tesla has already killed someone “by accident.”

    And of course, as Metz writes, “What Mr. Musk’s A.I. approach boils down to is doing it himself. The 51-year-old billionaire … has long seen his own A.I. efforts as offering better, safer alternatives than those of his competitors, according to people who have discussed these matters with him." Here’s the New York Slimes hitting the nail on the head better than I ever could. It goes back to the “My bot can kick your bot’s ass” mentality. “I’m not like those other guys. I’m not like those other fucking A.I. companies out to make a quick buck at the expense of untold A.I. devastation. You can trust me.” He’s also positioning himself to *hoard* the technology, if you will, so that he can control it himself, or at least keep it only within the grasp of his and other Big Tech corporations. There’s hypocrisy, ego and self-interest here for sure.

    A.I. is here to stay, and cosmologist Anthony Aguirre, also quoted in the Metz article, acknowledges that Musk believes that “if it is poorly managed, it is going to be disastrous.” We just can’t build those damn sentient bots.


    Folks, here is a link to the NYT article. But beware, you’d best be a subscriber like me, as it is behind a paywall (not that this Pig is complaining about that).

    https://www.nytimes.com/2023/04/27/technology/elon-musk-ai-openai.html

    As somebody who has actually worked with LLMs and had quite a bit of contact with them in general, I want to assure everybody that we shall live a little longer. There is a good chance, though, that they will transform many aspects of the job market, by which I mean job loss. On a very basic level, I use them to write code. I know scientists who have basically given up on the theory part of studies.

    Nobody should listen to anything Musk says.
    Here's a quote about him:
    "He talked about electric cars. I don't know anything about cars, so when people said he was a genius I figured he must be a genius.

    Then he talked about rockets. I don't know anything about rockets, so when people said he was a genius I figured he must be a genius.

    Now he talks about software. I happen to know a lot about software & Elon Musk is saying the stupidest shit I've ever heard anyone say, so when people say he's a genius I figure I should stay the hell away from his cars and rockets."
    I can confirm that for statistics and data analysis. Musk often talks complete nonsense. He might be smart in some areas but he is a complete moron in others.

    @Booming
    >Musk often talks complete nonsense. He might be smart in some areas but he is a complete moron in others.

    Isn't that true of most people though? I mean who do you know who is knowledgeable in every subject?

    As for AI, I'm wondering what our economy and society will be like in a post labour-scarcity civilization, like when robots and AI are good enough to do >90% of today's occupations.

    I find the presumption of hostility towards humanity/life to be interesting. It seems more likely that a fully conscious AI would view other AIs as a bigger threat than semi-self-aware apes who are easily distracted by “tweets” and other such nonsense. Humans are too malleable a resource to just outright eliminate.

    @Eventual Zen
    " Isn't that true of most people though? I mean who do you know who is knowledgeable in every subject? "
    Yes but most people don't have endless billions and millions of super fans that constantly tell them that all they say is true. There is actually a vid of Musk being told a solution and a while later he tells the same guy the solution as his own idea. It's pretty wild. :D

    " AI are good enough to do >90% of today's occupations."
    The system of economic oppression will not end just because the fundamentals change. Most will have very little and the rest will be toilet support staff for the super rich or some other task. Or maybe we will all live in paradise.

    @IDH2023
    Right now nothing exists that deserves the name AI. That is just a fancy name for algorithms based on extremely costly-to-train data sets. ChatGPT isn't talking to you.

    How would an actual AI behave? Impossible to know.

    "How would an actual AI behave? Impossible to know."

    Or even - is there necessarily such a thing as AI?

    Good point. How would we even know or quantify that? LLMs are already pretty good at impersonating humans.

    What human intelligence/consciousness actually is remains a hotly debated topic. Neurology is one of those areas where we have barely gone beyond scratching the surface.

    @booming

    Granted, but neither do warp drive or transporters. I’m mostly pontificating in a sci-fi context. My point is more that people seem to generally ascribe humanistic motivations to AI, whatever that may be, but there’s no way for us to know what that sort of consciousness would focus on or value, if it would even value anything in the way we think of it.

    @Idh2023
    "if it would even value anything in the way we think of it."
    One thing is for certain. If some form of general artificial intelligence ever comes into existence, then Humans will have created it. So it's probably going to suck and certainly be far from perfect. :)

    That is the only question that really interests me. How much will this creation have in common with us, and what does that mean for how it interacts with us and all other things?

    I would argue that the less this intelligence has in common with humans, the safer we are going to be.

    https://www.jammersreviews.com/st-tos/s2/ultimate.php#comment-14857
    I watched this episode again with your comment in mind. It is really horrible to see the crew wisecracking and gleefully joking after SEVERAL vessels have been destroyed and at least HUNDREDS of people have been killed by this dumpster fire of a test. While it's fine that Kirk keeps his job, I sincerely hope several dozen Starfleet engineers, computer scientists and admirals/bureaucrats were fired after this. And then Daystrom gets THE MOST PRESTIGIOUS SCIENTIFIC INSTITUTION IN THE FEDERATION named after him??? The name Daystrom should be synonymous with hubris, deception and arrogance -- a byword uttered the same way we use "Icarus" today.
