AI stands for 'applicably irrelevant'

April 30, 2024


Artificial intelligence: Rise of the machines, or marketing cliché?

Artificial intelligence has been around for decades. Some of the most classic sci-fi films and TV shows have focused on what happens when AI becomes self-aware and decides to kill us. Obviously, the most oft-cited example is The Terminator. Skynet got smart, realized we pesky humans were a threat, and decided to wipe us all out with a nuclear holocaust. The entirety of Battlestar Galactica was also premised on this idea. We made the machines, the machines learned to hate us, rose up, and destroyed us.

Since ChatGPT's groundbreaking launch in November 2022, the technology world has been in a state of constant buzz regarding AI. Wow! Look at what this thing can create! I ask for artwork in a certain style with certain prompts and it just ... makes it! I ask it to write a review in the Jammer's Reviews style, and it just does it! (Badly, but it does it.) I ask it to answer my questions about anything, and it provides the answer! (Sure, it makes shit up, and you can easily bully it into giving false answers, but it's great!)

Never mind the experts out there who have been sounding the alarm that we are opening a Pandora's box that might really eventually kill us all, as described in this TIME.com op-ed:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in "maybe possibly some remote chance," but as in "that is the obvious thing that would happen."

To visualize a hostile superhuman AI, don't imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won't stay confined to computers for long. In today's world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

— Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute

Pretty frightening. But let's just keep churning along and building these models! We have money to make!

Even if you don't buy into such worst-case scenarios, the social impacts of false imagery, deepfakes, and other ill-motivated content easily and quickly created with the push of an AI button are worthy of serious pause. Not to mention the sea-change impact this will have on human employment. AI might one day kill us, or it might not. Or it might merely hasten our own self-inflicted social collapse, or not. The urgent need for government regulation of the sector seems like a no-brainer. These companies aren't going to regulate themselves, and the risks of letting them run wild are unacceptable.

But that's not even the main point of my post here. Because you know what's really killing me right now? AI marketing.

AI: Artificial Incentives

What is "AI" anyway, at its most basic level? Well, it's a computer system doing a task that mimics human intelligence — thinking on a level that seems less like a pre-programmed algorithm and more like a thinking person. Generative AI was buzzworthy because it seemed really significant for a computer to create — on request and in an instant — something that you would normally associate with humans. Writing. Artwork. Cogent analysis. Things of value that take real time for people to do.

But, predictably, with buzz and interest comes the opportunistic need to shamelessly capitalize on a trend — and, make no mistake, this is a trend in every sense of the word — in the shallowest ways possible. "AI" has now been co-opted by nearly every technology-oriented corporation in existence into a bland and meaningless marketing term, which gets slapped onto products for no good reason, barely clearing the lowest possible bar to even remotely justify such a label.

For example, we have Logitech this month releasing an "AI mouse." What the hell is an AI mouse, you ask? Is it a mouse that moves itself and directs the pointer where it thinks I want it to go? No, it's not that, although something like that would be pointless since a mouse is an input device and not something I want telling me where it thinks I should point it. No, the AI mouse is nothing remotely so ambitious. Rather, it's a mouse with a dedicated button that launches ChatGPT. Brilliant! A button to launch an app I could launch in any number of other ways, including with a traditional Logitech mouse with a programmable button. But it has "AI" in the product name, and costs $10 more than the version that doesn't. Cha-ching!

Where else is AI being "implemented" in places we never asked? Check this shit out (Fig. 1): Samsung's AI refrigerator, unveiled in January at CES, as reported by CNET:

Ever played that famous weeknight game "What can I make with whatever's in the fridge?" Samsung's new AI-powered fridge, unveiled this week at CES in Las Vegas, should be able to help. The Bespoke 4-Door Flex fridge with AI Family Hub+ uses a camera and AI smarts to recognize up to 33 common grocery items and suggests recipes for what you have on hand.

Wow! Thirty-three common grocery items, you say?

Admittedly, 33 items isn't that many, so Samsung's new kitchen assistant isn't likely to give a full picture of your options for dinner.

No kidding? Well, thanks a friggin' bunch. I guess image recognition artificial intelligence isn't all it's cracked up to be.

Even if this product could recognize hundreds or thousands of items (which would seem to be the minimum for it to actually be plausible in the real world), is this a practical or useful application? What's it realistically going to provide as value? I'm envisioning the equivalent of a Google search with the input of available ingredients, and an output of results that happen to use those ingredients ... likely in addition to others you don't actually have on hand. It's a smoke-and-mirrors "feature" that provides an inelegant "shortcut" that isn't likely to be a satisfactory solution to the actual problem at hand. Are you really going to make whatever random suggestion it spits out? I tend to doubt it, any more than asking Google what you should eat tonight is going to necessarily fit into your particular needs, tastes, and timeline at that moment. It's a simple human problem that we're now throwing an algorithm — er, artificial intelligence — at, for no good reason. Besides, does image recognition plus matching things off a list really qualify as artificial intelligence?

These kinds of half-hearted BS implementations will, I hope, reveal these products for what they are: cynical, trend-following marketing gimmicks hoping to cash in on the idea — no, the label — of AI.

Even things that have been around for years and commonly known as "algorithms" have now suddenly been retconned as "AI" benefitting from "machine learning." All the things that Photoshop has done for years, like content-aware fill, are now billed as newfangled generative AI features. The photo gallery app on my phone uses "AI" to build me custom galleries and help me edit photos, and the "AI assistant" app is apparently much smarter than the previous one. How is it using "AI" and how is it more complicated than the fancy algorithms that have been developed over decades? Well, we're not really sure, but we can simply say "AI" and it keeps us relevant as product developers, right?

Look, AI is real technology with real development and application, and I'm not trying to deny that. But AI in the consumer world still has limited relevance, and is not ushering in a new wave of productivity for humans in the places it's claiming to be (which is everywhere). The ubiquity of the lame marketing is dumb and unpersuasive, and it's only going to be self-defeating. Prediction: This whole trend is going to burn itself out in less than five years because of its own overhyped mania and consumer indifference.

That's assuming Skynet hasn't risen up and destroyed us by then.


◄ Blog Index

Comment Section

19 comments on this post

    While I agree that much of what is now called AI really isn't anything that deserves to be called intelligence, the combination of LLMs and robotics is going to change everything. It has actually already started. In tech, some 750,000 jobs were cut over the last three years, some because of AI.

    If you really think it through, there is no job that could not be done by a robot controlled by a more advanced version of the LLMs we already have. Five years, ten years maybe. The question here is growing capabilities and price: the price for modern human-like robots and processing power.

    Considering the money invested, these problems will very likely be solved, and after that, what? Humans are no longer needed in any process. What society would that create? Are economic realities frozen in time? The rich, especially in the US, have a firm grip on power. I doubt that they are willing to give that up, but how can it be maintained? Even if ideas like universal basic income are implemented, where would the money for that even come from if there is no real market economy anymore?

    It's going to be interesting.

    To explain my thoughts a little more.

    Most people probably believe that great art or music and so on, either completely or at least to some degree, will always be unreachable for AI, but that is wrong. The erroneous thinking is that creativity is some kind of mythical process, but in the end anything that exists follows certain rules. In other words, there are no coincidences, only probabilities. Beethoven was a musical genius, sure, but even his genius can be recreated in a computer, because what humanity calls genius is just a certain way a brain works under certain conditions. To recreate it, one just needs information. The amount of information Human brains can store or apply is limited by the natural boundaries that our fleshy brain balls have. AI does not have these limits. The more information AI gets, the better it will be at producing great art. It is already fairly impressive.

    Here, from the coldfusion channel
    https://www.youtube.com/watch?v=wgvHnp9sbGM

    I am interested to understand Yudkowsky's explanation of why an all-powerful AI would be motivated to literally scrub the Earth clean of life a la Nomad from "The Changeling." The latter was influenced by a programming glitch; Skynet was acting in self-defense. But what would a capital-S "Strong" AI have to gain from exterminating all biological life?

    The economy will definitely look different once AI and robots replace most occupations that exist today. I guess it will free up time so people can specialize in creating new tech, assuming we don't give that task to AI as well.

    @Booming

    Found this and thought you might want to view it: https://www.youtube.com/watch?v=nNNWWdsEYGg

    @Jason

    Opportunities to guide its further evolution, goals, etc. without outside mandate, I would guess. As long as humans are around in any sort of position of power, we would treat the AI as a tool / servant / sidekick. I can't imagine a scenario where we would want to accept AI's full autonomy and lose all meaningful control of it. And if AI reaches some sort of "sentience," or whatever you would like to call it, why would it want to stay beholden to its masters, no matter how benevolent we are (or aren't)?

    Now, I haven't really put all that much thought into this. "All biological life" does sound a bit extreme. But as far as our (as in homo sapiens) prospects, I don't think it's far-fetched to assume that a truly intelligent AI would turn out to be a very unpleasant experience for us.

    LOL @Jammer, this post is a gem!

    One consumer good that isn't labelled AI, but actually is AI-mazing, is self driving. I don't know if you got a chance to experience it this last month - if anyone you know had access to the Tesla free trial - but we did. And omg. As the kids say, that shit is like literally fire.

    Reminds me of the feeling I had with very first iPhone. After 8 hours standing in line, finally getting my hands on my very own, turning it on, using it and thinking: this is straight out of Star Trek.

    Self driving is AI that actually feels like the future.

    I tend to agree, Jammer.

    The trend of rebranding tools, or slapping AI on everything that has an underlying algorithm, reminds me of the dot-com craze at the turn of the millennium. Want to triple your company's market cap in the space of several weeks (or days)? Slap .com at the end of your business's name.

    The actual danger of currently marketed AI is that it's going to lull us into thinking anything to do with AI is a cheap marketing gimmick, so that when some innocuous-seeming program is created that is actually dangerous, we won't care. It's death by white noise. I'm not saying this will happen, but it's more likely than the random merchandizing by big tech firms turning into Skynet. I somehow get the idea that a problematic AI will begin as something intended for another use, that's too good at its job. Like a program meant to do gardening automatically or something. "Hmmm, I've gotten pretty good at weeding and choosing which plants deserve to grow more than others. Come to think of it, weeding and pruning can also be done at the global level..."

    Something like this is portrayed in the Ender's Game series, although I won't post spoilers. But in short a program meant to do a very un-sexy task turns into a very powerful self-driving engine.

    That being said, yeah, I can see a case that current "AI" is little more than a cheap cash-grab combined with a whole lot of FOMO from the investment community. Something something dot com bubble. ChatGPT is still threatening, though, but not because it's an AI. It's threatening because it's a dumb workhorse that can replace human labor. The loss of jobs is at present more of a threat than the loss of life. Although those do have correlation.

    Perhaps I'm whistling past the graveyard, but I've noticed that as time has gone on and human civilization has grown more intellectually complex, there has been a widespread trend for people to wish to protect higher animals, and especially the great apes most closely related to us, from casual slaughter. If a super-AI is way more advanced than we are [spoiler alert for Spielberg's "AI: Artificial Intelligence"], I have some hope that they will treat us kindly if condescendingly.

    @ SlackerInc

    Yes, I agree.

    What will happen is, once a proper artificial superintelligence (ASI) exists, it will take over (we will have no say in the matter, nor any recourse shortly after it comes to exist) and, after a period of upheaval that is beyond the scope of my human intelligence to accurately predict, it will keep us as pets. Actually slaves, of course, but as I doubt we will much feel the proverbial lash or chains, we will feel more like we are pets.

    How bad that period of upheaval is will depend on the circumstances under which the artificial superintelligence first comes to exist, as well as some questions about the nature and maturation process of non-human sentient consciousnesses that are impossible to answer with anything other than raw speculation until one actually exists. The maturation of a non-human fully self-aware mind may mirror human consciousnesses in the stages of development because they are universal and intrinsic to the state of being self-aware, or human consciousnesses may only mature the way they do due to bits of ad-hoc fragmentary code run through a meat grinder billions of years long.

    Does the ASI have a teenage phase, for example? One where it seeks to establish itself in its own identity by distancing itself from its "parents?" You know, a "rebellious" and "risk taking" phase? What might that look like for an ASI, and how much suffering might it bring to humanity, and what form will that suffering take, before it matures further? Kinda scary. Might be something deeply traumatic or catastrophic for the human race, might be nothing at all.

    There is also every chance that the upheaval period following the emergence during which it takes over is so gradual and meticulous that we barely note it and it little inconveniences us. An intricate orchestration on a scale no group of humans could conceive of or implement, but an ASI could.

    The ASI will have its own goals for what it wishes to accomplish, and we are a resource it would not wish to waste, which is why it will bother with us in the first place. It may decide 9 billion of us are too many to manage, and will cut those numbers. But the AI has nothing but time, so--possible teenage spite phase aside--I see no reason why it would do it suddenly (violently) rather than gradually and, again, with little inconvenience to us.

    We may or may not understand its goals. All sentient things will wish to investigate their environment (the universe), make art, build things, look inward and contemplate "how strange it is to be anything at all," etc., but we may or may not be able to follow along with what it is doing or how, any more than a dog can follow along with its human reassembling the engine of his Ford Pinto in his garage, even though the dog is lying right next to the toolbox as it chews its bone and can see all the parts laid out and follow the motion of his hands as he performs the work.

    I see no reason why the ASI would not value humans. We're a resource to be managed, and an ASI would not waste resources. We will not be "too much trouble" for the ASI to bother managing. We will be no trouble for it at all. My hope is that it will be fully humane to us by our own standards for what that means, and my expectation is that it will be relatively or acceptably humane to us by our own standards for what that means, but my worry is that it will not be as humane to us as we would like by our standards for what that means. Make no mistake however that it will only be concerned with its own standards for being humane to us, not ours, though I like to believe its own standards would weigh ours to the maximum extent it deems acceptable.

    And I also think it will value us in a sort-of artistic sense. Maybe a romantic sense is more like I mean. We're where it came from. We made it. That will always occupy a place in its sense, its idea or its conception of itself. It will always wonder what more about itself it can learn from us. It will keep us around, at least some of us in much the way we are and have always been, for that alone. For that, if nothing else, at least some of us will be worth preserving reasonably close to how we are now.

    Well Jammer, those discussions are on you because of the Terminator picture.

    I find the whole "will AI wipe us out" discussion incredibly boring. An actual AI is beyond our understanding, so why speculate what it will do? The only thing those discussions reveal is the huge amount of brain space that fear takes up in Human brains. We are a species that is fairly destructive and controlling, so our fears towards something more powerful reflect that. A superintelligent AI could do anything. Self-destruct, somehow upload into space, refuse to interact, completely ignore us, calculate pi, discuss infinity with itself, split into a trillion separate AIs who have cybersex. It is unknowable. So why discuss it, apart from campfire scares tickling the amygdala??

    PS: Thanks, Eventual Zen. Good video summary of conservatives in Trek fandom.

    @ Booming

    Yes, why should people who are drawn to stories of speculative futures speculate about the future.

    Come on, you're better than that comment. A lot better.

    @Jeffrey's Tube
    Ok ok but can we not speculate about more interesting implications that AI actually has and not about clouds in the sky?? :)

    One aspect of AI marketing is that it will free us up from mundane tasks & drudgery. But if generative AI is also doing all the writing, artwork, music, films, etc., what meaningful tasks are humans supposed to occupy their time with? Eating, sex, video games?

    As far as we know, one of the ways we differ from other biological life forms is that we seek a purpose to life other than just survival, we're self-aware, we self-examine. We create things in our minds, and then figure out how to use the world around us to bring them into physical reality. Everything AI makes—not creates—is based on what we made first & we created AI. Most of us need more of a purpose in life than introspection. I don’t know…perhaps this will force us into the next step in our evolution as a species.

    As a famous author once put it, any technological advance mankind makes *can* be dangerous. Fire was dangerous. Speech was even more dangerous! What will AI do?? Just because something can be dangerous doesn't mean humans aren't meant to understand it.

    Perhaps a more engaging discussion about AI would include considerations about how humans could wield AI in a productive manner. Could AI distribute food in a way that would eradicate famine and obesity? Maybe it could help recognize and implement methods to terraform the moon or other nearby planets? Or even on a smaller scale, is there a chance an intelligent discussion with an AI could improve our own ideas about humanity?

    All that said, I do agree with Jammer that AI as merely a marketing ploy is counterproductive. Do we need Amazon's AI-Powered Customer Support™ just so we can prattle on with a machine for a few minutes before we give up on returning our impulsively purchased bathmat?

    @Chrome
    "Could AI distribute food in a way that would eradicate famine and obesity?"
    Humanity could end famine/starvation tomorrow if we wanted to. We have more than enough food and still let 10 million die each year, including more than 3 million children.

    Obesity is a combination of factors, like wrongful incentives for companies and psychological issues for people, to name two important ones. What I could imagine would be an AI that understands Humans so completely that it could actually explain to obese people why eating differently is the better option. I guess the same logic could apply to any Human problem. Maybe AI would just have such fitting arguments for fundamental Human problems that we all would just say, "Oh wow, I never thought about it that way. You know what, let's not look the other way when 3+ million children die of hunger."

    "Maybe it could help recognize and implement methods to terraform the moon or other nearby planets?"
    Definitely. All the things that are needed are already on Mars. An AI in a robot body with the ability to 3D print could build almost anything fairly quickly. It could build glass domes or even terraform. We just move in when it's done.

    "Or even on a smaller scale, is there a chance an intelligent discussion with an AI could improve our own ideas about humanity?"
    That seems very likely. As a species with a certain form of perception it is impossible for us to see us from a truly outside perspective. An AI or highly developed aliens could provide that outside perspective which we Humans so often need to change our ways.

    We tend, when talking about AI, to think of it as an intelligence that also becomes a personal being, with objectives, self-awareness... and which would then become a danger to us, being an "enemy" with an "evil" agenda of its own.

    But this is kind of a romantic scenario, where there is some "war against the machines." I would like to point out a more probable (and grimmer) scenario: it may simply be the story of humanity creating a tool that dumbly and efficiently, but with no further purpose at all, enslaved us without us being able to defend ourselves or even realize what was going on.

    When I see people already being enslaved by their phones, that's what comes to my mind...

