Comment Stream

Jamie Mann
Fri, Apr 3, 2020, 3:28pm (UTC -5)
Re: VOY S1: The Cloud

I'm not sure when, but at some point I came up with a simple way to score Voyager episodes.

It may well have been this episode.

Holodecks? Check.
Annoying historical setting and deliberately cliched characters within the Holodeck? Check.
Bonus implied use of holodeck characters for sexual activities? Bonus check!
Implausible technological issues? Check.
Blatant and deliberate misunderstanding of astrophysics? Check.
Cliched Native American pseudo-mysticism as an alternative to even a holographic counsellor? Check.
And finally: Neelix? Check.

Overall, I make that 7 points deducted. And there's little or nothing to balance them out.
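Just for fun, the checklist above works out as a tiny bit of Python. The flag names are my own invention, not anything from the episode; the rule is simply one point deducted per cliché tripped:

```python
# A light-hearted sketch of the scoring system described above.
# Every cliché an episode trips deducts one point from a baseline of zero.
# Flag names here are my own labels for the checklist items.

THE_CLOUD = {
    "holodeck": True,
    "annoying_historical_holodeck_setting": True,
    "implied_holodeck_hanky_panky": True,   # the "bonus check"
    "implausible_tech_issues": True,
    "mangled_astrophysics": True,
    "pseudo_mysticism": True,
    "neelix": True,
}

def score(episode_flags):
    """Return the episode's score: minus one point per tripped cliché."""
    return -sum(1 for tripped in episode_flags.values() if tripped)

print(score(THE_CLOUD))  # -7, matching the tally above
```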

Were the writers really this hard up for ideas?

Why do we have a ship that's desperately low on resources, but still has enough power to fuel the Holodeck (as it's a "different" kind of power)?

Why are we subjected to the hologram of a French dive bar featuring characters which could only be more cliched if they were waving a French flag while wearing a beret and showing off the latest style in garlic-bulb necklaces?

Why does Voyager (and to be grudgingly fair, DS9 as well) insist on making nebulas dense clouds of gas? In the real world, a nebula might be a gigantic cloud of gas, but it's still lower density than the best vacuum that can be formed on Earth.

As such, they're literally invisible when viewed up close. And if they were any denser, Voyager would tear itself apart when trying to pass through them at any appreciable fraction of the speed of light, regardless of how good its particle shielding is. Because as portrayed in Voyager, a nebula features practically atmospheric pressure levels!
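A quick order-of-magnitude sanity check of that density claim. These figures are rough assumptions on my part (sources vary by orders of magnitude), but they make the point:

```python
# Back-of-the-envelope comparison of particle densities, using rough
# order-of-magnitude figures (my assumptions, not precise measurements):
#   sea-level air:          ~2.5e19 molecules per cm^3
#   good lab vacuum (UHV):  ~1e6 molecules per cm^3
#   diffuse nebula:         ~1e2 particles per cm^3

AIR     = 2.5e19  # molecules / cm^3
LAB_UHV = 1e6     # molecules / cm^3
NEBULA  = 1e2     # particles / cm^3

print(f"Lab vacuum vs nebula: ~{LAB_UHV / NEBULA:.0e}x denser")
print(f"Air vs nebula:        ~{AIR / NEBULA:.0e}x denser")
```

On those numbers, even a good laboratory vacuum is thousands of times denser than a diffuse nebula, and ordinary air is denser by seventeen orders of magnitude; nothing remotely like the fog banks the show depicts.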

Then there's Chakotay's spirit animal mumbo jumbo. Frankly, the concept as shown in this episode owes more to new-age Californian mysticism (by way of Victorian spiritualism) than any actual Native American tradition.

And yeah. Neelix. The court jester, settling into his role as a secondary character who provides light relief. So much for a "breakout" character!

Equally, if Voyager has all this power spare for the holodeck, why not spin up a virtual counsellor in much the same way as the good Doctor? People such as Freud and Leonardo da Vinci are in Voyager's databanks, so why not have the system generate a 24th century therapist? If nothing else, it could have been a perfect way to pull in Deanna Troi as a recurring cameo, and given the series an opportunity to explore the dynamics of how two separate holographic characters could evolve over the course of the series.

Still, Voyager was rarely anything other than the king of missed opportunities...
Jamie Mann
Fri, Apr 3, 2020, 2:32pm (UTC -5)
Re: VOY S1: Phage

At last, it's time for a new alien species! What wonders will we behold?

Sadly, there's not much to celebrate.

The new aliens are a cross between Frankenstein and his monster: aliens suffering from an incurable degenerative disease, which they have addressed by stealing body parts from other species with their advanced medical technology.

Sorry. What?

This species has medical technology which is superior to the Federation's. In fact, taken in combination with their holographic and shielding technology, they're generally more technologically advanced than the Federation.

So how are they using this technology? They roam the galaxy, looking for sentient beings they can butcher.

Instead of, say, implementing cloning technology. As per the TNG episode Up The Long Ladder, Humanity had access to reliable cloning technology at least 300 years ago (i.e. pre-Federation), so why is this species not using their radically more advanced technology to produce cloned body parts?

Even if they can't use their own DNA due to the plague, they could trade for DNA from other species, and the aforementioned Mariposa colony managed to last nearly 300 years without any infusions of new DNA.

Alternatively, they could use non-sentient creatures. Or use their advanced technology to replace affected body parts with cybernetic alternatives. Or...

Basically, there are lots of options for this species to deal with their situation /without/ roaming the quadrant as grave robbers and murderers.

They're monsters, purely for the sake of being monsters. Cheers, writers!

Beyond this, the rest of the episode is pretty weak. Neelix sadly doesn't die, despite having his lungs ripped out. The sub-plot about Dereth's regrets rings very hollow, when you consider how he was responsible for Neelix's sudden organ loss - and just left him to die where he fell after said extraction.

(And we never really get an explanation as to why the Vidiians were sitting inside a camouflaged cave on an empty planet; the only potential explanation I can think of is that the writers were trying to go for some sort of "trapdoor spider" theme, possibly with the dilithium as bait.)

And when Neelix does get a new lung, it's one of Kes's, thanks to the Vidiians' uber-medical technology. Never mind the fact that with her ten-year lifespan, it'll probably fall apart before the end of the season.

But the icing on the cake is that Janeway releases the Vidiians with little more than a finger-wag and a toothless warning. Despite the fact that they're self-confessed murderers and are guaranteed to kill again.

At this point, I'm starting to lose faith in Voyager's ability to produce a story of any real worth at all...
Kevin B
Fri, Apr 3, 2020, 2:15pm (UTC -5)
Re: DS9 S5: Nor the Battle to the Strong

Didn't like this episode of Star Trek: Deep Space Nine. I think the show has much better episodes which are much more relatable than this one. I think the problem this episode suffers from is what much of the series suffers from nowadays when watching it: the war scenes, particularly the battle scenes, just don't look believable. This show was made before 9/11 happened and the world got used to violent wars and their effects, which can now be seen with the click of a button. What Star Trek: Deep Space Nine presents us with as battle scenes almost looks like costume theatre by comparison.

Having said all that, from a character development point of view DS9 is still the best Trek series out there and much better than Discovery.
Fri, Apr 3, 2020, 2:15pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

Anyone agree or disagree with this...

I would have preferred if the big reveal of this super-advanced AI awaiting a call had been more like Lorien from B5: the cerebral answer of someone who has evolved beyond even the most advanced beings, rather than a faceless action figure bent on destroying the universe through a summoning portal.

I think it would have been much more satisfying for it to have been misunderstood: actually some advanced lifeform that can teach something, not an actual threat (one made from myth and fear).

It ended up making the entire plot feel worthless for it to be something so faceless, and to be shut down so quickly like that.
James White
Fri, Apr 3, 2020, 8:57am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

@Andy's Friend

Thank you for posting something intelligent on the subject of AI and consciousness. I still enjoy reading Chalmers, Dennett, Searle, Churchland, and others. The issues of strong vs weak AI, and the hard vs easy problems relating to them, have been around for a number of decades now. It's nice to see someone paying attention. It's also the sort of knowledge you need to differentiate older Trek, which ponders some of these questions (albeit through sometimes incoherent or silly circumstances), from stuff like DSC and Picard, which lack any essential desire to confront such subjects on their own terms, or to recognize the inherent difficulty of even asking the right question in the first place.

One statement you made was interesting. You said:

"I hope Piletsky's remarks on the necessity of the unconscious for consciousness isn't lost on readers."

Isn't it the evolutionarily driven attributes, like desires and imagination, that at least partly dwell in the unconscious region that give rise to the synchronous cycle you mentioned? The point was that you need this synchronicity to make possible qualia, the internal instances of subjectively experienced consciousness.

The consequence being that if you remove unconsciousness, as you mentioned, you remove that which provokes the mind's cycle toward experiential, subjective consciousness. Yet, at least one of your philosopher sources cited indicates that an adequate substitute for the desires, imaginative drivers, and so forth could be included in the development of a synthetic intelligence, even without an unconscious realm existing. Perhaps as a substrate mechanism or a feedback loop that facilitates the overall intelligence's development.

The point is that a synthetic mind with the qualia Chalmers refers to, in distinguishing strong from weak AI, may not depend on a subconscious state. Maybe it ultimately will, since much of this is still speculative science, but we can't know this at this point. Moreover, and this is the larger point, maybe none of the speculation is warranted, either way, since it still remains unclear whether the conscious, subjective experience itself can be reduced to something that code X within hardware Y can achieve.

In short, this may still be a "hard" philosophical problem, as Data's ambiguous "expression" upon Lal's death exemplifies.
Fri, Apr 3, 2020, 8:03am (UTC -5)
Re: TNG S6: Realm of Fear

One thing I hadn't seen anyone consider in the comments is that maybe the size-exaggerated microbes weren't real? It wasn't something I ever considered when I watched this episode the first time, but looking back I began to wonder about it. Here's why:

1.) The very first thing that tipped me off to this was actually one of the final things to happen in the episode - when Lt. Barclay grabs one and then tells the others something to the effect of "the crew members are in there - you have to grab on and REALLY HOLD ON!" My first thought was 'oh no - he didn't warn them they look like giant microbes! What if it frightens them!' (Wouldn't that frighten anyone, especially if they are expecting to see a person as opposed to an attacking monster?)... but no-one else hesitates and no-one else mentions this being an issue. All three rescuers lock on to their respective humans as if they saw them that way.

2.) Another thing that tipped me off was looking back over the episode in hindsight - where did we pick up the idea about microbes to begin with? It was mentioned by Dr. Crusher who was speculating on what Lt. Barclay might have been seeing. At this point in the episode we don't even know if what Barclay is seeing is real or not and neither does the crew (as an aside, I like how the entire crew is obviously annoyed at being woken in the middle of the night for this anxiety-fueled meeting called by Barclay, but instead of getting angry and/or writing him off, they all decide to take him seriously and help him out with extra work and responsibilities investigating - they know how he gets when he Web MD's himself but no-one judges him harshly for it).

3.) Not long after this, Dr. Crusher discovers ACTUAL alien microbes in his arm - and I think as a result, we all began to make the connection between her earlier speculation that MAYBE he's "seeing" the microbes in the matter stream and microbes that are actually literally physically in his arm - but there's never any hard connection between the two. I think it's natural that we just sort of "bridge the gap" and assume it must all be the same thing because that would make the most sense, right? Even if we don't have hard proof that is what it is.

So, here's something this got me wondering about: how well can people "see" inside a matter stream? Notwithstanding the previous discussion on how ridiculous it is that anyone can see with dematerialised eyes (maybe we only "see" the bits before and after materialisation, rather than the entire process?), the vision cannot be as clear as it was on TV, right? That has to be some made-for-TV magick, right? What if Barclay was looking at the missing crew members the entire time, but because his vision was so disoriented by the de/re-materialisation process and the overlaying matter stream - coupled with his intense fear of developing psychosis - he never really saw what he was looking at clearly? Remember, the first time he couldn't see clearly what the "thing" was at all - with each transport it became more and more clear to him.

With the first transport (away from the Enterprise), he sees nothing at all. He's more worried about never re-materialising than anything else, because this is, presumably, his very first transport ever in his entire life. Knowing he obviously survived, because he's still here, his fear changes. With his second transport (back to the Enterprise), he sees something during the de-materialisation process - it's so faint we can't see it, and even Barclay squints at it like he's not sure either. It reminded me of when you get those "floaters" in your eye; you know, those little squiggly lines that can sometimes block some of your vision, but then you try to look directly at them and they fade away? Anyways, after being de-materialised and seeing that, Barclay has a new fear, and it manifests during the re-materialisation process. Now that he has de-materialised away from the Yosemite and is re-materialising on the Enterprise, he sees that same squiggle more clearly, and it touches his arm. At this point it is still out of focus (we won't see the "mouth"-like opening until his third transport later in the episode) - clearer than it was initially, but less clear than it will be finally. When it touches his arm, he becomes infected with the microbes Dr. Crusher will later discover.

What's more likely? That we are literally "seeing" with the naked eye microbes entering Barclay's arm, or that one of the crew members reached out to a fellow human, desperate to escape from this purgatory, and unwittingly transferred the microbes from themselves to Barclay by contact? Remember, Barclay never sees the "thing" clearly, though it does have a skin-like colour and the approximate shape of an arm. What if the person reached out and we just couldn't see their body in the background because of the matter stream? Like a person reaching through an obscuring fog, for example? It's natural that Barclay or the crew would never speculate on this, because a person surviving in a buffer that long is unheard of, and everyone is still thinking of the crew as "missing" rather than "present" right there under their noses. We wouldn't see anyone surviving in a buffer long-term like that until Commander Riker's copy and old Chief Scotty from TOS pop up in later episodes. At this point it is unheard of.

It is only after Lt. Barclay asks Alexa to google Web MD for him that he starts suspecting he may have transporter psychosis - which, as unlikely as it is, is still more likely than the missing crew surviving in the buffer all this time (as normally people can only last a minute or two in it at most), because at least there are cases of psychosis - there are no cases (to my knowledge) of people surviving in a buffer long-term at this point in the Star Trek universe. So now Barclay has had the experience, built it up in his mind, got the computer to confirm it for him, and let it fester further, to the point that he WANTS to take a third trip just to see now, frightened as he is.

In a hilarious scene, he "orders" Chief O'Brien to transport him (despite having earlier been relieved of duty). O'Brien doesn't really buy it but goes along with it, and this is when we finally see the "mouth"-like shadow on the "microbe" - when Barclay is in full anxiety mode. By now, we should know something is off (though I admit I didn't until I looked back in hindsight), because microbes don't have mouths. They cannot bite. The way microscopic organisms consume is not the way we consume. But really, think about it for a second - an arm of which you can see only a blurry shape, plus the fact you've been told this is a microbe, on top of your already out-of-control anxiety - could a human arm blurred in a thick fog not look like a giant worm coming to get you? Especially if you are already frightened and you only get a glimpse of it for a few seconds? Remember, O'Brien told the others they would get a "bumpy ride" of 4-5 seconds - but by the time Barclay showed up he got the normal ride of 1-3 seconds; take that time frame, minus what he couldn't see without eyes between being de- and re-materialised, and he only gets a glance at this "thing." Could a glance at an arm coming out of a fog not look scary? Fingers like teeth, shadow of the palm like a gaping mouth? Can you even have a shadow in a matter stream?

Think about it... why would he grab on to this scary monster trying to get him, when before he recoiled? He only grabs on later, during his FOURTH transport, when O'Brien said they would have to suspend him in the stream for FORTY-FIVE SECONDS to rid his body of the microbes. 45 seconds is a long time compared to the mere 1 or 2 he had before... don't believe me? Try holding your breath for 45 seconds and you'll see it feels like forever. Is it possible he was forced to look at this "microbe" thing for so long that he realised he was actually seeing an arm and just instinctively reached out? After all, it must have looked VERY different once he got a moment to really look at it, as opposed to just glancing at it out of fear.

I can't think of why else he wouldn't've warned the rest of the crew when they also went in to rescue the others, other than that he realised there was no monster at all; it was just his fear distorting his perception. Like the tree branch that scratches your bedroom window at night during every Halloween flick ever. It looks and sounds and feels scary only when you ARE scared. But then mom comes in and turns on the bedroom light, and suddenly that tree isn't so scary anymore.

That's the only way I know to take this episode. And because we saw it from Barclay's perspective, we became just as "frightened" in a way (as in, we saw it the way he saw it, rather than as it actually was). I could be wrong, but the episode doesn't make much sense to me otherwise. To me, it has to be either that, or the "monsters" were merely allegorical for Braga's fear of flying, which it has been said this episode was based on. But Star Trek doesn't do a lot of overt allegory like that, and usually there is some kind of physical explanation for most episodes in general.

All in all, I completely forgot the crew members were even missing until they were grabbed at the end of the episode. Normally I would say that's not very good writing, but in this case they were just merely the B plot and Barclay was the A plot (when normally you'd expect it to be the reverse) - which I don't mind because I actually like Barclay as a character for the most part. Barclay episodes are always fun even if they all are filled with glaring plot holes and wide leaps of logic :)
Fri, Apr 3, 2020, 7:41am (UTC -5)
Re: VOY S2: Resolutions

I find this to be a charming episode that shows us just how much a number of these people have come to care for each other in the year and a half that they've been on this ship together. The crew don't want to leave Janeway and Chakotay behind, even a number of the former Maquis. Despite an apparent lack of empathy, even Tuvok is not unaffected by the loss of his friend and he is eventually forced to bow to the wishes of the men and women under his command. I love his step by step battle plan that works exactly as intended. The inclusion of Dr. Denara Pel is a nice callback to earlier in the season, and again shows the value of friendship, as well as reminding us that not all Vidiians are enemies.

I think the treatment of the Janeway/Chakotay relationship is just about exactly right here. They don't cross the line into romance, though the potential is there, and it appears that more time on the planet alone together could well have led to that. At the same time, they do form a closer friendship that will continue through the rest of the series. They both seem to genuinely enjoy the enforced "vacation" and seem almost sorry to return to professionalism at the end of the episode.
Jason R.
Fri, Apr 3, 2020, 5:29am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

The Emissary has spoken.
Fri, Apr 3, 2020, 5:14am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

Again you have reached a point where your debate has become meaningless because none of you has an understanding of the topic that could lead to a deeper understanding for the rest.

Andy's Friend's post is a good example.

It is not a good argument; in a scientific sense it is unacceptable, and even if you apply less stringent standards it is not a very convincing one.
- the main problem is certainly the complete lack of leading minds in the field in general.
- Paul Weiss (a philosopher) writes in the "Review of Metaphysics". That name alone lets me go to yellow alert, and knowing that it is mostly sponsored by the Catholic church doesn't boost my confidence. The focus of the peer-reviewed philosophy journal is education. - not a good source -

- Pentti Haikonen is a former engineer for Nokia and is now at a philosophy department as an adjunct professor. He at least seems to have a background in the field, but I would not call him a leading expert by any measure.

- Dr. Subhash Chandra Pandey is an assistant professor of computer science at the Birla Institute of Technology and Science. That is a university that does not make it into the first 1,000 places in the THE (Times Higher Education) rankings. It is certainly a fine institution, but nothing to brag about.

- The last source is Eugene Piletsky: Ph.D., Associate Professor, Taras Shevchenko National University of Kyiv. Another university that doesn't make the top 1,000 universities worldwide. Well, it is in Ukraine. But I think we can agree that the people you have quoted are just five voices who are either not in the actual field of computer science or faaar away from being leading voices of that field.

Give me Oxford or MIT; I would even accept the barely first-rate losers of the ETH Zurich.
Tommy D.
Fri, Apr 3, 2020, 4:37am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2


I think you are conflating two thoughts from the same comment. You asked if current Star Trek could cross a line to where I would say no more, and I said I don't know. This is in regards specifically to the Trek universe, Discovery, Picard, and whatever else is in the pipeline for the future. My second thought about The Orville is independent of that question, as The Orville doesn't fall under the Star Trek universe, so I didn't consider it under the premise of your original question.

As far as making the Trek/Not Trek distinction regarding The Orville, it is something I said I should not have said, because it does something I dislike: it implies that whatever is "not Trek" or what have you is either bad or not enjoyable. I think that's too binary an outcome for discussing Trek, and that can make those discussions meaningless. But by saying that, I fell into the same trap I would usually avoid. So yes, I do feel that was a mistake on my part, because despite my criticisms of it, I do enjoy it for the most part, and it's a mistake to imply otherwise.

"Unless you want to argue that ST:P is more Trekkish then the Orville, which I think we'd both agree to be a ridiculous statement."

I won't argue this point, I'll only say I'll be there for both shows 2nd and 3rd seasons, respectively. :)
Andy's Friend
Fri, Apr 3, 2020, 4:28am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2


Thanks for the video, Quincy. It is indeed worthwhile, presenting nothing fundamentally new at this point but presenting what it does well. I recommend it.

I am however not sure that you have understood its implications, as you have systematically argued against its propositions, as late as this week. Have you changed your mind? What is Sarpeshkar arguing?

Sarpeshkar here is doing precisely what my examples above speak of. He is talking of emulating the human body. He is talking about perfecting the artificial, analogue computers of yesteryear—not the digital computers of today—so that they can match the human, natural, biological analogue 'computer' at the quantum level. He is not talking about software at all: he is talking about a fusion of hardware and wetware. He is being the proverbial Dr Soong, talking about the attempt to build an artificial cognitive architecture. He is talking about the proverbial ‘positronic brain’.

He gives numerous examples of this, from the micro to the macro-scale, as in:

SARPESHKAR: ‘(…) but if I copied the clever exponentially tapered architecture of the cochlea, I could build a quantum cochlea (…)’ (19:30)

SARPESHKAR: ‘(…) because of that we can do synthetic biology, which is the top piece where chemistry goes into biology with molecular reaction circuits; we can also build computers to emulate cells (…).’ (20:55)

All this culminates in:

SARPESHKAR: ‘(…) you can also be inspired by biology: you can take an architecture in the biology to do something in computer science you would never have imagined before (…) so what I’m telling you is that the wet and the dry are very deeply connected; we have to learn to be amphibians (…) so my paradigm shift is actually a very, very simple one: we need to go ‘back’ to the future, collective analogue computers like nature does, in physics, chemistry, and biology, and not be so mesmerised by the ones and zeroes that we think are so great (…).’ (21:15-22:15)

This walks hand in hand with the views of the scientists in the field of artificial consciousness I have just quoted, and everything I have ever stated on the matter in this forum.

A problem may be posed by an overly materialistic perspective. The challenge is to combine a physicalist ontology with metaphysics: not simply emulating, but indeed creating life. My proposition is and has always been that Soong's ‘positronic brain’/Sarpeshkar's ‘quantum analogue computer’ indeed manages this.

So, in Star Trek terms, yes, Data is truly alive. The EMH, of course, is not. Sarpeshkar would surely agree.

[Rahul Sarpeshkar, "Analog Supercomputers: From Quantum Atom to Living Body". By courtesy of Quincy].
Andy's Friend
Fri, Apr 3, 2020, 3:55am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

Yes, I remember those days, chatting with you and a few other regulars. Better days, with better discussions inspired by better series. I hope you are well.

Thanks for the ‘synopsis’ of those episodes. So, Data’s memory engrams can be recreated from a single neuron of his, can they? To quote Lycan:

“A neuron is just a simple little piece of insensate stuff that does nothing but let electrical current pass through it from one point in space to another; by merely stuffing an empty brainpan with neurons, you couldn’t produce qualia-immediate phenomenal feels!” (“Form, function, and feel”. The Journal of Philosophy, 78 (1981))

Lycan may be slightly outdated. Still, I’m truly happy I never watched this.



“If the discussion regarding artificial intelligence were nothing more than a dispute over the ways in which language is or might be used, it would not be very interesting, since it would refer to nothing more than the way the word “intelligence” might be commonly employed. If, instead, we are interested in knowing whether or not computers actually think, or clocks really tell time, and mean that they have the kind of consciousness, inferential powers, imagination, sensitivity, responsibility, memory, and expectations that humans have, we must turn away from linguistic usage to ask whether it will ever be possible for machines, no matter how quick and adroit, to be conscious, to infer, imagine, be responsible, and so forth.”

—Paul Weiss, “On the Impossibility of Artificial intelligence”. Review of Metaphysics, Vol. 44, No. 2 (1990), first page. (Presented at the 8th International Congress of Cybernetics and Systems, New York, 1990).


“I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by “sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:

‘The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers. (…).’ ”

—Andy’s Friend, ‘Heroes and Demons’, here on Jammer’s, Oct 31, 2014. Haikonen was speaking of modern digital computers.


“This divide, of intelligence vs consciousness, is extremely important. Today, we have researchers in artificial intelligence, and we have researchers in artificial consciousness. The divide promises―if it hasn’t already―to become as great as that between archaeologists and historians, or anthropologists and psychologists: slightly related fields, and yet, fundamentally different. The problem is, that most people aren't aware of this. Most people, unknowingly, are still in 1988. They conflate the terms, and still speak of irrelevant AI (see this thread!). They still, unknowingly, speak of Deep Thought only.”

—Andy’s Friend, ‘The Measure of a Man’, here on Jammer’s, Jun 27, 2016.


“(…) one of the main objectives of AI is to design a system that can be considered as a “machine with minds” in the full and literal sense. Further, it is obvious that if an entity consists of the mind in true sense then it must inevitably pose the attributes of consciousness. Indeed, the domain of AI reflects substantial interest towards consciousness. (…) The term “intelligence” is closely related to “consciousness” and in the last ten years there has been a growing interest towards the field of Artificial Consciousness (AC). Several researchers from traditional AI addressed the hypothesis of designing and implementing models for AC. It is sometimes referred to as machine consciousness or synthetic consciousness. (…) Indeed, the goal of AI is to enable the artificial agent to display the characteristics of mental properties or exhibit characteristic aspects of systems that have such properties. It is obvious that intelligence is not the only characteristic of mental property. (…) mental property also encompasses many other characteristics, e.g., action, creativity, perception, emotion and consciousness. The term “consciousness” has persistently been a matter of great interest at the philosophical level of human being but it is not formidably addressed within the purview of AI. (…).”

2.2 AC
(…) Generally, researchers consider three strands pertaining to AC. They are interactive empiricism, synthetic phenomenology, and ontologically conservative hetero-phenomenology. At first glance it seems easy to distinguish the AI and AC. In general, AI endeavours to create an intelligent machine whereas AC attempts to create machines that are conscious. However, the subject matter of consciousness and intelligence is quite complicated and distinction between these two aspects requires philosophical foundation.
(…) ‘‘Most roboticists are more than happy to leave these debates on consciousness to those with more philosophical leanings’’. Contrary to this, many researchers give sound consideration on the possibility that human beings’ consciousness is more than the epiphenomenal by-product. These researchers have hypothesized that consciousness may be the expression of some fundamental architectural principle exploited by our brain. (…)

Body, mind, intelligence and consciousness are mutually interrelated entities. However, consciousness is subtler than intelligence, mind, senses and body. AC is mainly concerned with the consciousness possessed by an artificial agent (…). AC attempts to explain different phenomena pertaining to it, including the limitations of consciousness. There are two sub-domains of AC: “weak AC” and “strong AC”. It is difficult to categorize these two sub-domains because they are not aligned with the dichotomy of truly conscious agents and “seems to be” conscious agents. Further, researchers have given a few computational models of consciousness. However, it is not possible to replicate consciousness by the computations, algorithms, processing and functions of the AI method. In fact, however vehemently we say that the computer is conscious, it is ridiculous to believe that sensor data can create consciousness in a true sense. Indeed, consciousness is not a substance; it is independent of sense-object contact and cannot be produced by the elements. (…) Furthermore, consciousness cannot depend on what function a machine computes. (…)”

—Subhash Pandey, “Can Artificially Intelligent Agents Really be Conscious?”. Sādhanā (2018), first and last page.

Last year, 2019:

One of the most painful issues of creating Artificial Intelligence (AI) is the problem of creating a hardware or software analogue of the phenomenal consciousness and/or a system of global access to cognitive information (…).
Wherein, the presumed consciousness of so-called “strong” Artificial Intelligence is often regarded as a kind of analogue of human consciousness, albeit more quantitatively developed. In this case, artificial intelligence has a wider “phenomenal field”, richer content (qualia) and a much larger amount of RAM (necessary for the reconstruction of conscious experience), etc.

The “spotlight” of the conscious mind does not always work in the mode of voluntary attention. Certain processes independently “break through” into consciousness without permission. They penetrate the global access space as if “demanding” our conscious attention. Most often, these are emotional-volitional impulses, intuitive insights and the like. Desires, emotions, and complicated cognitive phenomena come as if “from the outside”, without the voluntary participation of the actor. (…)
It seems that, despite our common sense and familiar intuitions, some aspects of our mental life are evolutionarily “programmed”. Therefore, for example, we have motivation and emotions regardless of choice. We do not consciously choose our own desires or preferences. Needs and affects are given to us “as is”, in finished form. This, of course, does not prevent us from reflecting on them a posteriori (for example, in rationalization) or from influencing them through awareness (in psychotherapy). The very intentionality of consciousness (or at least the potential possibility of intentionality) is predetermined.
(…) “To ensure our smooth functioning in both the physical and the social world, nature has dictated that many processes of perception, memory, attention, learning, and judgment are delegated to brain structures outside conscious awareness” (…) Now we understand that human memory management, automatic motion control, affective-volitional functions, attention management, mechanisms of associative thinking, mechanisms for forming judgments and logical consequences, operations with the sensory flow, creating a complete picture of the world, and the like are primarily unconscious.
Thus, a significant part of our activity consists of mental facts that are transcendent in relation to consciousness. This feature is evolutionarily determined. However, a hypothetical Artificial Intelligence can be free of the “dictate of the unconscious”, unlike human beings. The machine can have total global access to any “internal” processes. Thus, all information processes can be simultaneously “illuminated” (or accessible, as far as the hardware substrate allows), completely depriving the AI of the unconscious.

This leads to paradoxical conclusions. Awareness and self-awareness do not automatically lead to the emergence of motivation, desires or emotions. A conscious machine can be completely devoid of these processes, natural to humans. The intentionality of consciousness of Homo sapiens is due to evolution and is not obligatory for the machine.
There is a good reason to believe that the field of unconscious processes (within human psyche) is much larger than the field of phenomenal consciousness. (…) scientists have developed a hypothesis according to which even conscious and free will actions are nothing but fixation of unconscious processes a posteriori. This raises the difficult question: is the field of the unconscious nothing but the absolute basis for conscious processes? Is consciousness only an emergent feature of the unconscious (that is, a second-level process after neurophysiological processes)?
Thus, we come to the “traditional” division into “strong” and “weak” Artificial Intelligence. According to modern theoretical concepts, “strong” Artificial Intelligence should have at least several distinctive characteristics, among which the most essential is an intelligent agent’s behavior from the “first person” perspective. Theoretically, this should be a “goal setting machine”. In this case, “strong” human-like AI is impossible without the synchronous work of the conscious and unconscious “minds”.
When we argue about the human psyche, many of these questions have moved into the plane of the philosophy of consciousness or pure neuroscience. In the philosophy of consciousness, we are primarily interested in the ontological status of mental phenomena. Therefore, it is important for us to know whether the psyche is “something” or an “illusion” of the brain; whether there is an intentional agent or whether that, too, is an illusion. That is why it is also important for a person to determine the ratio of conscious life to unconscious processes “in darkness”.

In essence, “weak” Artificial Intelligence is a kind of functional neural network of various types (convolutional, spiking, deep stacking, etc.). These are systems with multiple inputs, analytical subsystems, and one or n outputs. Their best-known application is pattern or speech recognition (what is called “machine perception”).
Here we can use the neural-network metaphor of Alan Turing’s “probabilistic machine”, which evaluates information based on big data. For example, I recognize a face in dynamics, because I have a huge amount of incoming data that is interpreted in the same way as it happens in modern neural networks. In the end, I have a certain result. Based on big data, it is already possible to build predictive models, etc. However, for such a machine, an external interpreter is still needed. For the time being, he plays the role of an “external consciousness” for the “unconscious” neural networks.
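[An aside of my own, not part of Piletsky's text: the "weak AI" shape he describes, multiple inputs feeding an analytical subsystem that emits one output, can be caricatured in a few lines of Python. Every weight and input below is invented purely for illustration.]

```python
import math

def sigmoid(x):
    # Standard squashing function used at each unit.
    return 1.0 / (1.0 + math.exp(-x))

def tiny_network(inputs):
    # A fixed "analytical subsystem": one hidden layer of two
    # units, then a single output unit. Weights are arbitrary.
    w_hidden = [[0.5, -0.6, 0.1], [0.9, 0.2, -0.4]]
    w_out = [1.2, -0.8]
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

score = tiny_network([0.2, 0.7, 0.1])
# The number itself means nothing to the network; an external
# interpreter must decide what a score near 1.0 "is".
print(round(score, 3))
```

[The point of the toy is exactly the "external interpreter" problem: the output is just a number between 0 and 1, and the meaning attached to it lives entirely outside the machine.]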
(…) All of the above features of the natural unconscious, such as automaticity, inaccessibility and uncontrollability, can be fully accessible to Artificial Intelligence systems. Moreover, here there are several development scenarios of the machine “psyche.”
1. A machine can arbitrarily form its conscious affective-volitional functions. In this case, a paradox arises: what exactly will induce the AI to choose motives and emotions? After all, the “second level unconscious” for the machine does not exist. (…)
2. The unconscious of Artificial Intelligence may also develop evolutionarily. For example, modern evolutionary algorithms allow the machine to learn how to “walk” independently without the rules of walking prepared in advance. By analogy, nothing prevents the possibility of evolution of both the higher mental functions of Artificial Intelligence and its unconscious automatic processes. However, there is a danger that such an AI can develop in a completely unpredictable direction. This will lead us later to scenario 5.
3. The unconscious AI may also be deliberately programmed. Thus, installation of the criteria for possible aesthetic, ethical and volitional prerequisites for the activities of the machine will be determined by its creators. In fact, this can become a psychic “insuperable force” for a conscious AI, transcendental to its “phenomenal field.” Therefore, the very intentionality of the consciousness of the machine will have to be artificially created.
4. The consciousness of AI can be a software analogue of human consciousness. Probably, in the future, the disclosure of the mechanisms of the formation of consciousness and cognition may lead to the creation of an exact software model of them, including a model of the unconscious. In such a case, Artificial Intelligence essentially becomes a perfect copy of a human person. At the same time, the problem of qualia, of course, does not go anywhere. Nevertheless, technically we can “remove it from the equation” as irrelevant in a practical sense [NOTE: THIS IS WHAT SOONG ATTEMPTED WITH DATA’S PROGRAMMING, INCLUDING HIS ‘POSITRONIC BRAIN’ AS A PHYSICAL COGNITIVE ARCHITECTURE FOR FURTHER GROWTH OR ‘MECHANISMS OF FORMATION’].
5. It may also happen that the consciousness of Artificial Intelligence as a kind of analogue of human consciousness is impossible in principle. Perhaps such phenomena as “consciousness” and “unconscious” will be absolutely inapplicable to AI. In this case, the machine “phenomena” (or lack thereof) will be absolutely incomprehensible to humans, and communication between man and machine will be questionable. (…)
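[Another aside of my own on scenario 2: the "evolutionary algorithms" Piletsky mentions can be as simple as a (1+1) evolution strategy, i.e. mutate, and keep the child if it scores better. The sketch below is a toy, not from the paper; the fitness function and its target are arbitrary stand-ins for "how well the machine walks".]

```python
import random

random.seed(42)

def fitness(params):
    # Made-up objective: peak fitness when params hit this target.
    target = [1.0, 2.0, 3.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(generations=2000):
    # (1+1) evolution strategy: a single parent, one mutant child
    # per generation, survival of the fitter.
    parent = [random.uniform(-5, 5) for _ in range(3)]
    for _ in range(generations):
        child = [p + random.gauss(0, 0.1) for p in parent]
        if fitness(child) > fitness(parent):
            parent = child  # no rules prepared in advance, only selection
    return parent

best = evolve()
print([round(p, 1) for p in best])
```

[No rule for "walking" is coded anywhere; the behaviour emerges from blind variation plus selection, which is the whole of Piletsky's point, and also why scenario 2 can drift somewhere unpredictable.]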

Probably, a machine (as we saw above) will be able to effectively imitate natural behavior, for example, to conduct a fully meaningful conversation. However, will this mean that Artificial Intelligence has phenomenal experience, or at least something remotely resembling it? In addition, is there a fundamental difference between the imitation of rational behavior and rational behavior itself? This raises an interesting question. If the machine says that it has qualia, that it feels something, that it is conscious, etc., then can we doubt it? Will Artificial Intelligence be a “philosophical zombie” in Chalmers’ sense? What if this AI does not have the phenomenal consciousness that we call “the inner world”? And what if, at the same time, this particular AI fully passes all versions of the Turing test, so that we cannot distinguish a conversation with it from a conversation with a reasonable person? Will we consider such an AI reasonable?
Let us try to look for answers from the other side. It is worth noting that such examples rather indicate that at this stage we are slowly creating an analog of the unconscious for Artificial Intelligence. BASED ON EXISTING TRENDS IN THE DEVELOPMENT OF AI, IT CAN BE NOTED THAT WE ARE MOVING ALONG THE PATH OF “QUANTITY TO QUALITY” [emphasis added]: i.e. improving the systems of “weak” AI (neural networks) and their further integration INTO THE META-SYSTEM OF NEURAL NETWORKS INTEGRATED LIKE HUMAN CONSCIOUSNESS [emphasis added]. For example, according to the theory of Jerry Alan Fodor, the whole human psyche (both conscious and unconscious) operates on the basis of the so-called “modules” (“modular mind” theory) [Fodor, 1983]. IF IN THE FUTURE WE CREATE SUCH A NEURAL NETWORK CONFIGURATION THAT WILL AT LEAST MIMIC “SYNCHRONOUS OSCILLATION OF GROUPS OF NEURONS”, OR SOME OTHER SYSTEM THAT COMBINES INDIVIDUAL NEURAL NETWORKS THAT REPRESENT SCATTERED FUNCTIONAL “MODULES” INTO A HIGHER-LEVEL NEURAL NETWORK, THEN PERHAPS WE WILL GET “STRONG” ARTIFICIAL INTELLIGENCE [emphasis added. NOTE: IN OTHER WORDS, AN ‘ARTIFICIAL BRAIN’ LEADING TO ARTIFICIAL CONSCIOUSNESS: A ‘POSITRONIC BRAIN’ LEADING TO DATA]. Therefore, it seems that the development of AI proceeds simultaneously under scenarios 2, 4 and 5.

— Eugene Piletsky, “Consciousness and Unconsciousness of Artificial Intelligence”. Future Human Image, Vol. 11, 2019.

I hope these few examples clarify the significance of cognitive architecture. I find Pandey's contribution for the Indian Academy of Sciences particularly interesting. As some will recall, I have lived and worked in India; and in Chapter 4.1, which I have omitted here, Pandey explores the question of consciousness based not on Plato or Aristotle or later Western philosophers, but on classical Indian philosophy: the Upanishads, the Vedanta, and so forth. This explains his definition of 'ontologically conservative hetero-phenomenology', a nomenclature that is little more than a euphemism for the biological chauvinism Pandey himself comes dangerously close to, grounded as he is in that classical Indian philosophy. There are other schools of thought than ours, and it is always good to be reminded of that, lest we become too convinced of our own moral superiority in the West or the Federation.

I hope Piletsky's remarks on the necessity of the unconscious for consciousness aren't lost on readers.

Leading scientists in the fields of AI and AC diverge. The former, the 'roboticists', necessarily care about software. As Pandey puts it elsewhere, "The main task of AI is to discover the optimum computational models to solve a given problem", and this necessarily involves the programming as well. The latter hardly speak of software, for software can accomplish only the most basic tasks: it processes, it does not think. If we wish to go further and speak not of computations but of thoughts and emotions, to ask questions such as 'Does the robot *think*?' or 'Does the android *dream*?', then it is the hardware that matters.

In Star Trek terms, this means that our good doctor on Voyager, the EMH, does not possess true consciousness. He (or more properly, it) is but a program: he merely mimics, or emulates, however perfectly, human behaviour. Whereas Data is an artificial lifeform, endowed with neural networks that can recreate, however imperfectly, genuine thought processes. He possesses artificial consciousness. He is truly alive.

I have fortunately all but forgotten ‘Nemesis’, and I have never watched ‘Picard’, so I can’t talk about the ‘synths’.
Fri, Apr 3, 2020, 2:05am (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 1

How the writing of this leaves me annoyed. Everybody keeps jumping to the right conclusions with minimal information.
Fri, Apr 3, 2020, 1:22am (UTC -5)
Re: DSC S2: Such Sweet Sorrow, Part 2

PM thanks for this!!!
Seen all the Short Treks and loved reading your brief summary and reviews for reminder!
John Daniels
Thu, Apr 2, 2020, 10:45pm (UTC -5)
Re: ENT S4: United

Good episode, aside from Shran's girlfriend dying of a superficial wound. How dumb can you get? The best doctor in the universe, and all of a sudden she dies. It just seems so unrealistic that you get pulled out of the story.
John Harmon
Thu, Apr 2, 2020, 6:33pm (UTC -5)
Re: BSG S4: Daybreak, Part 2

I wonder if Hot Dog was ok with abandoning technology after being told his son has renal failure a few episodes prior.
Thu, Apr 2, 2020, 6:15pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

This may look like a tangent, but it's relevant to not only the current conversation, but to a prior conversation about what Soji and other Cylon style androids are and how they relate to human beings. As I was trying to say before (with the frog cell robot), it's the principle on which a computer (cell) or the basic components of a computer (cell) is designed.

This guy, Rahul Sarpeshkar, states it far better than I ever could. If anyone is interested, this right here is one of the best Ted talks I've heard in awhile. Titled, "Analog Supercomputers: From Quantum Atom to Living Body," it's only 22 minutes of your life. I seriously doubt you'll want them back:
Thu, Apr 2, 2020, 6:09pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

@ Jammer

"There's no weight or dimension to starships anymore. They have unfortunately become video game avatars that look like they were cloned with copy and paste."

This. One million times this. :(
Thu, Apr 2, 2020, 5:24pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

@Jason R.

That's just not true. There are many different types of computers. 0s and 1s are not even the tip of the iceberg. Assuming (and that's a big assumption) the human body requires analog information, computers are quite capable of producing the needed information. Analog computers are an actual thing. You're using the term, "data stream," as a pejorative. Just what do you think the cones and rods of your retinas are giving you RIGHT THIS MINUTE, but a data stream? Your inner ears are giving you data streams. Your nerve endings in your skin are giving you data streams. That's all it is.
Jason R.
Thu, Apr 2, 2020, 4:45pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

"We can't simulate it exactly, but we can give a pretty damn good rendition of it. What exactly can you experience with your 5 senses that you believe can't be simulated?"

What of one's senses can be simulated? In the end a computer can only process things as digital information, 0s and 1s. Is a data stream meant to approximate a sight or a touch equivalent to actual sight and touch? Or are we back to the dog photo problem?
Thu, Apr 2, 2020, 4:42pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

This isn't really Picard-related, but has anyone watched any of the Youtube channel Movies with Mikey? He's now done two episodes exploring the making of Star Trek. His videos are always great and refreshingly positive, and these two (entirely about the original series and its movies) may help put some things in perspective. Mainly, the idea that classic Star Trek as we know it was largely made in a ridiculously haphazard fashion where no one really knew what they were doing, and it was in many ways a genuine miracle that it managed to succeed at all, to the point where we can complain about writers here on Picard today.
Thu, Apr 2, 2020, 4:18pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

@Jason R.

We can't simulate it exactly, but we can give a pretty damn good rendition of it. What exactly can you experience with your 5 senses that you believe can't be simulated?
Jason R.
Thu, Apr 2, 2020, 3:00pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

Quincy, we can't simulate a physical environment any more than we can simulate an apple by photographing it.
Thu, Apr 2, 2020, 2:43pm (UTC -5)
Re: PIC S1: Et in Arcadia Ego, Part 2

@Jason R.
"Trying to create a general intelligence, artificial or otherwise, absent a physical body, is akin to trying to teach someone to walk as a pure intellectual exercise, only multiplied a million fold in difficulty."

"Well anything from a Google search engine to good old Dr. Sbaitso can mimic intelligence without being intelligent. Conceivably, you could even develop an algorithm so sophisticated that it could carry on a natural seeming conversation flawlessly. And yet it would only be an algorithm not a conscious being. And if you somehow gave this algorithm command of a physical body, it wouldn't know how to walk *at all* even if it could explain the process in exacting manner. Because knowing *about* walking and knowing how to walk are distinctive things. "

Interesting POV, but most likely untrue. Let's assume for the sake of argument that a physical body learning to interact with an environment is the definitive method of producing intellect. Modern technology currently allows us to simulate both the physical body and the environment far better than we can simulate a human brain. So the objection you raised is actually the least of A.I. researchers' concerns.

If you could create an A.I. with the potential to acquire sapience and all it lacked was a body to interact in an environment to learn from, we could achieve that right now with pure simulated virtual reality, let alone what's achievable with the tech in PIC where they have holodecks capable of fooling human senses. We don't actually need a body to build a functioning brain or an environment; we just need a Matrix to download our brand new brain into.

Your argument also fails to take into account the nature of the environment and body that you claim are required for intelligence. In order to successfully make the claim you're making, you'd have to know precisely the level of capability, complexity, detail, etc., in both the body and the environment that is sufficient to generate intelligence. In other words, it may turn out that an environment as simple as a billiard table is all the environment required, and a simple mobile toy interacting with the billiards all the body necessary, to achieve the goal. The body can be something very simple, no more complex than an inchworm or a mollusk with a foot. Any of these things would be easily simulated.

They've actually already simulated the brains of simple creatures. The bodies would be child's play. And they're already teaching robots to walk. They could easily do so completely inside a simulated environment, no actual body or tangible environment needed.
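To make the billiard-table point concrete, here's a toy of my own devising (not from any actual robotics work): a one-dimensional "inchworm" whose entire body, physics, and learning all happen inside a simulation. The "physics" rule is invented: strides over 0.5 lose grip and slide backwards.

```python
import random

random.seed(7)

def simulate(stride, steps=100):
    # The whole "environment": a number line. Overambitious
    # strides (over the made-up 0.5 grip limit) slip backwards.
    position = 0.0
    for _ in range(steps):
        position += stride if stride <= 0.5 else -0.1
    return position

def learn(trials=200):
    # Random search over the one policy parameter: no real body,
    # no real world, just repeated simulated rollouts.
    best_stride, best_dist = 0.0, simulate(0.0)
    for _ in range(trials):
        stride = random.uniform(0, 1)
        dist = simulate(stride)
        if dist > best_dist:
            best_stride, best_dist = stride, dist
    return best_stride

print(round(learn(), 2))  # settles near the 0.5 grip limit
```

The learner discovers the grip limit without it ever being stated as a rule, which is the whole argument: body, environment, and trial-and-error can all live inside the computer.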
Copyright © 1994-2020 Jamahl Epsicokhan. All rights reserved. Unauthorized duplication or distribution of any content is prohibited. This site is an independent publication and is not affiliated with or authorized by any entity or company referenced herein.