Causality and consciousness

Discuss philosophical concepts and moral issues.
Post Reply
User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Causality and consciousness

Post by ruby sparks » Thu Jun 08, 2017 10:24 am

Sorry. I know another interesting discussion has been started in this forum and I'm not trying to limit that.

This question, of what the (causal) role of conscious thoughts is, particularly in sentient beings such as ourselves, bugs me. As far as I am aware, nobody knows for sure. At one extreme seems to lie the view that they are merely self-reporting after the fact, effectively having no or very little causal role. At the other extreme seems to be the view that they are strongly causal (as in, say, folk psychology notions of rational, free, and by implication conscious agency).

If I were to tend in one direction, it might be away from the latter view, and I might be inclined to see a lot of illusions and delusions. I doubt I would go all the way to the first view, though. It seems unlikely that conscious thinking has NO causal effect, even if only as a 'byproduct': after the fact in any specific situation, but tending to influence future situations indirectly, because the thoughts have already 'happened in the system' the first time around.

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Tue Jun 13, 2017 12:07 pm

So one specific question I don't know the answer to is to do with predicting. This seems to be one of our more impressive capacities, involving as it does a kind of 'time travelling' out of the present moment (forwards, though we seem to do it backwards too), and I think some have even said that our brains are more like prediction machines than reaction machines, with all the attendant confirmation biases that may entail.

But my question is, do these predictions have to go through consciousness, or are lots of them (or indeed all of them) being informationally-processed non-consciously?
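For what it's worth, the 'prediction machine' idea can be put as a toy loop: keep a running guess, compare it with what arrives, and flag only large prediction errors. This is a purely illustrative Python sketch; the learning rate, the threshold, and the crude equation of 'surprising error' with 'entering consciousness' are my assumptions, not neuroscience.

```python
# Toy sketch of the "prediction machine" idea: the system keeps a running
# prediction and only flags samples whose prediction error is large.
# All numbers and the "surprise = consciousness" shorthand are illustrative.

def run_predictor(signal, learning_rate=0.5, surprise_threshold=1.0):
    """Track a signal by prediction; return indices whose error was 'surprising'."""
    prediction = 0.0
    surprising = []                          # indices where error exceeded threshold
    for i, observed in enumerate(signal):
        error = observed - prediction        # prediction error
        if abs(error) > surprise_threshold:
            surprising.append(i)             # crudely: "reaches consciousness"
        prediction += learning_rate * error  # update the guess toward the data
    return surprising

# A steady signal with one jump: only the jump (and a couple of steps of
# re-adaptation after it) is "surprising".
signal = [1.0] * 5 + [5.0] * 5
print(run_predictor(signal))  # → [5, 6, 7]
```

Note that the predictable stretch never gets flagged at all, which is the epiphenomenalism-adjacent point: most of the processing could proceed with nothing 'reported' upward.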

User avatar
Jobar
Posts: 26251
Joined: Mon Feb 23, 2009 6:42 pm
Location: Georgia

Post by Jobar » Tue Jun 13, 2017 1:42 pm

I've approached similar problems examining First Cause arguments for the existence of God(s). What causes consciousness? How do we prevent an infinite regress?

Koyaanisqatsi
Posts: 8403
Joined: Fri Feb 19, 2010 5:23 pm

Post by Koyaanisqatsi » Tue Jun 13, 2017 5:25 pm

[quote=""Jobar""]I've approached similar problems examining First Cause arguments for the existence of God(s). What causes consciousness? [/quote]

Process.
[quote=""Jobar""]How do we prevent an infinite regress?[/quote]
A projector has to be turned on. In our case the "on" switch--however fundamental and however long it may take for the projector to come up to speed--is a chemical one (i.e., insemination).

We know this is the case because when the "projector" is damaged in certain ways, the "picture" (the self) is altered, and when the "projector" is destroyed (death) the picture ends. In other words, the brain generates what we consider "consciousness", and when the brain dies, the process simply ends, just like turning off a projector.
Stupidity is not intellen

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Tue Jun 13, 2017 10:53 pm

That's an interesting analogy.

I suppose there are two things the 'projector' needs in order to get up and running, to do its projector thing. The first is, as you say, developmental (and we could think of long-term processes such as evolution, as well as short-term ones such as those that take place, as you say, after conception), but the other might (thereafter) be situational, from moment to moment, such as when certain processes 'reach a threshold' of some sort to enter consciousness (project pictures).

Either way.....the pictures do not affect the running of the projector.

Wouldn't it be rather amazing if our brains operated in such a way?
Last edited by ruby sparks on Tue Jun 13, 2017 11:08 pm, edited 8 times in total.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Tue Jun 13, 2017 11:36 pm

I would need some pretty detailed evidence to accept that thoughts are not causal. Inasmuch as we think in language anyway.

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Wed Jun 14, 2017 12:56 am

[quote=""plebian""]I would need some pretty detailed evidence to accept that thoughts are not causal. Inasmuch as we think in language anyway.[/quote]

Ok, but (and bearing in mind that I am not setting out to convince you that you are wrong) I could say, 'what's your detailed evidence that they are causal?'

'It feels and seems like they are' might not be that persuasive, to me. :)

Note that we are both asking for evidence, not argument. There are arguments for and against, and I'm not sure anyone has won them. I also don't think the evidence is conclusive either way.

Koyaanisqatsi
Posts: 8403
Joined: Fri Feb 19, 2010 5:23 pm

Post by Koyaanisqatsi » Wed Jun 14, 2017 2:50 am

[quote=""ruby sparks""]That's an interesting analogy.

I suppose there are two things the 'projector' needs in order to get up and running, to do its projector thing. The first one is, as you say, developmental (and we could think of long term processes such as evolution, as well as short term ones such as take place, as you say, after conception) but the other might (thereafter) be situational, from moment to moment, such as when certain processes 'reach a threshold' of some sort to enter consciousness (project pictures).

Either way.....the pictures do not affect the running of the projector.[/quote]

Well, that's where the analogy falls short, of course, but then, it's only an analogy and therefore not meant to be an exhaustively complete comparison. In our case, the brain is the projector--though "animator" would be more apt--and the "movie" it creates is ongoing, interactive and feeds back into the mix, so that the maps--and the "characters" superimposed into those maps--do indeed affect the projector.

Think of an animation that is so dynamic and complex it actually does inform the animator of how to animate it better.
[quote=""ruby sparks""]Wouldn't it be rather amazing if our brains operated in such a way?[/quote]
I believe they do. There is no actual Cartesian theater, of course, but there is a virtual one; a symbolic one. We map the external and use that data combined with already stored data to constantly update trillions of dynamic animations every nanosecond in much the same way that photons illuminate your bedroom when you turn on a light. Trillions of tiny particles instantly exploding and bombarding off of every crevice of every bit of furniture and the walls and the ceiling, etc, etc., etc. and then in every instant thereafter until you turn the light back off; a steady stream; an ocean of tiny light particles drowning your bedroom (including you) all narrowly perceived as they likewise bombard--upside down--the backs of your retinas.

Imagine if you had a camera fast enough to process at the speed of light to watch in ultra slow motion what happens when you turn on the light in your bedroom. The process would immediately become apparent and rather mundane, but because it happens at such an incredible rate of speed and involving unimaginable numbers of particles, we simply can't grasp it all:

(Not loaded: yXHWJ4iUlZs)
(View video on YouTube)
Last edited by Koyaanisqatsi on Wed Jun 14, 2017 3:35 am, edited 2 times in total.
Stupidity is not intellen

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Wed Jun 14, 2017 9:41 am

[quote=""Koyaanisqatsi""]
ruby sparks;673363 wrote:That's an interesting analogy.

I suppose there are two things the 'projector' needs in order to get up and running, to do its projector thing. The first one is, as you say, developmental (and we could think of long term processes such as evolution, as well as short term ones such as take place, as you say, after conception) but the other might (thereafter) be situational, from moment to moment, such as when certain processes 'reach a threshold' of some sort to enter consciousness (project pictures).

Either way.....the pictures do not affect the running of the projector.
Well, that's where the analogy falls short, of course, but then, it's only an analogy and therefore not meant to be an exhaustively complete comparison. In our case, the brain is the projector--though "animator" would be more apt--and the "movie" it creates is ongoing, interactive and feeds back into the mix, so that the maps--and the "characters" superimposed into those maps--do indeed affect the projector.

Think of an animation that is so dynamic and complex it actually does inform the animator of how to animate it better.
Wouldn't it be rather amazing if our brains operated in such a way?
I believe they do. There is no actual Cartesian theater, of course, but there is a virtual one; a symbolic one. We map the external and use that data combined with already stored data to constantly update trillions of dynamic animations every nanosecond in much the same way that photons illuminate your bedroom when you turn on a light. Trillions of tiny particles instantly exploding and bombarding off of every crevice of every bit of furniture and the walls and the ceiling, etc, etc., etc. and then in every instant thereafter until you turn the light back off; a steady stream; an ocean of tiny light particles drowning your bedroom (including you) all narrowly perceived as they likewise bombard--upside down--the backs of your retinas.

Imagine if you had a camera fast enough to process at the speed of light to watch in ultra slow motion what happens when you turn on the light in your bedroom. The process would immediately become apparent and rather mundane, but because it happens at such an incredible rate of speed and involving unimaginable numbers of particles, we simply can't grasp it all:

(Not loaded: yXHWJ4iUlZs)
(View video on YouTube)[/QUOTE]

Thanks for the video. At first, I thought I was meant to be paying attention to the slowed-down passage of light. It took me more than a minute to realise that the video was a clever piece of neuromarketing but by then it was too late, as I'd already paused it to go buy a particular brand of soft drink, as the clever corporate fiends paying for the video no doubt intended me to.

But seriously, yes, I wouldn't disagree with you inasmuch as your new analogy makes plausible sense. There is no theatre, and the conscious thoughts, when they occur, are part of the brain, and it would, I think, be surprising if they had no effect at all.

All I would say is that I don't think the previous analogy, where the pictures have no or virtually no effect on the running of the machine, is invalidated (even if I instinctively prefer the second one). The former position is, roughly speaking, epiphenomenalism, which appears to be out of favour among philosophers these days, albeit not refuted.

"Epiphenomenalism is the view that mental events are caused by physical events in the brain, but have no effects upon any physical events."
https://plato.stanford.edu/entries/epiphenomenalism/

In partial support of the former analogy (and by extension epiphenomenalism) there are, of course, the notorious (and, opponents would say, flawed) Libet experiments and their many successors, in which it appears (not conclusively) as if conscious awareness of something arises after the facts (the 'facts' being the non-conscious processes which precede it).

I don't think we can yet rely on these experiments to confirm a very minor (if any) role for conscious awareness, but I do think they are nonetheless troubling and interesting, albeit that they involve certain laboratory-type decision situations which may be among the more simplistic, and are not yet sophisticated enough to allow for only one possible interpretation.

Still, these experiments may be considered at least somewhat objective evidence, going beyond (or under) the subjective sensation that 'it feels like' I raise my right arm, for example, because I consciously intended to. I have seen the philosopher John Searle use that sensation as a supposed demonstration of what people tend to think of as conscious causality, to me not very convincingly, given the doubts raised by the aforementioned collection of neuro-experiments, of which Libet's are by now the most out of date.
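The temporal claim at issue can be made concrete with a toy timeline of a Libet-style trial. The millisecond values below are the rough orders of magnitude reported in the classic experiments (readiness potential around half a second before a spontaneous movement, reported conscious intention around 200 ms before); treat them as from-memory illustrative placeholders, not data.

```python
# Toy Libet-style trial: event times in ms relative to the movement at t = 0.
# The values are rough, commonly cited orders of magnitude, not measurements.
events = {
    "readiness_potential_onset": -550,     # recorded brain activity ramps up
    "reported_conscious_intention": -200,  # the felt moment of 'deciding' (W)
    "movement": 0,                         # e.g. the wrist flexion
}

# Sort events by time to display the contested ordering.
ordered = sorted(events, key=events.get)
print(" -> ".join(ordered))
# readiness_potential_onset -> reported_conscious_intention -> movement
```

The contested interpretive step, of course, is whether 'preparation precedes report' licenses 'report is not causal'; the toy timeline only shows the ordering, not the inference.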

Even if we were to run with the second option (at least some causality), there are still the questions of 'how, when and in what way causal?', and others, such as when something crosses a threshold into consciousness, given that so much of what our brains do seems to operate under the radar.

My own suspicion is that even if consciousness does have a causal role, it is nowhere near as strong as, and may be quite different from, what we tend to think.

I wonder if it would be interesting to consider one sort of 'pictures', the ones which seem to be generated when we are in dream sleep (accepting the caveat that this involves different brain states to waking life). This seems a bit more like 'us watching a film' passively than does waking consciousness. Most of us don't normally have a sense of control (though some apparently do, lucid dreamers) or as strong a sense of self (even though we normally have a particular '1st person-esque viewing position') as in waking life. As such, it seems a little bit easier to venture to say that the dream pictures don't have much of a causal effect.

They still could, of course, but on the other hand I think they could still just be epiphenomena. In which case, for example, we don't wake up in a cold sweat because the film gets unbearably scary; rather, processes in our bodies and brains are automatically, electro-chemically, building towards a state of 'excitement' in any case, and the film is only reflecting, not quite instantaneously, something that is about to happen anyway, or reporting it just after, rather than causing it. We used to think that thoughts were instantaneous, before we learned that processing them takes time. If processing to consciousness is slower than processing non-consciously, wouldn't all conscious thoughts be news reports?

Granted, even if that were the case, we all know that news reports can subsequently be causal in certain ways (in a feedback-looping way, for example), just not in the way we tend to think of our conscious thoughts as being (immediately) causal, relevant to the 'live' events they report. I put 'live' in inverted commas because we are all familiar with the time lags involved when we watch something 'live' on TV.
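The 'news report' picture can itself be sketched as a toy loop: a report that always lags the state by one step never causes the transition it describes, yet still feeds back into later transitions. The 'high'/'low' rule and all numbers here are invented purely for illustration.

```python
# Toy 'delayed news report': the report available at step t describes the
# state at step t-1, so it cannot cause the current transition, but it can
# still nudge later ones. Entirely illustrative.

def run_system(steps):
    state = 0
    report = None                                 # no report yet at the start
    log = []
    for t in range(steps):
        nudge = 1 if report == "high" else 0      # the OLD report feeds back
        log.append((t, state, report))
        new_report = "high" if state > 2 else "low"  # reporting lags a step
        state = state + 1 + nudge
        report = new_report
    return log

for entry in run_system(6):
    print(entry)
# At t=3 the state is already 3, but the available report still says 'low':
# the report is stale 'news', yet it alters the transition taken at t=4.
```

So a strictly after-the-fact report can still be downstream-causal, which is the distinction ruby draws between immediate causality and feedback-loop causality.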

This might not be a trivial distinction, if, at any given instant, a conscious thought is not in fact causing something, even if prior or looping conscious thoughts are having 'some sort of feedback effect' on the evolving automatic, non-conscious processes which are actually doing all the rolling work relevant to the decision/action at hand.
Last edited by ruby sparks on Wed Jun 14, 2017 11:11 am, edited 49 times in total.

Koyaanisqatsi
Posts: 8403
Joined: Fri Feb 19, 2010 5:23 pm

Post by Koyaanisqatsi » Wed Jun 14, 2017 12:24 pm

A lot to chew on. I would say a wet dream is a very direct causal effect of a dream, but you are also forgetting that the causal effect of dreaming is so strong that we evolved powerful muscle-paralysis mechanisms to prevent (most of) us from sleepwalking and other forms of sleep-oriented activities.

As to all conscious thoughts being "news reports," isn't that exactly what is happening? The self--the part that we think of as "we"--is a generated/animated construct of the brain and as such it is only given so much information as is minimally required not to get the body killed in most instances, but the brain/body collects (and discards) trillions more bits of information than the conscious self will ever be aware of. Your optic system alone discards/degrades (however you wish to look at it, pun intended) tremendous amounts of information before sending the remaining information on to the higher cognitive processing centers. There's more here that you may find interesting (or as a springboard for research).

The point being that the body is basically one giant sensory input (output) device--starting with the "six" senses; the skin connected to the nerves, with the ears, eyes, nose and throat as primary, etc., etc., etc.--that collects and processes "telemetry" data of the external world, all circulating around but primarily processed in the now four brains: the reptilian; the mammalian; the homo sapiens sapiens (neocortex); and now, in some as yet to be fully determined fashion, the gut. Before getting to those "higher" functions, however, the body filters most of that information down to just certain amounts (estimates--see links above--vary as to how much).
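The filtering claim is sometimes put in rough numbers (e.g. in Nørretranders' The User Illusion): on the order of ten million bits per second of sensory input against a few tens of bits per second of conscious throughput. Those figures are widely quoted but contested estimates, so the arithmetic below is a back-of-envelope sketch, not measurement.

```python
# Back-of-envelope version of the 'filtering' claim. Both figures are
# commonly cited, contested estimates, not measurements.
sensory_bits_per_sec = 11_000_000   # rough estimate: all senses combined
conscious_bits_per_sec = 40         # rough estimate: conscious bandwidth

fraction = conscious_bits_per_sec / sensory_bits_per_sec
print(f"conscious share of input: {fraction:.2e}")  # on the order of a few millionths
```

Whatever the exact figures, the ratio is the point: almost everything the system processes never reaches the 'news report' at all.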

Basically we're just walking information input devices with a complicated system of information processing devices all evolved to ensure that the next nano-second of every nano-second isn't our last nano-second. We're talking about uncountable quadrillions of bits of information (or whatever the astronomically large number of bits of information that constantly bombard and trigger our nearly equally large number of receptor cells may be at any given point in space-time); just one constant flow of information animated at both the "mind" level in a digital sense and the body level in a quantum sense; one unique energy state followed by another unique energy state followed by another unique energy state ad infinitum, for which "we" are only consciously generated for about 120 solar orbits upper limit.

We are essentially a walking talking shitting film crew that is constantly recording the external world, translating that information from analogue to digital in our brains and running a concurrent virtual editing bay of that movie, with a latency of milliseconds between what was just shot, recorded, transferred to digital and edited for playback, before the next unique energy state is photographed.

And the user illusion is the fill-in-the-gaps effect of the animated self's unique status of being outside the system at the same time it is inside the system (i.e., superimposed onto the maps) and can thus have a "meta" unique feedback perspective of measuring the changes in states and reporting that back as additional bits of information for the brain to then incorporate in the next nano-second's updated map/track of the internal virtual maps of the external. Over and over and over, trillions of times every nano blah blah blah.

Overwhelmingly large, but still reducible to something like that video I posted of the slowed down photon stream (and no, that was not an example of neuromarketing).
Stupidity is not intellen

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Wed Jun 14, 2017 3:27 pm

[quote=""Koyaanisqatsi""]I would say a wet dream is a very direct causal effect of a dream..[/quote]

So might I. It certainly 'feels like it'. But on the other hand, as far as I know, nocturnal emissions are not always associated with REM sleep (though usually are), erections (in both men and women) happen sporadically throughout the night (again, as far as I know, not always during erotic REM dreams). I'm not even 100% sure if all emissions are associated with erections (though I'm guessing they usually are).

Even, say, if wet dreams were always associated with erotic REM dreams and erections, this would still be a chicken and egg problem, where the causes could be lots of things: automatically increased blood flow during REM sleep (due to the brain needing more oxygen to function in its more active state compared to non-REM sleep), one's todger rubbing against the sheets, or, if one is really lucky, one's partner's hand falling across one's lap... etc., etc. In that scenario, the erotic images may be caused (triggered by signals arriving in the brain from bodily/physical arousal) rather than themselves causal.

Of course, it would be easy to cite rapid interplay (no, not the stuff after foreplay and before afterplay) between the two things (feedback loops running over and over and over, trillions of times every nano blah blah blah) and that is likely to be happening, but I'm not sure that doesn't end up as handwaving in terms of being an explanation for my specific question, because it necessarily obfuscates and to an extent assumes the answer, 'yes' to the (to me annoying) question of whether consciousness can be causal. That's a question I doubt we'll answer here. :(

I probably come at this with a fondness for illusions, for explaining the real magic, as Dennett might say, exposing the bags of tricks, for spoiling the fun as some extreme pragmatists might say. Our conscious experience is almost certainly full of them. In that sense, we could say that a lot of consciousness is (i.e. involves) illusions, so......is the idea that thoughts are causal one of those? It's skepticism on my part, basically.

[quote=""Koyaanisqatsi""]... but you are also forgetting that the causal effect of dreaming is so strong that we evolved powerful muscle-paralysis drugs to prevent (most of) us from sleepwalking and other forms of sleep-oriented activities.[/quote]

Well, it's not a case of forgetting, it's more a case of not necessarily taking it that the paralysis is caused by the dreams. In fact, my guess would be that it's not. Without saying you're wrong (how would I be in a position to know that?), it seems to me a bit more of a stretch, even beyond what we have been talking about already, to say that the dream content is causing muscle-inhibiting neurotransmitters to deploy. The state of being in deep/REM sleep might still be associated with the paralysis, however, and we are stuck at chicken and egg.

One other thing, regarding sleepwalking: apparently this occurs most frequently at a stage before REM sleep, and does not necessarily or always involve complicated dreaming, or sometimes any dreaming at all (allowing for the limitations of self-reporting). As such, our folk psychology assumptions may be awry about that at least. Again, chicken and egg: even if there were mental content during sleepwalking, we have no reliable means of concluding that any mental states are either causal, or merely caused.

https://www.sleepassociation.org/patien ... p-walking/

[quote=""Koyaanisqatsi""]As to all conscious thoughts being "news reports," isn't that exactly what is happening? The self--the part that we think of as "we"--is a generated/animated construct of the brain and as such it is only given so much information as is minimally required not to get the body killed in most instances, but the brain/body collects (and discards) trillions more bits of information than the conscious self will ever be aware of. Your optic system alone discards/degrades (however you wish to look at it, pun intended) tremendous amounts of information before sending the remaining information on to the higher cognitive processing centers. There's more here that you may find interesting (or as a springboard for research).

The point being that the body is basically one giant sensory input (output) device--starting with the "six" senses; the skin connected to the nerves, with the ears, eyes, nose and throat as primary, etc., etc., etc.--that collects and processes "telemetry" data of the external world, all circulating around but primarily processed in the now four brains: the reptilian; the mammalian; the homo sapiens sapiens (neocortex); and now, in some as yet to be fully determined fashion, the gut. Before getting to those "higher" functions, however, the body filters most of that information down to just certain amounts (estimates--see links above--vary as to how much).

Basically we're just walking information input devices with a complicated system of information processing devices all evolved to ensure that the next nano-second of every nano-second isn't our last nano-second. We're talking about uncountable quadrillions of bits of information (or whatever the astronomically large number of bits of information that constantly bombard and trigger our nearly equally large number of receptor cells may be at any given point in space-time); just one constant flow of information animated at both the "mind" level in a digital sense and the body level in a quantum sense; one unique energy state followed by another unique energy state followed by another unique energy state ad infinitum, for which "we" are only consciously generated for about 120 solar orbits upper limit.

We are essentially a walking talking shitting film crew that is constantly recording the external world, translating that information from analogue to digital in our brains and running a concurrent virtual editing bay of that movie, with a latency of milliseconds between what was just shot, recorded, transferred to digital and edited for playback, before the next unique energy state is photographed.

And the user illusion is the fill-in-the-gaps effect of the animated self's unique status of being outside the system at the same time it is inside the system (i.e., superimposed onto the maps) and can thus have a "meta" unique feedback perspective of measuring the changes in states and reporting that back as additional bits of information for the brain to then incorporate in the next nano-second's updated map/track of the internal virtual maps of the external. Over and over and over, trillions of times every nano blah blah blah.
[/quote]

Thanks. Interesting. I can't think of anything there to disagree with. I'll read those links later. At a quick glance, one of them seems to discuss the idea that our brains do not so much react to incoming stimuli, as produce their own, internally, and in fact not so long ago some material was posted here at SC (by Sub, I think) to suggest that our brains are, effectively and primarily, 'ongoing prediction' machines, generating their own news footage beforehand and then constantly updating it by comparison with incoming data.

Again, how much of this predicting takes place in consciousness and how much is non-conscious information-processing, I don't know. Intuitively, it feels like it would need to be in consciousness in order to be properly processed, but does it? After we have accidentally dropped the raw egg (definitely not the chicken in this case) onto the concrete floor, is there time for our brains to form conscious awareness of the predicted 'egg smashing on floor' scenario, even supposing our brains do work this way (i.e. so that the smashing of the egg 'merely' confirms a prediction, while the egg bouncing up again would force us to reconsider our assumptions and predictions)?
Last edited by ruby sparks on Wed Jun 14, 2017 4:28 pm, edited 33 times in total.

User avatar
subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Wed Jun 14, 2017 5:22 pm

Just a note on the causal power of thoughts in language. Return to the Ramsey, Stich and Garon paper. One of the conclusions is that if you make a decision through logical reasoning, there is no lower level that could simulate that process. It might be supervenient upon it, but supervenience does not imply that the supervenient process is epiphenomenal.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Wed Jun 14, 2017 9:07 pm

I don't remember the author, but I have heard that described as the systems theory or holistic 'remainder hypothesis' where the remainder is the identities left over once the elements of the system have been eliminated.

ETA: which paper are you referring to?

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Wed Jun 14, 2017 10:43 pm

[quote=""subsymbolic""]Just a note on the causal power of thoughts in language. Return to the Ramsey, Stich and Garon paper. One of the conclusions is that if you make a decision through logical reasoning, there is no lower level that could simulate that process. It might be supervenient upon it, but supervenience does not imply that the supervenient process is epiphenomenal.[/quote]

Did I cite the right paper to plebian? Having re-scanned it, I didn't find it easy to locate that particular point you are making. My impression, after reading it the first time, was that it was mainly concerned with connectionism, which seems a slightly different issue. As such I'm not sure I understand what you say.

Nevertheless, if I were to try to reply, I might (again) wonder if talk of levels is 'merely' an issue of description rather than ontology.

As to supervenience, I admit the varieties of ways this is thought of (weak, strong, global etc) sometimes confuse me, so what I now say may therefore be confused. Supervenience does not imply epiphenomenality; supervenient properties may not be epiphenomenal ones. Ok. So far so good.

But something or some process may have properties which are supervenient but not causal. So if I am hit on the head by a falling banana, the yellowness of the banana is not causal regarding the bump. Does that make any relevant sense? Lol.
Last edited by ruby sparks on Wed Jun 14, 2017 11:22 pm, edited 1 time in total.

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Wed Jun 14, 2017 11:00 pm

Ps

Having thought about it...I think I might better see koy's point about muscle inhibition during dreaming sleep (and I think it's fair to say that the two are correlated).

The suggestion is that the muscle paralysis happens (has evolved) to protect us from the (mental) dream content causing us to act it out. This I believe has been borne out in studies on cats where the relevant inhibitory nerves were cut resulting in the cats 'stalking imaginary prey'.

On the face of it, this seems a tricky thing for an epiphenomenalist to explain.

But perhaps the mental images are a bit like the yellowness of the banana, in that they are freeloading and causally irrelevant to a process which would happen anyway, for other, non-mental reasons (the same ones that are causing the mental images in the first place). In this sense, the physical causes alone would be sufficient.

Or to look at it another way, if we need conscious thoughts in order to cause us to act, where is the role for non-conscious causality?
Last edited by ruby sparks on Wed Jun 14, 2017 11:55 pm, edited 2 times in total.

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Wed Jun 14, 2017 11:21 pm

Pps

I am using distinctions such as mental and physical for convenience and without necessarily making any commitment to them being ontologically separate.

That's just in case I sound like a dualist of some sort. :)

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Wed Jun 14, 2017 11:28 pm

[quote=""ruby sparks""]
subsymbolic;673405 wrote:Just a note on the causal power of thoughts in language. Return to the Ramsey, Stich and Garon paper. One of the conclusions is that if you make a decision through logical reasoning, there is no lower level that could simulate that process. It might be supervenient upon it, but supervenience does not imply that the supervenient process is epiphenomenal.
Did I cite the right paper to plebian? Having re-scanned it, I didn't find it easy to locate that particular point you are making. My impression, after reading it the first time, was that it was mainly concerned with connectionism, which seems a slightly different issue. As such I'm not sure I understand what you say.

Nevertheless, if I were to try to reply, I might (again) wonder if talk of levels is 'merely' an issue of description rather than ontology.

As to supervenience, I admit the variety of ways it is thought of (weak, strong, global etc.) sometimes confuses me, so what I now say may be confused. Supervenience does not imply epiphenomenal; supervenient properties may not be epiphenomenal ones. Ok. So far so good.

But something or some process may have properties which are supervenient but not causal. So if I am hit on the head by a falling banana, the yellowness of the banana is not causal. Does that make any relevant sense? Lol.[/QUOTE]
Sure. But qualia are a special kind of epiphenomenal activity and are not generally considered causal in and of themselves. I tend to think of qualia as experiential data. So, its yellowness may not be causal, but ideas about its yellowness might be. Basically, where does the if-then type of sequence come from in a totally eliminativist perspective? Let's say I tell you that green bananas are cursed by ancient gods and that you will fall into bad luck if you get hit on the head with one, so you act differently when threatened with a green banana than with a yellow 🍌. Where is the eliminated idea or belief in a physicalist framework?

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Thu Jun 15, 2017 9:03 am

[quote=""plebian""]Sure. But qualia are a special kind of epiphenomenal activity and are not generally considered causal in and of themselves. I tend to think of qualia as experiential data. So, its yellowness may not be causal, but ideas about its yellowness might be.[/quote]

Ok, but yellowness (if experienced as qualia, which I wasn't necessarily thinking of, but let's not complicate things by asking whether it has yellowness of itself, unobserved) can, it seems, go straight to an emotional response without any intervening conscious ideation. Not that this rules out it resulting in conscious ideation as well. I think I would have to say that if conscious thoughts are causal, so are qualia, though perhaps you would say that 'pure qualia' experiences are not 'thinking', in which case I would have to amend what I said to say that conscious experiences are causal.


[quote=""plebian""]Basically, where does the if-then type of sequence come from in a totally eliminativist perspective? Let's say I tell you that green bananas are cursed by ancient gods and that you will fall into bad luck if you get hit on the head with one, so you act differently when threatened with a green banana than with a yellow 🍌. Where is the eliminated idea or belief in a physicalist framework?[/quote]

I can't easily counter that. I'm not sure why 'physicalist' needs to be in there, because I can't easily counter it whether I adopt that stance or not*. The bottom line is that yes, it does strongly seem that beliefs are causal, whether they are consciously held in mind or have been absorbed into non-conscious processes (both, in other words, because I think the latter would have to be included).

After a bit of browsing, I see that Stephen Stich (co-author of the article linked to above) has written a book entitled "From Folk Psychology to Cognitive Science", with the subtitle "The Case Against Belief", though I can't find a free pdf copy and wouldn't have time right now (off to Manchester early tomorrow morning) to read a book anyway. From the partial snippets I have skimmed, my initial response is that it sounds like a very big ask to eliminate beliefs, be they causal or otherwise.

As such, consider me at this point swayed by your question in the direction of causality of beliefs.

I suspect sub will have much better-formed views on this than I, judging by stuff he has said in the past.

One thing I'm not yet clear on is where connectionism comes into this. I'm thinking that all connectionism is saying is that stuff (processes, brain states) is connectionist, not modular or discrete. I'm not sure why there could not be room in that for non-modular, non-discrete connectionist beliefs. But it seems I missed something in the connectionist paper, about language and semantics. Perhaps it has something to do with what wiki says here (when citing the Stich 'case against belief' book):

"It has also been argued against folk psychology that the intentionality of mental states like belief imply that they have semantic qualities. Specifically, their meaning is determined by the things that they are about in the external world. This makes it difficult to explain how they can play the causal roles that they are supposed to in cognitive processes."
And then immediately after, here (when citing the Ramsey, Stich & Garon 'connectionism' paper):

"In recent years, this latter argument has been fortified by the theory of connectionism. Many connectionist models of the brain have been developed in which the processes of language learning and other forms of representation are highly distributed and parallel. This would tend to indicate that there is no need for such discrete and semantically endowed entities as beliefs and desires."

https://en.wikipedia.org/wiki/Eliminative_materialism
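To make that 'highly distributed and parallel' storage concrete, here is a minimal sketch of my own (an illustration, not anything from the paper or the wiki article) using a Hopfield-style associative memory: every stored pattern is superposed into one and the same weight matrix, so no individual weight or location holds any one pattern, yet each pattern can still be recalled from a corrupted cue.

```python
import numpy as np

# Illustrative sketch: content stored superpositionally across a network.
# All patterns share ONE weight matrix; none has a discrete location.

def store(patterns):
    """Hebbian learning: superpose every +1/-1 pattern into one weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, probe, steps=10):
    """Settle a (possibly corrupted) probe toward the nearest stored pattern."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0  # break ties
    return s

# Two orthogonal patterns stored in the same matrix
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])
w = store(patterns)

noisy = patterns[0].astype(float)
noisy[0] *= -1  # corrupt one unit
restored = recall(w, noisy)
print(restored)  # settles back to the first pattern despite the corruption
```

The point of the toy example is just the RSG-style moral: you cannot point at any part of `w` and say "that weight is the first pattern"; the stored contents only exist smeared across the whole matrix.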




* ETA: What I mean is, I think I might be able to give a physicalist account. It's a non-causal account I might struggle with.
Last edited by ruby sparks on Thu Jun 15, 2017 10:15 am, edited 8 times in total.

User avatar
ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Thu Jun 15, 2017 9:38 am

Getting back to yellowness, or better still banana-ness, and thinking of it as an emergent property: it would seem that it doesn't (perhaps arguably can't) have downward causality back into the stuff it emerged from, whether that's the banana or my head (going back to the banana landing on my head scenario). In the same way, the car-ness of a car is not causing anything for the car*, and, less obviously perhaps, the flocking pattern of a flock of birds is not causing anything for the flock (that is to say, the emergent pattern itself is not causal).

But consciousness and/or conscious thinking do not seem to be the same sort of (abstract?) emergent properties as banana-ness and car-ness. Aren't the former so foundationally real that they form the basis of one of the most reliable of all philosophical statements about actual existence, 'I think therefore I am'?






* ETA: Although... can the car-ness of the car and the banana-ness of the banana be causal if perceived by, say, a human brain (and not, as with the falling banana, just physically hitting my head, unseen)? I think I'd have to say yes, mainly because I can't at this point find a good way to say no, even when setting pragmatic issues and perspectives aside and trying to be realist, reductionist and/or eliminativist. It seems the car-ness, when consciously perceived/processed, is actually causal. Which would reduce my OP question (and its implied case for illusion) to asking 'in what way causal' and 'in what way illusory' (i.e. which parts involve illusions), with the principle of some sort of causality intact, at this point.

At this point, I would tend to say that conscious thoughts/experiences are neither necessary nor sufficient for causality in humans. I'd say that consciousness is just one route. Then we can still ask whether the role of consciousness is overestimated in folk psychology. Which is perhaps a way of trying not to throw the baby out with the bathwater.
Last edited by ruby sparks on Thu Jun 15, 2017 10:14 am, edited 26 times in total.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Thu Jun 15, 2017 6:23 pm

Yellowness isn't an emergent property though. It's an assessment of sensory input. Qualia are in a marginally different category from concepts.

User avatar
subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Thu Jun 15, 2017 7:24 pm

The point from the RSG paper is about individuation. The 'symbol shuffling' account of beliefs and desires simply doesn't fit with the connectionist account of content spread superpositionally across a network. Hence, if content is stored non-conceptually across a brain, FP is false. It is, of course, but even then that wasn't much of a gamble. This was after Rumelhart and McClelland, don't forget. Rosenblatt was vindicated and Minsky was making excuses. The biology followed swiftly...

As for beliefs, my take is that they are quintessentially public states. They never actually connect directly to the qualia we judge. Privileged access to private phenomenal states is just a matter of being in the right place to see them. The judgement that one is in pain isn't a pain...

Unless you are Dennett, because he's a linguistic behaviourist.

All of this is Wittgenstein, of course.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Thu Jun 15, 2017 8:02 pm

[quote=""subsymbolic""]The point from the RSG paper is about individuation. The 'symbol shuffling' account of beliefs and desires simply doesn't fit with the connectionist account of content spread superpositionally across a network. Hence, if content is stored non-conceptually across a brain, FP is false. It is, of course, but even then that wasn't much of a gamble. This was after Rumelhart and McClelland, don't forget. Rosenblatt was vindicated and Minsky was making excuses. The biology followed swiftly...

As for beliefs, my take is that they are quintessentially public states. They never actually connect directly to the qualia we judge. Privileged access to private phenomenal states is just a matter of being in the right place to see them. The judgement that one is in pain isn't a pain...

Unless you are Dennett, because he's a linguistic behaviourist.

All of this is Wittgenstein, of course.[/quote]
This seems to be missing the mark to me. Trying to find the symbols being shuffled seems like a wrong-headed approach; looking for pattern-making processes seems a lot more obvious. For example, a few years ago some neurologists (I forget who) figured out that mice may have literal spatial analogs in some subsection of their brains: that location and mapping functions utilize a network that makes an actual map analog. In a pattern-making process there isn't a location for a particular pattern, but there may be connected locations for particular types of patterns.

User avatar
subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Thu Jun 15, 2017 10:15 pm

[quote=""plebian""]
subsymbolic;673459 wrote:The point from the RSG paper is about individuation. The 'symbol shuffling' account of beliefs and desires simply doesn't fit with the connectionist account of content spread superpositionally across a network. Hence, if content is stored non-conceptually across a brain, FP is false. It is, of course, but even then that wasn't much of a gamble. This was after Rumelhart and McClelland, don't forget. Rosenblatt was vindicated and Minsky was making excuses. The biology followed swiftly...

As for beliefs, my take is that they are quintessentially public states. They never actually connect directly to the qualia we judge. Privileged access to private phenomenal states is just a matter of being in the right place to see them. The judgement that one is in pain isn't a pain...

Unless you are Dennett, because he's a linguistic behaviourist.

All of this is Wittgenstein, of course.
This seems to be missing the mark to me. Trying to find the symbols being shuffled seems like a wrong-headed approach; looking for pattern-making processes seems a lot more obvious. For example, a few years ago some neurologists (I forget who) figured out that mice may have literal spatial analogs in some subsection of their brains: that location and mapping functions utilize a network that makes an actual map analog. In a pattern-making process there isn't a location for a particular pattern, but there may be connected locations for particular types of patterns.[/QUOTE]

Of course; how else would you instantiate a serial process on a parallel processor but via a virtual machine? However, I'd love to know how they discovered that outside of the parts of the brain that do vision. Across a visual sheath, sure, but outside of specific task-handling regions that are firm- or hard-wired, I doubt they could solve the philosophical issues this decade, let alone the scientific ones.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Thu Jun 15, 2017 11:20 pm

A quick phone-based Google search gave me this:
The hippocampus is known to have a central role in spatial learning and memory. Place cells in this structure form a spatial map of an animal’s environment (1). The main input to the hippocampus comes from the entorhinal cortex, an area that integrates cortical inputs before forwarding them into the hippocampal formation. The dorsal medial entorhinal cortex (MEC) contains spatially modulated cells (grid cells) that fire at the nodes of a hexagonal lattice as the animal traverses its environment (2). Beyond hippocampal place cells and MEC grid cells, there is also extensive evidence for another type of spatially modulated neurons that increase firing when the animal’s head points in a specific direction [head direction (HD) cells] (3). These cells reside in several subcortical and cortical regions, including the retrosplenial cortex (RSC). The RSC is one of the most important cortical afferents to MEC (4, 5). It is the most caudal part of the cingulate cortex, positioned between anterior cingulate, parietal/visual, and parahippocampal cortices. It is further subdivided into areas A29a–c (granular cortex) and area A30 (dysgranular cortex) (6). The RSC has been implicated in a number of cognitive tasks, including navigation based on external spatial (allothetic) information (7).
http://m.pnas.org/content/111/23/8661.full

Anyway, the point I was getting at is that an idea shouldn't be expected to have a specific location, other than a location to within a few feet.
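As an aside, the grid cells in the quote above ("fire at the nodes of a hexagonal lattice") have a standard mathematical idealisation: sum three plane-wave cosines whose wave vectors sit 60 degrees apart, and the result peaks exactly on a hexagonal lattice. Here is a rough sketch of my own (an illustration only; the function and parameter names are made up, not from the cited paper):

```python
import numpy as np

def grid_rate(x, y, spacing=1.0, phase=(0.0, 0.0)):
    """Idealised grid-cell firing rate at position (x, y).

    Sum of three cosine gratings with wave vectors 60 degrees apart;
    the rate peaks (value 3) at the nodes of a hexagonal lattice
    with the given node spacing, offset by `phase`.
    """
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for that node spacing
    rate = 0.0
    for theta in (-np.pi / 6, np.pi / 6, np.pi / 2):  # -30, 30, 90 degrees
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return rate

# Peaks at the origin and again one node-spacing away; low in between
print(grid_rate(0.0, 0.0), grid_rate(1.0, 0.0), grid_rate(0.5, 0.0))
```

Which fits the point being made: the 'map' is a property of the whole firing pattern over space, not something pinned to one neuron or one spot.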

Post Reply