[quote=""Koyaanisqatsi""]Well, then who/what would be interpreting the conclusions of the algorithms? Interpretation is a judgement; an act of agency iow. You could set the interpretation machine ("IM") to a probability threshold, I suppose, such that when it calculated say a 70% or more chance of X it chooses Y, but only as a function of imbuing the bias/prejudices of the programmer. Otherwise, what is it calculating against?[/quote]
The 'what' doing the interpretation (and acting on it) would be the machine, and yes it would be a rational agent. A thermostat is arguably a minimally rational agent.
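To make the thermostat point concrete, here is a minimal illustrative sketch (my own toy example, not any real device's firmware) of such a "minimally rational agent": it senses, applies a fixed threshold rule, and acts. The "interpretation" is nothing more than a comparison against a setpoint supplied by whoever configured it, which is exactly the sense in which the programmer's choices are baked in.

```python
def thermostat(temperature_c, setpoint_c=20.0, hysteresis_c=1.0):
    """Return an action given a sensed temperature.

    The 'interpretation' here is just a threshold comparison; the
    bias (the setpoint) comes from whoever configured the device.
    """
    if temperature_c < setpoint_c - hysteresis_c:
        return "heat_on"
    if temperature_c > setpoint_c + hysteresis_c:
        return "heat_off"
    return "no_change"  # within the deadband, do nothing

print(thermostat(17.5))  # prints "heat_on": senses cold, acts
print(thermostat(23.0))  # prints "heat_off"
```

The names and numbers are arbitrary; the point is only that sensing, "judging" against a criterion, and acting can all happen with no conscious narrative in sight.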
Another, more sophisticated example might be an autonomous (self-driving) car, which, in the sales literature at least, offers advanced control systems that interpret sensory information and allow the car to navigate its environment without human input.
[quote=""Koyaanisqatsi""]Well, ALL people are susceptible to advertising. That's why corporations spend billions every year on it. The question marketers want to answer is whether or not their particular strategies result in higher sales, but we needn't get into all of that.[/quote]
No, we needn't. But if you're right, and such marketing techniques do work (I'm sure there's a debate to be had about the extent), then that would partially endorse the point itself.
Of course, as a forensic psychologist said to me at a barbecue a few days ago, it has always, arguably, been easier to predict what 1000 people will do than what 1 person will do, and I recognise the difference.
My point was to sketch out a case, in principle, for a scenario, much of it as yet unrealised (though still evolving), in which conscious human (possibly chimeric) narrative cognition may not be the only game in town. I'm not saying that machines are now, or perhaps ever will be, as sophisticated as the successful kludge which is the human brain. I think that would be foolish and too speculative. That won't stop them being used more and more.
As such, I accept all your objections. I might only say that the sophistication of machines is on an upward trajectory (and there are some, apparently, who argue that human functioning is on a downward one, due to a supposed reduction in selection pressures).
I agree that when it comes to the brain, there is currently no analogy or metaphor which fits (I also, incidentally, take the same view when discussing abortion, for which it seems to me all the analogies fall well short), and nothing we know of is comparably unique. AI and robotics seem a long way off, even if improving. But it is very, very early days. And bear in mind that an alternative would not necessarily have to be 'like the human brain' in order to be effective. Comparisons with the human brain could be seen as anthropocentric in essence, and as such a lack of metaphor may not be a fully relevant consideration.
[quote=""Koyaanisqatsi""]Again, I think they'd just be wildly guessing; the equivalent of throwing a dart at a dartboard.[/quote]
I reckon machines will be able to, and indeed in some areas already can, decide as well as or better than humans in certain environments, and not just at playing chess. Driving on roads may soon be one of them. Diagnosing and treating medical conditions may be another. I don't know what the future holds (I don't even understand what stuff like recursive algorithms, bioengineering, evolutionary computation and bootstrapping applications are). But some are far more bullish than me:
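For what it's worth, one of those terms, evolutionary computation, is easy to illustrate. The following is a toy sketch of my own (not any particular researcher's system): candidate solutions are repeatedly mutated and filtered by a fitness function, and something workable emerges with no human deciding the intermediate steps. The target value and parameters are arbitrary assumptions for the demonstration.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def fitness(x):
    # Hypothetical goal: get x as close to 42 as possible.
    return -abs(x - 42)

# Start from random guesses across an arbitrary range.
population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(200):
    # Selection: keep the fitter half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: refill the population with mutated copies.
    population = survivors + [x + random.gauss(0, 1.0) for x in survivors]

best = max(population, key=fitness)
print(round(best, 2))  # lands very close to 42
```

Nothing in the loop "understands" the goal; selection pressure alone does the work, which is partly why people draw the analogy with biological evolution.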
"Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone."
"Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks."
https://en.wikipedia.org/wiki/Superinte ... telligence
Note that these are not alternative methods of operation FOR humans, just ones that affect them*. Rival games that are 'out there' (on the pitch), if you like. When it comes to humans themselves, there seems to be a lot of truth in saying that the way we operate is the only game in town, for us, even if it is a kludge, chock-full of illusions and cognitive biases, many non-conscious.
Though I hold the opinion that the game can gradually, perhaps subtly, develop if, for example, the size and shape of the pitch alters (such as when we are confronted with evidence, perhaps obtained more objectively than we can manage, that something is an illusion). This can force us to at least reassess the nature of the game, which may in turn at least slightly affect how we - our systems - play it.
* at least not until some sort of hybrid/biomachine technology directly, internally interacts with our neurology.