Monday, August 10, 2020

Ecology of the Himalayas: WUaS News Q&A Livestream 8/10/20 10am PT: 1) Seeking matriculating undergrads for 9/1/20 on emerging WUaS Open edX PLATFORM via MOOCit * * * Agency? (robot free will?) "In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance" #PhilosophicalAgency ~ * Given this definition of agency, isn't the robot in the video in this blog post, picking yellow-green peppers, exhibiting autonomy or agency by locating and picking the peppers without human assistance?



WUaS News Q&A Livestream 8/10/20 10am PT: 1) Seeking matriculating undergrads for 9/1/20 on emerging WUaS Open edX PLATFORM via MOOCit https://scott-macleod.blogspot.com/2020/08/indian-rose-chestnut-world-univ-sch.html https://youtu.be/k3wlb4TlXtc 2) Email info@ worlduniversityandschool.org to participate 3) open MBM Minutes from 8/5/20 https://scott-macleod.blogspot.com/2020/08/legges-hawk-eagle-minutes-for-sat-jul.html ~

https://twitter.com/WorldUnivAndSch/status/1292647710210719744?s=20
https://twitter.com/scottmacleod/status/1292647975932407808?s=20
https://twitter.com/sgkmacleod/status/1292648314488238080?s=20
https://twitter.com/WUaSPress/status/1292648474136080384?s=20
https://twitter.com/HarbinBook/status/1292648762700005377?s=20
https://twitter.com/TheOpenBand/status/1292649183631962118?s=20




*
Hi Rohit and Universitians, 

In about 30 minutes, the WUaS Livestream begins - https://www.youtube.com/user/WorldUnivandSch/live (from https://www.youtube.com/user/WorldUnivandSch) - via 8x8:

WUaS News Q&A Livestream 8/10/20 10am PT: 1) Seeking matriculating undergrads for 9/1/20 on emerging WUaS Open edX PLATFORM via MOOCit https://scott-macleod.blogspot.com/2020/08/indian-rose-chestnut-world-univ-sch.html https://youtu.be/k3wlb4TlXtc 2) Email info@ worlduniversityandschool.org to join video conversation, ask questions, share ideas 3) open MBM Minutes from 8/5/20 https://scott-macleod.blogspot.com/2020/08/legges-hawk-eagle-minutes-for-sat-jul.html ~

https://twitter.com/WorldUnivAndSch/status/1292647710210719744?s=20
https://twitter.com/scottmacleod/status/1292647975932407808?s=20
https://twitter.com/sgkmacleod/status/1292648314488238080?s=20
https://twitter.com/WUaSPress/status/1292648474136080384?s=20
https://twitter.com/HarbinBook/status/1292648762700005377?s=20
https://twitter.com/TheOpenBand/status/1292649183631962118?s=20

Again, World Univ & Sch is seeking to begin matriculating undergraduate students by Sept 1, 2020, something I think you have expressed interest in (or may know prospective students for). Please register by August 14 by sending me an email. 

World Univ & Sch is seeking to offer online, on the WUaS Open edX platform, free-to-students 4-year best-STEM CC-4 OCW Bachelor's degrees (licensing and accrediting in process), beginning around Sept 1, 2020, in ENGLISH. 

Please let me know too if you have further questions or thoughts in these regards. 

Regards, Scott 




*
World Univ & Sch News and Q & A 8/10/20 1) Seeking matriculating undergrads for 9/1/2020 MBM Minutes

https://youtu.be/qbggnTOjlQA





*
David, Ma,

Much more about agency, and about robotics, in today's blog post - https://scott-macleod.blogspot.com/2020/08/ecology-of-himalayas-wuas-news-q.html - including this definition of agency: "In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance" (https://philpapers.org/rec/TOTFAA). So isn't the robot in the video in the blog post, picking yellow-green peppers, exhibiting autonomy or agency by locating and picking the peppers without human assistance?

And I wore a tartan tie with a checkered shirt for the first time in today's https://youtu.be/qbggnTOjlQA :0) (Nice to see you in the Cuttyhunk BoS meeting in Zoom this morning, Ma! :)

Fond regards,
Scott

--
- Scott MacLeod
- http://scottmacleod.com






* * *

Agency? (robot free will?) "In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance" https://philpapers.org/rec/TOTFAA - https://scott-macleod.blogspot.com/search/label/Agency #PhilosophicalAgency~

https://twitter.com/WorldUnivAndSch/status/1292867932762980352?s=20
https://twitter.com/scottmacleod/status/1292899518673870849?s=20
https://twitter.com/HarbinBook/status/1292906812119773184?s=20
https://twitter.com/sgkmacleod/status/1292908491460317184?s=20
https://twitter.com/WUaSPress/status/1292909077916299264?s=20
https://twitter.com/TheOpenBand/status/1292910176522006528?s=20


https://twitter.com/ValaAfshar/status/1292481743308685312?s=20



*
In very general terms, an agent is a being with the capacity to act, and 'agency' denotes the exercise or manifestation of this capacity. The philosophy of action provides us with a standard conception and a standard theory of action. From this, we obtain a standard conception and a standard theory of agency.

Agency

https://plato.stanford.edu/entries/agency/


Action

https://plato.stanford.edu/entries/action/


Ethics of Artificial Intelligence and Robotics

https://plato.stanford.edu/entries/ethics-ai/


Artificial Intelligence

https://plato.stanford.edu/entries/artificial-intelligence/


*

The Frame Problem

https://plato.stanford.edu/entries/frame-problem/


3. The Epistemological Frame Problem

Let's move on now to the frame problem as it has been re-interpreted by various philosophers. The first significant mention of the frame problem in the philosophical literature was made by Dennett (1978, 125). The puzzle, according to Dennett, is how “a cognitive creature … with many beliefs about the world” can update those beliefs when it performs an act so that they remain “roughly faithful to the world”? In The Modularity of Mind, Fodor steps into a roboticist's shoes and, with the frame problem in mind, asks much the same question: “How … does the machine's program determine which beliefs the robot ought to re-evaluate given that it has embarked upon some or other course of action?” (Fodor 1983, 114).
At first sight, this question is only impressionistically related to the logical problem exercising the AI researchers. In contrast to the AI researcher's problem, the philosopher's question isn't expressed in the context of formal logic, and doesn't specifically concern the non-effects of actions. In a later essay, Dennett acknowledges the appropriation of the AI researchers' term (1987). Yet he goes on to reaffirm his conviction that, in the frame problem, AI has discovered “a new, deep epistemological problem—accessible in principle but unnoticed by generations of philosophers”.
The best way to gain an understanding of the issue is to imagine being the designer of a robot that has to carry out an everyday task, such as making a cup of tea. Moreover, for the frame problem to be neatly highlighted, we must confine our thought experiment to a certain class of robot designs, namely those using explicitly stored, sentence-like representations of the world, reflecting the methodological tenets of classical AI. The AI researchers who tackled the original frame problem in its narrow, technical guise were working under this constraint, since logic-based AI is a variety of classical AI. Philosophers sympathetic to the computational theory of mind—who suppose that mental states comprise sets of propositional attitudes and mental processes are forms of inference over the propositions in question—also tend to feel at home with this prescription.
Now, suppose the robot has to take a tea-cup from the cupboard. The present location of the cup is represented as a sentence in its database of facts alongside those representing innumerable other features of the ongoing situation, such as the ambient temperature, the configuration of its arms, the current date, the colour of the tea-pot, and so on. Having grasped the cup and withdrawn it from the cupboard, the robot needs to update this database. The location of the cup has clearly changed, so that's one fact that demands revision. But which other sentences require modification? The ambient temperature is unaffected. The location of the tea-pot is unaffected. But if it so happens that a spoon was resting in the cup, then the spoon's new location, inherited from its container, must also be updated.
The epistemological difficulty now discerned by philosophers is this. How could the robot limit the scope of the propositions it must reconsider in the light of its actions? In a sufficiently simple robot, this doesn't seem like much of a problem. Surely the robot can simply examine its entire database of propositions one-by-one and work out which require modification. But if we imagine that our robot has near human-level intelligence, and is therefore burdened with an enormous database of facts to examine every time it so much as spins a motor, such a strategy starts to look computationally intractable.
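To make the tea-cup example concrete, here is a minimal Python sketch of the exhaustive-update strategy (my illustration, not part of the quoted entry; the fact vocabulary is hypothetical). Facts are plain tuples, and a single action forces a pass over the entire database:

# Toy fact database for the tea-cup robot (illustrative names only).
facts = {
    ("at", "cup", "cupboard"),
    ("at", "spoon", "cupboard"),
    ("in", "spoon", "cup"),            # the spoon rests in the cup
    ("at", "teapot", "counter"),
    ("colour", "teapot", "brown"),
    ("temperature", "room", "20C"),
}

def naive_update(facts, moved, destination):
    """Exhaustive update: re-examine EVERY fact after one action.
    The cost of a single action scales with the size of the whole
    database, even though almost nothing has changed -- the
    intractability described above."""
    contained = {f[1] for f in facts if f[0] == "in" and f[2] == moved}
    updated = set()
    for fact in facts:                 # visit all facts, relevant or not
        if fact[0] == "at" and fact[1] == moved:
            updated.add(("at", moved, destination))
        elif fact[0] == "at" and fact[1] in contained:
            updated.add(("at", fact[1], destination))   # inherited move
        else:
            updated.add(fact)          # unaffected, but still examined
    return updated

facts = naive_update(facts, "cup", "table")
assert ("at", "cup", "table") in facts
assert ("at", "spoon", "table") in facts        # the spoon moved with the cup
assert ("temperature", "room", "20C") in facts  # untouched, yet still visited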
Thus, a related issue in AI has been dubbed the computational aspect of the frame problem (McDermott 1987). This is the question of how to compute the consequences of an action without the computation having to range over the action's non-effects. The solution to the computational aspect of the frame problem adopted in most symbolic AI programs is some variant of what McDermott calls the “sleeping dog” strategy (McDermott 1987). The idea here is that not every part of the data structure representing an ongoing situation needs to be examined when it is updated to reflect a change in the world. Rather, those parts that represent facets of the world that have changed are modified, and the rest is simply left as it is (following the dictum “let sleeping dogs lie”). In our example of the robot and the tea-cup, we might apply the sleeping dog strategy by having the robot update its beliefs about the location of the cup and the contents of the cupboard. But the robot would not worry about some possible spoon that may or may not be on or in the cup, since the robot's goal did not directly involve any spoon.
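Under the same toy representation, here is a sketch of the "sleeping dog" strategy (again my illustration, with hypothetical fact names): only the fact the action is known to affect is touched, which is cheap, but the spoon's location silently goes stale, exactly the gap the next paragraph takes up:

def sleeping_dog_update(facts, moved, destination):
    """Touch only the directly affected 'at' fact; let every other
    fact 'sleep'. Cost is independent of database size, but any
    indirect consequence omitted from the rule goes stale."""
    old = next(f for f in facts if f[0] == "at" and f[1] == moved)
    facts.discard(old)
    facts.add(("at", moved, destination))

facts = {
    ("at", "cup", "cupboard"),
    ("at", "spoon", "cupboard"),
    ("in", "spoon", "cup"),
}
sleeping_dog_update(facts, "cup", "table")
assert ("at", "cup", "table") in facts         # updated
assert ("at", "spoon", "cupboard") in facts    # stale: the sleeping dog missed it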
However, the philosophical problem is not exhausted by this computational issue. The outstanding philosophical question is how the robot could ever determine that it had successfully revised all its beliefs to match the consequences of its actions. Only then would it be in a position safely to apply the “common sense law of inertia” and assume the rest of the world is untouched. Fodor suggestively likens this to “Hamlet's problem: when to stop thinking” (Fodor 1987, 140). The frame problem, he claims, is “Hamlet's problem viewed from an engineer's perspective”. So construed, the obvious way to try to avoid the frame problem is by appealing to the notion of relevance. Only certain properties of a situation are relevant in the context of any given action, so the counter-argument goes, and consideration of the action's consequences can be conveniently confined to those.
However, the appeal to relevance is unhelpful. For the difficulty now is to determine what is and what isn't relevant, and this is dependent on context. Consider again the action of removing a tea-cup from the cupboard. If the robot's job is to make tea, it is relevant that this facilitates filling the cup from a tea-pot. But if the robot's task is to clean the cupboard, a more relevant consequence is the exposure of the surface the cup was resting on. An AI researcher in the classical mould could rise to this challenge by attempting to specify what propositions are relevant to what context. But philosophers such as Wheeler (2005; 2008), taking their cue from Dreyfus (1992), perceive the threat of infinite regress here. As Dreyfus puts it, “if each context can be recognized only in terms of features selected as relevant and interpreted in a broader context, the AI worker is faced with a regress of contexts” (Dreyfus 1992, 289).
One way to mitigate the threat of infinite regress is by appeal to the fact that, while humans are more clever than today's robots, they still make mistakes (McDermott 1987). People often fail to foresee every consequence of their actions even though they lack none of the information required to derive those consequences, as any novice chess player can testify. Fodor asserts that “the frame problem goes very deep; it goes as deep as the analysis of rationality” (Fodor 1987). But the analysis of rationality can accommodate the boundedness of the computational resources available to derive relevant conclusions (Simon 1957; Russell & Wefald 1991; Sperber & Wilson 1996). Because it sometimes jumps to premature conclusions, bounded rationality is logically flawed, but no more so than human thinking. However, as Fodor points out, appealing to human limitations to justify the imposition of a heuristic boundary on the kind of information available to an inferential process does not in itself solve the epistemological frame problem (Fodor 2000, Ch.2; Fodor 2008, Ch.4; see also Chow 2013). This is because it neglects the issue of how the heuristic boundary is to be drawn, which is to say it fails to address the original question of how to specify what is and isn't relevant to the inferential process.
Nevertheless, the classical AI researcher, convinced that the regress of contexts will bottom out eventually, may still elect to pursue the research agenda of building systems based on rules for determining relevance, drawing inspiration from the past successes of classical AI. Whereupon the dissenting philosopher might point out that AI's past successes have always been confined to narrow domains, such as playing chess, or reasoning in limited microworlds where the set of potentially relevant propositions is fixed and known in advance. By contrast, human intelligence can cope with an open-ended, ever-changing set of contexts (Dreyfus 1992; Dreyfus 2008; Wheeler 2005; Wheeler 2008; Rietveld 2012). Furthermore, the classical AI researcher is vulnerable to an argument from holism. A key claim in Fodor's work is that when it comes to circumscribing the consequences of an action, just as in the business of theory confirmation in science, anything could be relevant (Fodor 1983, 105). There are no a priori limits to the properties of the ongoing situation that might come into play. Accordingly, in his modularity thesis, Fodor uses the frame problem to bolster the view that the mind's central processes—those that are involved in fixing belief—are “informationally unencapsulated”, meaning that they can draw on information from any source (Fodor 1983; Fodor 2000; Fodor 2008; Dreyfus 1991, 115–121; Dreyfus 1992, 258). For Fodor, this is a fundamental barrier to the provision of a computational account of these processes.
It is tempting to see Fodor's concerns as resting on a fallacious argument to the effect that a process must be informationally encapsulated to be computationally tractable. We only need to consider the effectiveness of Internet search engines to see that, thanks to clever indexing techniques, this is not the case. Submit any pair of seemingly unrelated keywords (such as “banana” and “mandolin”) to a Web search engine, and in a fraction of a second it will identify every web page, in a database of several billion, that mentions those two keywords (now including this page, no doubt). But this is not the issue at hand. The real issue, to reiterate the point, is one of relevance. A process might indeed be able to index into everything the system knows about, say, bananas and mandolins, but the purported mystery is how it could ever work out that, of all things, bananas and mandolins were relevant to its reasoning task in the first place.
To summarize, it is possible to discern an epistemological frame problem, and to distinguish it from a computational counterpart. The epistemological problem is this: How is it possible for holistic, open-ended, context-sensitive relevance to be captured by a set of propositional, language-like representations of the sort used in classical AI? The computational counterpart to the epistemological problem is this. How could an inference process tractably be confined to just what is relevant, given that relevance is holistic, open-ended, and context-sensitive?
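As an aside on the search-engine passage above, a tiny inverted-index sketch (my illustration, not the entry's) shows why keyword retrieval itself is tractable; the hard question the entry is pressing is how a reasoner would ever determine that "banana" and "mandolin" were the relevant terms to look up in the first place:

from collections import defaultdict

# A toy corpus standing in for the Web's several billion pages.
documents = {
    1: "banana bread recipe",
    2: "mandolin tuning guide",
    3: "playing mandolin while eating a banana",
}

# Build the inverted index once: word -> set of document ids.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

# Lookup is a fast set intersection, however unrelated the terms:
print(index["banana"] & index["mandolin"])    # {3}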







