Epistemology

From the Deliberative Democracy Institute Wiki
 

Latest revision as of 06:40, 5 March 2024


This page is a stub. It is not ready for publication and is used to aggregate information about a subject. You can add further reading and add information to the page. If you want to prepare this page for publication, please consult with the creator of this page.
Tal Yaron 14:08, 20 February 2013 (IST)


This page was written by a non-English-speaking writer. Please help us improve the quality of the paper. Tal Yaron 14:08, 20 February 2013 (IST)

The Foundations of Knowledge

The phenomenological cage

How can you know if you are not a brain in a vat? (created with Gemini AI)


Deliberation is a process of thoughtfully weighing options before making a decision. Yet, to choose among the available options, we have to agree on the options available to us and their outcomes. But often, we disagree about the options or how the world behaves. These disagreements result from differences in our understanding of how the world works. So, to understand better why we perceive the world differently, I suggest we understand how our knowledge is built and why it differs from person to person. This page will explain how knowledge is built and why we perceive the world differently.

To explain knowledge, I will suggest that knowledge is what we use to understand, explain, predict, and manipulate the inputs that come from the senses.

These inputs from our senses, whether from outer senses like smell, vision, hearing, touch, warmth, etc., or from inner senses like thirst, hunger, love, hate, etc., will all be called phenomena. Phenomena, following Kant's philosophy, are our sensory experiences.

We try to understand our surroundings and inner feelings through knowledge, but we have an unobservable barrier to our surroundings or even our inner selves. We have no access to the surroundings themselves. All our knowledge about the "surroundings" comes from our senses. We have phenomena but no access to the thing that creates the experiences. In more than 2500 years of epistemology, nobody has found a reliable way to establish a relationship between perception and the “surroundings”. To demonstrate the problem of the relation between knowledge and the surroundings, we may use the thought experiment of the “brain in a vat.” In this thought experiment, you are asked to find a reliable way to know whether you exist as you perceive yourself, or whether you are a brain in a vat that gets its sensory inputs from a computer that simulates the perceived world. As a result of our inability to go beyond our perceptions, I will call this principle “the phenomenological cage principle”.

Axioms of Knowledge

To try to establish a relation between the phenomena and the noumena, I'll set some axioms that I believe can be the basis for such a relation.

Propensities in the phenomena

Knowledge is about trying to predict the phenomena. In the effort to predict, I will suggest that we use the axiom that says that the phenomena are inputs from our senses, which get their inputs from some source (which Kant named noumena, or the thing in itself). We can "feel" the noumena by the following mechanism: when there is no input from the noumena, the phenomena will be random or blank, and when the noumena are interacting with the senses, they will create some propensities in the otherwise random or blank phenomena[1]. This suggests that the noumena will create some order in the phenomena.

This axiom may not describe the real relation between the phenomena and the noumena (if such a relation exists), but it helps us understand the way our system of creating knowledge works. The system assumes that such a relation between phenomena and noumena exists, and all the following calculations will work based on this assumption.
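The "propensity" axiom above can be illustrated with a minimal sketch: a noumenon is "felt" only as a statistical bias in an otherwise uniform stream of sense inputs. The scoring function and the example streams are illustrative assumptions, not part of the theory.

```python
from collections import Counter

def propensity_score(stream):
    """How far a stream of sense inputs deviates from uniform randomness:
    0.0 for a perfectly balanced stream, rising toward 1.0 as one input
    dominates. Assumes at least two distinct inputs in the stream."""
    counts = Counter(stream)
    total = len(stream)
    k = len(counts)
    uniform = total / k  # expected count if no noumenon biases the senses
    deviation = sum(abs(c - uniform) for c in counts.values())
    return deviation / (2 * total * (1 - 1 / k))  # normalize to [0, 1]

# a "blank" stream: no propensity, every input equally frequent
blank = ["a", "b", "c", "d"] * 250
# a stream shaped by some source: input "a" dominates
ordered = ["a"] * 700 + ["b"] * 100 + ["c"] * 100 + ["d"] * 100
```

Here `propensity_score(blank)` is 0.0, while `propensity_score(ordered)` is 0.6: the order the hypothetical noumenon imposes on the phenomena is exactly what makes it detectable.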

Specific Phenomena

I will also assume that these propensities in the phenomena create distinguished specific phenomena. The distinguishable phenomena may be specific smells, sounds, feelings, touches, tastes, etc. I will call them Distinguished Specific Phenomena (DSPs).

Memory

I will also assume that the mind can store past DSPs for a while.

Similarity and Induction

When a combination of two or more DSPs reoccurs, a connection in the memory is formed. For instance, if smellx occurs with a specific soundy, and this combination reoccurs several times while still in memory, then the mind might assume that the combination is permanent. This will be called induction, because from some limited set of co-occurrences, the mind assumes that this co-occurrence will always reoccur. Of course, the induction might not survive the next occurrence of either one of the DSPs. We will deal with this later.
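The co-occurrence mechanism above can be sketched as follows. The threshold of three repetitions and the DSP names are illustrative assumptions; the theory itself does not fix a number.

```python
from collections import Counter
from itertools import combinations

class InductiveMemory:
    """Stores co-occurrence counts of DSPs and forms an induction
    (a learned link) when the same pair reoccurs often enough."""

    def __init__(self, threshold=3):
        self.threshold = threshold   # repetitions needed (assumed value)
        self.co_counts = Counter()   # pair of DSPs -> times seen together
        self.inductions = set()      # pairs assumed to always co-occur

    def observe(self, dsps):
        """dsps: the set of DSPs felt at one moment."""
        for pair in combinations(sorted(dsps), 2):
            self.co_counts[pair] += 1
            if self.co_counts[pair] >= self.threshold:
                self.inductions.add(pair)

mem = InductiveMemory()
for _ in range(3):
    mem.observe({"smell_x", "sound_y"})
# after three co-occurrences, the smell/sound pair has become an induction
```

A single observation would leave `mem.inductions` empty; only repetition within memory turns a coincidence into an expected regularity.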

See also the problem of identity

Simplicity

The induction between two DSPs might be very simple. They may be linked directly. For instance, the sound of breaking glass may be directly connected to the look of breaking glass, because the noumenon that creates these two specific DSPs always affects the phenomena in the same pattern. But they can be connected in more complex ways: for instance, the noumenon that creates these two DSPs may be a computer that creates a breaking sound whenever it creates a visual DSP of breaking glass. Because the noumenon can change its effect on DSPs, the induction may describe the relation between the DSPs well up to some point in the coming observations, and may fail to describe the relation in the future[2]. The reason to choose the simplest induction is that the number of possible relations between a sound and a breaking glass is infinite. Therefore, for reasons of effective storage, we will use the simplest induction, which uses the minimum information to describe the maximum relations between DSPs. In our case it might be "All breaking glasses have this specific sound".

Refutation

For the sake of economic usage of memory storage, we will assume the simplest inductions between DSPs. Yet, in the coming phenomena we may notice that the induction does not describe the relation between the DSPs well. Although we have the induction "smellx always comes with soundy", we might notice that in the coming observations soundy does not follow smellx. This refutes the induction, and we will have to create a new induction or forget the refuted one. If we want to suggest a more complex induction, we may look at our stored relations between DSPs and see that when smellx was not followed by soundy, a touch-feelingz occurred just before the appearance of smellx. Therefore we may suggest the following induction: "smellx always comes with soundy unless touch-feelingz occurs just before smellx".

From the sets of stored memories about relations of DSPs, we may suggest many different inductions that describe the occurrence of DSPs, but yet again, for the sake of economical storage, it is better to use the simplest induction possible that describes the maximum observable relations between DSPs.

This process of suggesting an induction, refuting it, and re-suggesting a more complex induction is part of the continuous process of creating knowledge.
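The refutation cycle just described can be sketched directly: a simple rule, an observation that refutes it, and a more complex rule with an exception condition that survives. The observation format and DSP names are illustrative assumptions.

```python
def simple_induction(obs):
    # the simplest induction: "smell_x always comes with sound_y"
    return {"sound_y"} if "smell_x" in obs["now"] else set()

def complex_induction(obs):
    # after refutation: "...unless touch_z occurred just before"
    if "smell_x" in obs["now"] and "touch_z" not in obs["before"]:
        return {"sound_y"}
    return set()

def refuted(rule, history):
    """An induction is refuted when some stored observation lacks a DSP
    the rule predicted for it."""
    return any(rule(obs) - obs["now"] for obs in history)

history = [
    {"before": set(),       "now": {"smell_x", "sound_y"}},
    {"before": {"touch_z"}, "now": {"smell_x"}},  # sound_y failed to appear
]
```

Here `refuted(simple_induction, history)` is true, while the more complex induction accounts for both observations; this is one pass of the suggest-refute-resuggest loop.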

Structures of Knowledge

Mental Objects

Sensory Mental Objects
Construction of a mental object by the recurrence of several sensory stimuli occurring simultaneously

Sometimes the brain finds several connections between different DSPs that come together. When several connections reoccur, they form a cluster of inductions. I call this cluster of inductions a Mental Object (MO). It is an object that stores many inductions that relate to each other. For instance, a specific smell may always come with a specific shape, a specific color, and a specific feeling. The smell of an apple may come with the shape of an apple, the color of an apple, and the taste of an apple. This cluster of inductions creates for us a familiar object: whenever we observe one of its DSPs, we assume that all the other DSPs will appear as well. If we also hear the same name ("apple") every time we see some of the connected appearances, we will give it the name "apple"[3], which will become part of the mental object.
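A mental object can be sketched as such a cluster: observing one DSP of the cluster raises the expectation of all the others. The class and the apple DSP names are illustrative assumptions.

```python
class MentalObject:
    """A cluster of inductions: DSPs expected to appear together."""

    def __init__(self, name, dsps):
        self.name = name
        self.dsps = set(dsps)

    def expect(self, observed_dsp):
        """Seeing one DSP of the cluster, expect all the others."""
        if observed_dsp in self.dsps:
            return self.dsps - {observed_dsp}
        return set()

apple = MentalObject(
    "apple", {"apple_smell", "apple_shape", "apple_color", "apple_taste"}
)
# smelling an apple raises the expectation of its shape, color and taste
expected = apple.expect("apple_smell")
```

A DSP outside the cluster (say, a car horn) triggers no expectation, which is what distinguishes a mental object from a loose heap of memories.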

To be continued...

Refutation of a mental object: apple
One of the infinite ways to reconstruct the MO:apple

Mental objects are composed of inductions, and like every induction in the phenomenological cage, they should be put to a test. Therefore MOs may change and be reconstructed. For instance, suppose all the apples we have seen so far were red. Suddenly I see the shape of an apple, I smell the smell of an apple, and it has the feeling of an apple, but its color is green and its taste is different. What is it? I have experienced a refutation of the appearances expected by the old MO:apple. As with every induction, when it is refuted, we have to find a new set of inductions that describes all the appearances we have seen so far. The possibilities for describing all the phenomena are infinite, and again we will look for the simplest set of connections that describes all observations. One simple rearrangement is to say that the old observations and the new observations are two different objects. Another simple reconstruction is to say that all my observations relating to MO:apple are of the same object, but of different kinds. In the former, the connections to red and sweet (the taste of the red object) form one mental object, while the green and the sour form another object. Both objects have the same shape, the same touch, and the same smell, but different colors and tastes. In the latter, they are all apples, and apples have a particular shape, feeling, and smell, but there is a kind of red-sweet apples and a kind of green-sour apples. Of course, there may be other ways to reconstruct mental objects. We can say that all apples are red, but somebody painted and sweetened this last observed apple, or that it was an accident, a freak of nature, and so on.

To decide between the alternatives, we may need some more evidence. Further evidence may suggest that we should divide the object into two different mental objects. For instance, if our theories are sufficiently developed to the point where we recognize "DNA", we may find that the two kinds have very different DNA lineages, and because we defined DNA as an essential property of a living object, we may decide that they are two different mental objects.

Deductions
We deduce by going through connected MOs

Mental objects construct our understanding of the world. When we reason, we can go along the lines of connected MOs. For instance, humans have many inductions. We know that they have two legs and two arms, that they live up to 120 years, and that in the meanwhile they breathe, are born, and die, among many other inductions (mistakenly called properties). We also know (or conjecture) that Socrates was human, and as such he has all the inductions of the MO:humans.

When we use the classical syllogism "Socrates was human, all humans are mortal, therefore Socrates is mortal", we go along the lines that connect the MOs. By this mechanism we may also say that Socrates was born. If we want to speculate how old his mother was when she gave birth to Socrates, we need some other inductions and mental objects. We need to conjecture at what age women can give birth (ages 12-45), that humans are born to women, and that this particular woman is his mother. Therefore we can conclude that Socrates was born when his mother was between the ages of 12 and 45, and we can therefore conclude that his mother is 12-45 years older than Socrates.
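The traversal of connected MOs, and the age-interval reasoning from the Socrates example, can be sketched as follows. The dictionary representation and the "is_a" link name are illustrative assumptions.

```python
# Each MO holds the inductions directly attached to it, plus "is_a" links
# to more general MOs whose inductions it inherits.
MOS = {
    "human":    {"is_a": [], "inductions": {"mortal", "is born", "two legs"}},
    "Socrates": {"is_a": ["human"], "inductions": set()},
}

def all_inductions(name):
    """Deduce by walking the lines that connect an MO to more general MOs."""
    mo = MOS[name]
    found = set(mo["inductions"])
    for parent in mo["is_a"]:
        found |= all_inductions(parent)
    return found

# the classical syllogism: "mortal" reaches Socrates through MO:human
socrates_props = all_inductions("Socrates")

# interval arithmetic for the mother's age: women give birth at ages 12-45,
# so the mother is 12-45 years older than the child
BIRTH_AGES = (12, 45)

def mothers_age_range(child_age):
    return (child_age + BIRTH_AGES[0], child_age + BIRTH_AGES[1])
```

With these inductions, `mothers_age_range(30)` gives (42, 75): when Socrates was 30, his mother was between 42 and 75 years old.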

This section needs more clarification. Please explain:
More illustrations will be very helpful. If equations can be entered, this will be awesome.
Tal Yaron 09:13, 14 December 2012 (IST)

Imaginary Mental Objects

Sometimes we observe phenomena that cannot be explained by our existing MOs. This gap usually puzzles us, and we look for an explanation. This explanation is a set of MOs that we do not observe directly with our senses; rather, we invent it to explain the unexplained phenomena. The invented MOs will be called Imaginary Mental Objects or iMOs, while the objects obtained directly from the senses will be called Sensory Mental Objects (sMOs).

For instance, when the ancients looked upward and saw the sun traveling through the skies, they were puzzled. They knew that every traveling object needs something to pull or thrust it, yet the sun seemed to move without a mover. To solve this, they invented a "story", the mental object that was missing. They told their hearers that the sun is pulled by invisible horses. The invisible horses were their iMO that bridged the gap.

We use iMOs more than we naturally recognize. All our theories, and especially the scientific theories, are built of networks of iMOs. We only seldom observe our iMOs directly. For instance, iMO:atoms was talked about for more than 200 years in modern science, yet until recently nobody had observed atoms. And even today, when we "see" atoms, it is through very complicated theories built into the scientific observation tools. We cannot observe atoms directly, and therefore atoms will stay iMOs.

People who do not engage with epistemology are not aware of the extent of our usage of iMOs. Hume was probably the first thinker who showed that we take for granted things we do not see. He demonstrated in his book An Enquiry Concerning Human Understanding[4] that cause and effect cannot be observed directly, yet people use them as though they were real entities. God is an iMO, and so is the force of gravity; even the state is an iMO (has any of you ever seen a state?).

Feelings Mental Objects

Some of our senses are inner senses. They connect inner sensations like love, fun, sadness, sorrow, time, and space to other objects. Space and time are usually used to describe relations between MOs or iMOs, e.g., "Dan is behind John".

Social Objects

Social objects are mental objects that two or more people agree have the same inductions. Agreement can be achieved by talking about the mental objects, comparing their inductions, and concluding that the mental object each holds in his mind has the same inductions as the mental object the other holds in his mind. For instance, if I meet a guy who talks about a "koob", and he says that a "koob" has a lot of pages and that something is written on the pages, I can tell him that I know that a "book" has the same inductions (properties, in classical logic). We may further compare the inductions of "koob" and "book", and if we find them all the same, we may conclude that "koob" and "book" are the same. We may call it "tome", and this will be a reference word for an object that has the same inductions in both of our minds, as far as our investigation has revealed.
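The agreement procedure above can be sketched as a comparison of the induction sets two speakers privately hold; a social object exists when no differing induction has been found so far. Representing inductions as strings in a set is an illustrative assumption.

```python
def compare_inductions(mo_a, mo_b):
    """Compare the inductions each speaker holds for a mental object.
    Agreement holds as far as the investigation has revealed: when no
    differing induction has been found."""
    shared = mo_a & mo_b      # inductions both minds hold
    differing = mo_a ^ mo_b   # inductions only one mind holds
    return shared, differing

# the "koob"/"book" conversation from the text
koob = {"has a lot of pages", "something is written on the pages"}
book = {"has a lot of pages", "something is written on the pages"}

shared, differing = compare_inductions(koob, book)
agreed_same = not differing  # "koob" and "book" name one social object
```

The qualifier matters: a later exchange may surface a differing induction, so `agreed_same` records agreement only up to the inductions compared so far.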

Advanced Aspects of Knowledge

Simple and Complex Mental Objects

A simple MO is an MO constructed only from inductions. A complex MO is an MO constructed from sub-MOs, which together form a mechanism for the complex MO. For instance, a simple MO may be "the earth attracts heavy bodies", while a complex MO may be "between particles with mass there is a flux of gravitons that attracts the masses according to the law of gravity; therefore any body with mass will attract any other body with mass, due to the attraction of the gravitons". When we try to explain an MO, we try to make it complex. Sometimes we are unable to explain an MO further, and we then call it an axiom. But it is possible that in the future, when we gain more understanding, we will be able to explain these basic MOs and make them complex. Hence, explanation is the process of making an MO more complex.
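The distinction can be sketched as a small data structure. The class and field names here are my own illustration, not terminology from the text: a simple MO carries only inductions, while a complex MO also carries sub-MOs that act as its mechanism.

```python
from dataclasses import dataclass, field

# Illustrative sketch: an MO with sub-MOs is complex; one without is simple.
@dataclass
class MentalObject:
    name: str
    inductions: list
    sub_objects: list = field(default_factory=list)

    def is_complex(self):
        return len(self.sub_objects) > 0

simple = MentalObject("gravity", ["the earth attracts heavy bodies"])
graviton = MentalObject("graviton", ["flows between masses"])
complex_mo = MentalObject("gravitation",
                          ["masses attract each other"],
                          [graviton])

# Explanation, in the text's sense, would add sub-MOs to a simple MO.
print(simple.is_complex(), complex_mo.is_complex())   # → False True
```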

Inheritance of Objects

Hierarchical knowledge is built upon inheritance of properties. For instance, living creatures have a specific set of properties:

  • living creatures
    • DNA/RNA as their main genetic storage
    • Replicate

Animals are a sub-group of living creatures that inherit the properties of living creatures and have some additional properties.

  • Animals
    • living-creatures properties
      • DNA/RNA as their main genetic storage
      • Replicate
    • breathe oxygen

In some cases there may be partial inheritance, where an inherited property is narrowed or overridden:

  • Animals
    • living-creatures properties
      • DNA as their main genetic storage
      • Replicate
    • breathe oxygen
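The two property lists above map directly onto class inheritance in a programming language. This is a sketch of the analogy only (the class names are mine): a subclass inherits its parent's properties, adds its own, and under partial inheritance overrides one.

```python
# Sketch of the property hierarchies above as Python classes.
class LivingCreature:
    genetic_storage = "DNA/RNA"   # main genetic storage
    replicates = True

class Animal(LivingCreature):
    breathes_oxygen = True        # additional property

class AnimalPartial(LivingCreature):
    genetic_storage = "DNA"       # partial inheritance: property overridden
    breathes_oxygen = True

print(Animal.genetic_storage)         # → DNA/RNA  (fully inherited)
print(AnimalPartial.genetic_storage)  # → DNA      (overridden)
```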

Research

Look at Anderson and Collins for semantic networks[5] (also called associative semantic networks), and at Novak and Cañas for concept maps[6].

Joins

Temporal joins of objects probably happen in working memory (for a description of the neural mechanism of working memory, see this reference[7]). It seems that joins are usually made based on synaptic strength; therefore the most rehearsed networks will be more available (this seems to be the mechanism of System 1). In the case of ADD, it seems the brain has a hard time maintaining a flow. It also seems there is some mechanism that helps the brain focus on specific objects, which is needed in order to think about specific issues.

Other Aspects of Knowledge

Bush and Mosteller Learning Curve

This section was mostly taken from Paul W. Glimcher's paper in PNAS[8].

This very general idea of conditional learning was first mathematically formalized when Bush and Mosteller[9] proposed that the probability of Pavlov's[10] dog expressing the salivary response on sequential trials could be computed through an iterative equation:

A_next_trial = A_previous_trial + α (R_current_trial − A_previous_trial)


In this equation, A_next_trial is the probability that salivation will occur on the next trial (or more formally, the associative strength of the connection between the bell and salivation). To compute A_next_trial, one begins with the value of A on the previous trial and adds to it a correction based on the animal's experience during the most recent trial. This correction, or error term, is the difference between what the animal actually experienced (in this case, the reward of the meat powder, expressed as R_current_trial) and what it expected (simply, what A was on the previous trial). The difference between what was obtained and what was expected is multiplied by α, a number ranging from 0 to 1, which is known as the learning rate. When α = 1, A is always immediately updated so that it equals R from the last trial. When α = 0.5, only one-half of the error is corrected, and the value of A converges in half steps to R. When the value of α is small, around 0.1, then A is only very slowly incremented toward the value of R.
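The update rule described above is short enough to run directly. This is a minimal sketch of the iterative equation, with a made-up sequence of rewarded trials to show the half-step convergence at α = 0.5:

```python
# Bush–Mosteller update: A_next = A_prev + alpha * (R - A_prev)
def bush_mosteller(a_prev, reward, alpha):
    return a_prev + alpha * (reward - a_prev)

a = 0.0                           # initial associative strength
for _ in range(5):                # five rewarded trials, R = 1
    a = bush_mosteller(a, 1.0, alpha=0.5)
    print(a)                      # 0.5, 0.75, 0.875, 0.9375, 0.96875
```

Each trial closes half the remaining gap between A and R, exactly as the text describes for α = 0.5.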

What the Bush and Mosteller[11][12] equation does is compute an average of previous rewards across previous trials. In this average, the most recent rewards have the greatest impact, whereas rewards far in the past have only a weak impact. If, to take a concrete example, α = 0.5, then the equation takes the most recent reward, uses it to compute the error term, and multiplies that term by 0.5. One-half of the new value of A is, thus, constructed from this most recent observation. That means that the sum of all previous error terms (those from all trials in the past) has to count for the other one-half of the estimate. If one looks at that older one-half of the estimate, one-half of that one-half comes from what was observed one trial ago (thus, 0.25 of the total estimate) and one-half (0.25 of the estimate) comes from the sum of all trials before that one. The iterative equation reflects a weighted sum of previous rewards. When the learning rate (α) is 0.5, the weighting rule effectively being carried out is an exponential series, the rate at which the weight declines being controlled by α.

A_next_trial = α Σ (1 − α)^t · R_(t trials ago),  summed over t = 0, 1, 2, …

When α is high, the exponential function declines rapidly and puts all of the weight on the most recent experiences of the animal. When α is low, it declines slowly and averages together many observations, as shown in Fig. 1.
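The equivalence between the iterative rule and the exponentially weighted average can be checked numerically. The reward sequence below is my own made-up example; the point is that both computations yield the same value, with the weight on a reward t trials in the past equal to α(1 − α)^t.

```python
# Sketch: the iterative rule is an exponentially weighted sum of past rewards.
alpha = 0.5

def weight(alpha, t):
    """Weight on a reward experienced t trials in the past."""
    return alpha * (1 - alpha) ** t

rewards = [1.0, 0.0, 1.0, 1.0]    # oldest trial first (made-up data)

# Iterative Bush–Mosteller updates, starting from A = 0.
a = 0.0
for r in rewards:
    a = a + alpha * (r - a)

# Direct exponentially weighted sum (initial A = 0 contributes nothing).
weighted = sum(weight(alpha, t) * r
               for t, r in enumerate(reversed(rewards)))

print(a, weighted)                # both 0.8125
```

With α = 0.5 the weights are 0.5, 0.25, 0.125, … , matching the halving described in the text; a smaller α flattens the series and averages over more trials.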

Bush and Mosteller graph1.gif

Fig 1: “Weights determining the effects of previous rewards on current associative strength effectively decline as an exponential function of time”[13].


The Bush and Mosteller equation was critically important because it was the first use of this kind of iterative, error-based rule for reinforcement learning; additionally, it forms the basis of all modern approaches to this problem. This fact is often obscured by what is known as the Rescorla–Wagner model of classical conditioning[14]. The Rescorla–Wagner model was an important extension of the Bush and Mosteller approach to the study of what happens to associative strength when two cues predict the same event. Their findings were so influential that the basic Bush and Mosteller rule is now often mistakenly attributed to Rescorla and Wagner by neurobiologists.

References

<references>
  1. Popper, K. (1997). World of Propensities (p. 60). Thoemmes Press.
  2. Goodman, N., & Putnam, H. (1983). Fact, Fiction, and Forecast, Fourth Edition (p. 160). Harvard University Press.
  3. Wittgenstein, L. (1973). Philosophical Investigations (3rd Edition) (p. 250). Pearson.
  4. Hume D. (1748) An Enquiry Concerning Human Understanding
  5. Collins, A. , & Quillian, M. (1969). Retrieval times from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8, 241–248.
  6. Joseph D. Novak & Alberto J. Cañas, The Theory Underlying Concept Maps and How to Construct and Use Them, 2008
  7. Constantinidis, Christos, and Torkel Klingberg. "The neuroscience of working memory capacity and training." Nature Reviews Neuroscience (2016).‏
  8. Glimcher, P. W. (2011). Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proceedings of the National Academy of Sciences of the United States of America, 108 Suppl (Supplement_3), 15647–54.
  9. Bush RR, Mosteller F (1951) A mathematical model for simple learning. Psychol Rev 58:313–323
  10. Pavlov IP (1927) Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex (Dover, New York).
  11. Bush RR, Mosteller F (1951) A mathematical model for simple learning. Psychol Rev 58:313–323
  12. Bush RR, Mosteller F (1951) A model for stimulus generalization and discrimination. Psychol Rev 58:413–423.
  13. Schoenbaum G, Roesch MR, Stalnaker TA, Takahashi YK (2009) A new perspective on the role of the orbitofrontal cortex in adaptive behaviour. Nat Rev Neurosci 10:885–892.
  14. Rescorla RA, Wagner AR (1972) in Classical Conditioning II: Current Research and Theory, A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement, eds Black AH, Prokasy WF (Appleton Century Crofts, New York), pp 64–99.