Inbox:
- https://www.meaningcrisis.co/all-transcripts/
- Relevance realisation
- Insight
- Resource rationality
- Psychotechnologies
- Active open-mindedness
- 4 kinds of knowing:
	- propositional
	- practical
	- situational awareness, perspectival knowing, presence
	- participatory
- Most of the meaning-making doesn't happen at the propositional level; answers come from the others. We have enough practices that address our propositional beliefs. We need to address the others.
- [Cognitive Science in One Lesson by Luke Muehlhauser](http://commonsenseatheism.com/?p=13607) (rec by Anna Riedl)
- Many connections with [[=Nietzsche]].

---

This RR concept seems like a bit of a grab bag. I am drawn to simpler definitions of intelligence, e.g. the ability to optimise for a desired future state. Maybe I should think of RR as an account of agency rather than just intelligence.

2025-02-26

Topic: being rational / do LLMs reason?

LLMs aren't rational, because rationality = relevance realisation, and LLMs don't do that. Reject the scaling hypothesis. They are currently parasitic on us; that's where their power comes from. Reasonable/rational/wise != intelligent.

Alignment: make them care, make them wise. Alter is about practice, and role models. A certain kind of training. Alter essential for dealing with AI alignment??
- Reminds me of Eliezer, who set up LessWrong to train humans to be wiser in order to be better able to handle AI.
- Or is the app about training data for the AI?

Questions:
- Waymo isn't doing RR. What's missing, exactly?
- The transformer architecture is surprisingly simple. Do you expect the mechanisms by which we do RR to turn out similarly simple?
- LLMs aren't running on axiomatic mechanisms. They're trained: trial and error, minimise the loss.

---

Diff approach: scientific intel. Meta problems. Intelligence as the ability to ignore. Focus on RR. LLMs can't? "They only operate within the domain of literacy." <== really? Literacy correlates with anticipating the world.
They anticipate how we would talk about our anticipation of the world. Training cost != inference cost. What did he just say about self-driving cars?!

RR requires caring; caring is grounded in real need. ChatGPT doesn't care what it's processing (before pre-training). It can't care. Yeah, neither can humans before we're born!

"Hallucination not solved."

RR is a big grab bag. People will shift away from self-definition by propositional intelligence towards embodiment and the ineffable.

## Anna interview

Host: meaning is information that links an entity to its context and directly affects its viability. Anna: information that makes a difference is that which is meaningful for the organism. Not the view from nowhere. All information (meaning?) is perspectival: context, agent, maybe embodied...? This does not mean relativism; there is a real relationship between organism and environment. Can look at this at many scales: matter level, mind level, culture level.

PH: Active inference, what is that? PH: evals vs this.

Axiomatic rationality: truth. (Instrumental rationality: success.) Ecological rationality: adaptability to environment. Anna and John are somewhere in between axiomatic and ecological. Idealisations/abstractions make the world tractable; considering abstract optimal actions can help a lot, e.g. playing poker with game theory.

## Rationality and Relevance Realization (Riedl & Vervaeke)

https://osf.io/preprints/vymwu/

### PH thoughts

Axiomatic a priori optimality is not the essence of rationality. It's an important part, but it's only a subset. The more important part is **relevance realisation**—taking a mass of inputs and deciding what to attend to and what to ignore, deciding what frame(s) to think within. Getting some purchase on what is going on and how it matters given your interests.

Vervaeke doesn't think we'll get an account of "how to determine relevance", but we can get an account of the mechanisms by which we determine relevance. As a gloss: it's a lot of training—trial, feedback.
Plus a bunch of selection. Learning as selection process—in brains, cultures.

> The main question of rationality, therefore, changes from a priori optimality to an ongoing optimal fittedness of an organism-environment system.

> the success of any particular inductive logic is relative to the environment in which it is operational. For example, one could have an environment which is noisy, and so requires cautious induction, but if the environment contains little noise, it is beneficial to act less cautiously.

What of it? Well, this conception naturalises rationality in a satisfying way. It fits well with what I know about [[Machine learning]]. Rationality is seen as a dynamic process. The normativity emerges out of the selection process. See also: [[Pragmatism casts moral and ethical norms as ultimately about adaptive behaviour]].

# Rationality & Relevance Realization (Riedl & Vervaeke, 2022) - Annotations

The Great Rationality Debate 2.0 is taking place between the axiomatic approach to optimality modeling on one side and ecological rationality on the other.

Traditionally, it is held that taking computational constraints of cognition into account, rational agents face a speed-accuracy trade-off.

*Underline [page 1]:* speed-accuracy trade-off

Bounded rationality is limited both by internal cognitive constraints as well as the task environment. Examining heuristics through the bias-variance dilemma an organism faces in an unknown territory adds an efficiency-robustness trade-off.

*Underline [page 1]:* efficiency-robustness trade-off

Continuously resolving the frame problem.

*Underline [page 1]:* the frame problem

Problem transformation, sense-making, abductive reasoning, or insight. The main question of rationality, therefore, changes from a priori optimality to an ongoing optimal fittedness of an organism-environment system.
This implies a non-propositional perspective on cognition and a shift of the paradigm to enacted and embodied rationality.

*Underline [page 1]:* to an ongoing optimal fittedness of an organism-environment system.

Radical and unmeasurable uncertainty in the non-stationary world they inhabit, also known as Knightian uncertainty.

*Underline [page 1]:* radical and unmeasurable uncertainty in the non-stationary world they inhabit, also known as Knightian uncertainty

The appliance of mathematical models and probability theory outside of small, abstracted, or stationary worlds in the real world exceeds the validity of the methods; an aspect already criticized by Keynes and the founding fathers of said methods. Similar notions of immeasurability of uncertainty were recently developed in the ecological critique of orthodox axiomatic rationality.

Being able to assess and determine what is relevant about a given situation comes first and is the most crucial cognitive skill in rationality.

This ongoing, procedural, and dynamical cognitive transformation of an uncertain environment – or "large world" (Savage, 1954) – into an abstracted, tractable "small world" is at the center of the phenomenon of rationality.

In line with Dennett (1984), the debate between axiomatic rationality or optimality modeling on one side and ecological rationality on the other can be reexamined from the angle of understanding the rationality problem as solving the frame problem. We argue that the two approaches are not genuine opposites, but rather complementary to one another, studying the same phenomenon from different perspectives.

Some speak of the Great Rationality Debate as a specific and singular event (Tetlock & Mellers, 2002; Stanovich, 2018), and others speak of the Rationality Wars in the plural (Samuels, Bishop & Stich, 2002).
Understanding the debate as culminating in two peaks is possible, with the former being the Great Rationality Debate referred to by Tetlock and Mellers (2002) and Stanovich (2018), and the current one the Great Rationality Debate 2.0, which is the discussion by Chater et al. (2018) of Felin et al.'s target article (2017) and a general conflict between axiomatic and ecological rationality (Gigerenzer, 2019). A narrative that understands the debate as a singular ongoing process is equally reasonable.

The defining inquiry of the great rationality debate in the 20th century was whether humans are rational or not. The answer started with an axiomatic "yes", by not even asking the question in the first place, ultimately turning into a "no" with the empirical advancements of the so-called heuristics and biases tradition. This tradition, in turn, was critically examined by the ecological approach to rationality. In recent decades, the central theme has shifted from asking whether humans are rational to asking the meta-question of "what even is the rationality question or problem", as well as how to conceptualize and empirically approach it.

This melting pot had its peak recently (Chater et al., 2018) when researchers from diverse backgrounds – cognitive science, applied and experimental psychology, behavioral economics, and biology – published their perspectives on the argumentation by Felin, Koenderink, and Krueger (2017). In their assessment, Chater et al. (2018) offer a list of objections and complementations to their argument concerning rationality, perception, and the all-seeing eye (Felin, Koenderink, & Krueger, 2017).

*Underline [page 3]:* Chater et al., 2018

The three underlying topics of the debate are, firstly, the notion of uncertainty, and whether to conceive it as probabilistically measurable or as radical uncertainty (Kozyreva & Hertwig, 2019; Kozyreva, Pleskac, Pachur & Hertwig, 2019; Kay & King, 2020).
Secondly, the consequential limitations of the axioms of rationality, statistical foundations, as well as the notion of optimality and normativity (Brighton, 2019; Gigerenzer, 2019; Kozyreva & Hertwig, 2019; Hertwig, Pachur, & Pleskac, 2019). And thirdly, the assumptions about perception, led away from an omniscient view ("from nowhere") to first-person accounts.

These prescriptions (e.g. Stanovich et al., 2016) ignore multiple problems that ultimately lead back to questioning the theoretical foundations of optimality modeling. These prescriptions usually presuppose a cost-free world: when taking both thinking effort and opportunity costs of delayed action into account, the biases are often Bayes optimal and accordingly cannot be naively improved upon.

Instead of assuming a mere speed-accuracy trade-off that implies a more effortful process to always be better while implying more costs, we argue for conceptualizing the trade-off additionally along another dimension, namely between efficiency and robustness.

A more adaptionist view of ecological rationality.

According to this position, science cannot as a matter of principle transcend our human perspective. One of the tools of scientific perspectivism is robustness analysis, as well as bootstrapping and triangulating conceptual models (Wimsatt, 2007). The central goal of a robustness analysis, according to Wimsatt (2007), is the distinguishing of the real from the illusory; the phenomenon under study from any byproducts of modeling assumptions or perspective. This rests on the assumption that adding multiple perspectives or redundancy in observation increases reliability. Parallel organization is a fundamental principle of organic design and robust engineering.

This happens often under high urgency and time pressure as well as divided attention.
Internal decision problems, like how much to think, what further information to gather to decrease the uncertainty, how far to plan ahead, and even what to think about. The choice of how best to approximate becomes a decision that is subject to the expected utility calculus itself.

Maximum expected utility (MEU): this is computed for each action as the average utility of the action under consideration with respect to the probability of states of the world.

The combinatorial complexity of real-world decision-making led to the heuristic models of bounded rationality by Herbert Simon in the late 1950s.

Russell & Subramanian (1995). They demonstrated that boundedness and optimality are not opposites at all. On the contrary, the two concepts need to go together if one wants to even approximate what "doing the right thing" in the real world could look like. The authors suggested the property of bounded optimality (BO) in 1995.

*Underline [page 7]:* bounded optimality

Bounded optimality is defined as the solution to a constrained optimization problem presented to an agent by its architecture and the real-time task environment. A bounded optimal agent behaves as well as possible given its computational resources in episodic real-time environments in which the utility of an action depends on the time at which it is executed.

*Underline [page 7]:* the utility of an action depends on the time at which it is executed

If "absolute efficiency is the aim, asymptotic efficiency is the game", bounded optimality might only be useful as a philosophical landmark. The weaker form of asymptotic bounded optimality, however, can be used as a theoretical tool to actually make tractable the problems that universal real-time systems are facing.

Resource rationality demonstrates the naiveté of a predefined set of prescriptions centered on more effortful thinking, more complex analysis, and mindware to increase rationality for an agent embedded in the real world.
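The MEU rule described above can be computed directly. A minimal sketch (the states, probabilities, and utilities below are made-up numbers, purely for illustration):

```python
import numpy as np

# Maximum expected utility (MEU): for each action, average its utility
# over the possible states of the world, weighted by the probability of
# each state, then pick the action with the highest expected utility.

states_p = np.array([0.6, 0.3, 0.1])  # P(state) for three world states

# utility[action, state]: payoff of each action in each state
utility = np.array([
    [ 5.0,  1.0, -2.0],   # action 0: good in the likely state
    [ 2.0,  2.0,  2.0],   # action 1: safe constant payoff
    [-1.0,  4.0, 10.0],   # action 2: gambles on the unlikely states
])

expected_utility = utility @ states_p      # expected utility per action
best_action = int(np.argmax(expected_utility))
```

Here action 0 wins (EU = 3.1) despite its bad worst case, because the state it exploits is probable. The combinatorial problem the paper points at is that in the real world the state space, the probabilities, and the utilities are not given in advance.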
Thinking longer cannot always be more rational, as the agent has to optimize a speed-accuracy trade-off.

Figure 2: Resource rationality establishes a lowered normative standard for bounded rationality, by integrating knowledge about fundamental computational constraints into the unachievable ideal of Bayesian decision theory and logic.

The more realistic normative model also allows psychologists in the classical study of heuristics and biases to differentiate between actual systematic errors and biases that are only classified as such as an artifact of unrealistic standards, which can be traced back to Bayes-optimal heuristics. Many heuristics, while not intuitively Bayesian in their content, are Bayes optimal taking the cost of computation and opportunity cost into account.

"Human rational behavior (and the rational behavior of all physical symbol systems) is shaped by scissors whose two blades are the structure of task environments and the computational capabilities of the actor".

*Underline [page 10]:* the computational capabilities of the actor

*Underline [page 10]:* the structure of task environments

Taking a speed-accuracy trade-off into consideration reveals the assumption that agents have an a priori sense of what the right computation for a given problem is. Two concepts help to understand why there is no optimal way of thinking independently of the environment: the bias-variance dilemma and the no-free-lunch theorems.

We can shift to a notion of uncertainty that is a function of a dynamic organism-environment system, sometimes referred to as "fittedness" (Felin et al., 2017; Kozyreva & Hertwig, 2019). The reconceptualization puts the question of the frame problem at the center of rationality and intelligence.

They argue that simple models, e.g. in marketing and finance, are sometimes on average more accurate than more complex, sophisticated models.

Figure 3: The total error of a model is decomposed as bias² + variance + noise.
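The bias² + variance + noise decomposition can be checked numerically. The sketch below is my own illustration (the sine target, noise level, polynomial degrees, and test point are arbitrary choices, not from the paper): fit a simple and a flexible model to many freshly sampled noisy datasets, then measure bias and variance of the predictions at one test point.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin                       # "true" function generating the data
sigma = 0.3                      # noise standard deviation
x_train = np.linspace(0, 3, 12)  # small training set
x_test = 1.5                     # point where we evaluate the error

def fit_predict(degree):
    """Fit a polynomial of the given degree to a fresh noisy dataset
    and return its prediction at x_test."""
    y = f(x_train) + rng.normal(0, sigma, x_train.size)
    coeffs = np.polyfit(x_train, y, degree)
    return np.polyval(coeffs, x_test)

def decompose(degree, trials=2000):
    """Monte Carlo estimate of bias^2 and variance at x_test."""
    preds = np.array([fit_predict(degree) for _ in range(trials)])
    bias2 = (preds.mean() - f(x_test)) ** 2
    variance = preds.var()
    return bias2, variance

b1, v1 = decompose(degree=1)   # rigid model: high bias, low variance
b9, v9 = decompose(degree=9)   # flexible model: low bias, high variance
# Expected total error at x_test is bias^2 + variance + sigma^2 in each case.
```

The rigid line systematically misses the sine peak (large bias²) but barely moves between datasets; the degree-9 fit tracks the target on average but swings wildly with each noisy sample. Which trade-off wins depends on the environment — exactly the efficiency-robustness point.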
Figure 3 shows visually the relationship between variance and bias. Figure 4 illustrates how bias and variance relate to overfitting, underfitting, and optimal model complexity.

In judgment and decision-making under uncertainty, no clear calculus of risky choices can be applied. In a multi-agent environment, the way information is presented can mean additional pieces of information or active deception that requires decoding of the symbolic representation. All of this takes place within a reference narrative, the background against which action is judged.

The economists Kay and King (2020) pointed out that under radical uncertainty, the most fundamental question is not about inductive or deductive reasoning, but abductive reasoning. The central question is about understanding "what is going on here".

*Underline [page 15]:* the most fundamental question is not about inductive or deductive reasoning, but abductive reasoning. The central question is about understanding "what is going on here".

Assuming a small world is a necessary condition (Savage, 1954) for applying Bayesian decision theory, although almost no real environment falls into this category (Binmore, 2009).

As we are confronted with larger worlds, the importance of abductive reasoning increases. Abductive reasoning – which may sound unfamiliar – means searching for the best explanation of what we observe (King & Kay, 2020). When events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, this kind of reasoning is indispensable. The core of abductive reasoning is finding out what is relevant to a situation, so-called relevance realization.

Arguments for the primary importance of subjective perception for rationality are made by Felin, Krueger, and Koenderink (2017).

The most important skill for achieving rationality in the "large world" of "ill-posed problems" is therefore the ability to focus on relevant information and the relevant structure of the information.
It is cognitively transforming the large world into an addressable problem formulation, or a "small world". However, the smaller search space is only achievable by first considering the larger search space and dividing the relevant from the irrelevant.

There can be no context-independent meta-theory of relevant information because relevance is context-sensitive. Instead of a theory of relevance, there needs to be a theory for a self-organizing mechanism for relevance realization. Second, the theory of the mechanism is neither representational nor syntactic, but economic. Third, instead of being instantiated by a completely general-purpose learning algorithm, it must involve competition between multiple competing learning strategies.

*Underline [page 17]:* There can be no context-independent meta-theory of relevant information because relevance is context-sensitive. Instead of a theory of relevance, there needs to be a theory for a self-organizing mechanism for relevance realization. Second, the theory of the mechanism is neither representational nor syntactic, but economic. Third, instead of being instantiated by a completely general-purpose learning algorithm, it must involve competition between multiple competing learning strategies.

Integrating features such as frequency and invariance, rather than following a strict, context-insensitive rule.

Embodied dynamicism and the emerging framework: this more recent paradigm portrays cognition as a self-organizing dynamic system instead of a physical symbol system. Cognition and cognitive processes are no longer seen as outside the environment but considered to emerge from nonlinear and circular causality of continuous sensorimotor interactions between brain, body, and environment. Cognition is regarded to be a temporal phenomenon and needs a dynamic system perspective (Thompson, 2007).
Relevance realization is then the underpinning of cognition as the exercise of skillful know-how in situated and embodied action, not reducible to prespecified problem-solving.

This is analogous to the impossibility of a theory of biological fitness. It is not possible to create a fixed set of properties that constitute fitness overall. Depending on the circumstances and organisms discussed, the fittest properties may be completely different. Instead of being a fixed set of traits, fitness is a dynamic product of the organism-environment system and its fittedness. While a theory of properties constituting fitness is impossible, there can indeed be a theory of the mechanism, in this case natural selection, that brings forth biological fitness in a continuous process.

Likewise, there cannot be a theory of relevance, but there can be a theory of the cognitive mechanism for determining relevance. Instead of aiming for a theory of rationality, the question becomes one of the mechanism that realizes rationality.

*Underline [page 18]:* Instead of aiming for a theory of rationality, the question becomes of the mechanism that realizes rationality.

Relevance is never explicitly calculated by the brain at all; the phenomena that emerge as relevant are a result of the brain's attempt to dynamically balance economic requirements. While there is a selective constraint to be as efficient as possible, this efficiency can lead to a loss of latent pre-adaptive functions with long-term value. Consequently, there is also an opposing constraint to be resilient and handle environmental perturbations with redundancy and variation.
These oppositional processes are analogous to the bias-variance dilemma, or the sides of the conflict between optimality modeling and ecological rationality.

5.1 Relocating the Rationality Debate into Cognition itself

From this line of argumentation, we have arrived at the conclusion that while there is an ongoing debate between the approaches and solutions to the rationality question by the axiomatic and ecological approaches, the same conflict takes place in cognition. The conflict and trade-offs ultimately reflect a dynamic fight between different economic constraints and questions of efficiency and robustness in cognitive systems themselves.

The essential part of rationality lies in the process of continuously transforming ill-posed problems (large worlds) into well-posed problems (small worlds) via relevance realization (Vervaeke, 2012; 2013).

We relocate the validity of these methods into the socio-cultural-ecological ontological thicket.

*Underline [page 20]:* socio-cultural-ecological ontological thicket

"It is a kit of cognitive tools that can attain particular goals in particular worlds," writes Steven Pinker about rationality.

The paradigm of cognitivism locates computation (e.g. computational rationality) inside the individual's mind, instead of adequately identifying human computation as sociocultural activity. Computation does not reflect the properties of the individual but of the system in which the individual resides.

This does not mean ignoring the robustly empirical insights about cognitive processes in human cognition from the research built on said axiomatic assumptions (Ruggeri et al., 2021). Many prescriptions, notions of how to move closer to normative rationality, still hold. Exactly because these tools are part of the operation of the sociocultural system the human is embedded in, their knowledge and application are often a crucial adaptation for a higher fittedness of the organism-environment system in a world which exhibits features of modernity.
This line of argumentation adds a new meaning to the book title "Rationality in the modern world" by Stanovich (2009): namely, moving closer to the normative ideals of axiomatic rationality as a context-dependent adaptation to the properties of modernity.

Additionally, these points have implications for the philosophy of science, where it is overdue to let go of the "view from nowhere" (Vervaeke, 2019) and the omniscient Laplacean demon.

## Thoughts

Cognitive science and Buddhism. Cog sci and spirituality.

Vervaeke is a fan of [[Susan Wolf]] on meaning. He thinks she's right that we want connection to, and to value, things that are valuable independently of us. But her naturalism trips her up.

### M Crisis Lecture 2: Flow

Flow as a state where we are training our intuitions. We want training environments that generate good intuitions. Good implicit learning. Intuition example: how far away should you stand from someone?

Axial age: the alphabet led to more widespread literacy and trained second-order cognition, greater self-awareness due to writing down thoughts. Also coinage: money encouraged abstract thought.

### M Crisis Lecture 47: Heidegger

Truth as correspondence between statement and reality is grounded in a bunch of other stuff: practical concerns, values, our nature and identity. The grounds are not merely subjective experience. The grounds are what makes experience possible. {So intuition and experience and the subconscious... it's this huge structuring function.}

{Don't seek victory via one true theory.}

Remember the forgotten mystery of Dasein. {Must keep the relationship to mystery alive. Don't just recognise it then jump back into your favourite frameworks. Keep a relationship to your frameworks that remains aware of the mystery of being.}

{The stuff we are puts us in touch with things. Trust your taste.} {Enlightenment rationalism was a bit too strongly opposed to taste and intuition.}

### Jordan Peterson dialogue

Christianity is trying to integrate agape and logos.
Love as accelerating mutual disclosure. Love is its own way of knowing, a kind of noticing, attending.

Jerry Fodor makes a similar point to Derrida: the relevance of a proposition can't be captured within its syntax or its semantics. That's the main thing Derrida was on about.

JP: love is the best in me serving the best in you. JV: Is truth in the service of love? JV: the answer to nihilism is not propositional; it's to relearn, to remember how to fall in love with reality, to fall in love with being.

Knowing-how and knowing-that have separate neural circuitry. Religion is not primarily about asserting propositions for which there is no evidence. Music is how we do serious play with our salience landscaping.

[[=Heidegger]] critique of Nietzsche: he just inverted Christianity. Still relating to reality instrumentally.

### Anna Riedl interview

https://www.youtube.com/watch?v=c6Fr8v2cAIw

We've tried to distinguish humans by the claim to be rational beings: Aristotle (rational animal), then reinforced by Descartes (we are logical/computational in nature). Notion of the reasonable person in law. Relevance realisation is doing a lot of the work.

The great rationality debate. John: the ideal concept of rationality is computationally impossible for a finite system. The computational approach runs into this problem of combinatorial explosion. Bounded rationality: **we have to first limit the problem space. Only after that can we use more algorithmic processes**. We zero in on relevant information; that capability is doing a lot of the heavy lifting.

Book that influenced John Vervaeke: Minimal Rationality, Christopher Cherniak.

Kahneman and Tversky develop an axiomatic theory of rationality, e.g. maximise expected utility, then measure distance from that. This approach was criticised by Gigerenzer. "Ecological Rationality", sometimes called the "Fast and Frugal" approach. Central claim is about ecological validity: is it beneficial (adaptive) to do longer calculations? Not always.
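That "not always" can be made concrete with a toy resource-rationality calculation (entirely my own illustration; the accuracy curve and cost rate are arbitrary): accuracy from deliberation saturates, while the opportunity cost of delayed action keeps growing, so net utility peaks at a finite amount of thinking.

```python
import numpy as np

# Toy model: accuracy improves with deliberation time but with
# diminishing returns; delaying action has a linear opportunity cost.
t = np.linspace(0, 10, 1001)       # deliberation time
accuracy = 1 - np.exp(-0.5 * t)    # saturating returns to thinking
opportunity_cost = 0.08 * t        # cost of acting later
net_utility = accuracy - opportunity_cost

t_best = t[np.argmax(net_utility)]  # optimal amount of deliberation
# Analytically the peak is at t = 2 * ln(0.5 / 0.08) ≈ 3.67: an
# interior optimum, so "think as long as possible" is not the answer.
```

Past the peak, every extra unit of thinking buys less accuracy than the delay costs — computing for longer makes the agent strictly worse off.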
What actually works, what is actually adaptive? Two things to attend to:
1. Limited computational resources
2. The environment

**We have to talk about what kind of environment the reasoner trying to be rational is in.** In certain environments, things that Kahneman and Tversky call "biases" actually operate much better than the "ideal" algorithms that come out of the *a priori* theory. You can really show that in certain messy environments characterised by great uncertainty, a "naive" heuristic will on average do better than the formal probability theory or decision theory kind of thing. It's more robust. There's an efficiency-robustness tradeoff. Bias-variance dilemma.

Gigerenzer: the axiomatic approach to rationality has a "bias bias". The "1/N heuristic" for diversification is, on the axiomatic approach, called the "naive diversification bias". When you have a lot of data in investing you can actually use the optimal solution. But if there's a lot of uncertainty, then a simple diversification strategy will on average outperform a more complex one. This is a tradeoff between efficiency and robustness. Very important for any organism in the real world. #todo - what are they referencing here?

Kahneman and Tversky spoke about a speed-accuracy tradeoff: we use heuristics because they're fast, but they are biased and fail sometimes, and we pay that price. When you can afford to take more time, you'll be more accurate. But what Gigerenzer is saying is **no**—that tradeoff is not always the case. There are many cases where computing for longer doesn't mean you'll get more accurate. Critical: you have to factor in the opportunity costs of further computation, but also the opportunity costs in the environment. Computational rationality when you are an agent embedded in an environment. Many heuristics are in fact optimal once you integrate those two things: cost of computation and opportunity costs. We need a realistic standard of optimality, e.g.
**resource rationality**, instead of "perfect" optimality, which is just not possible when you're embedded in a real-life environment.

You have to shift away from a framing of decision theory or quantitative analysis, because **the fundamental problem is overcoming an ill-defined problem, overcoming a frame problem to generate a well-posed problem—and THEN you can have a numerical analysis**.

**Relevance realisation**: relevance is a property that is central to all cognition. There is a combinatorially explosive amount of information outside ourselves, and within long-term memory all the possible ways we could connect and access it. If you tried to calculate all that you would never finish. But we're doing that right now. Somehow we ignore most of the irrelevant information; we shrink the problem space down so that we are very often making the right connections, doing what's appropriate in the situation. And you also have a capacity for correcting that. I take the phenomenon of insight to be a case where you've done the shrinking of the problem, you've done the framing, but you've done it incorrectly, you've zeroed in on the wrong information. And then you have an "aha": you realise you were treating X as irrelevant when it's not, or Y as relevant when it's not. The process of relevance realisation is dynamic and self-correcting.

The representational level can't be the level at which I generate relevance realisation—the representation is always dependent on the relevance, e.g. a TV remote could be a weapon. [PH: compare Heidegger on ready-to-hand.] Logical/syntactical level: any attempt to specify a rule can't specify the conditions where the rule applies—infinite regress. You need a non-syntactic judgement to apply the rule. C.f. Wittgenstein. The relevance of a proposition is constantly varying even though its logical structure is constant. We have to drop to a bioeconomic level, pay attention to the cost of computation—not just metabolic but also economic.
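Gigerenzer's 1/N diversification point above can be checked with a small simulation (all parameters are made up for illustration; all assets share the same true mean, so the only real information in a short return history is noise): chasing the best-estimated asset is far riskier out of sample than naive equal weighting.

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_obs = 10, 12       # many assets, little data: high uncertainty
true_mean, sigma = 0.05, 0.2   # every asset has the same true mean return

def trial():
    # Estimation window: a short, noisy history of returns.
    history = rng.normal(true_mean, sigma, (n_obs, n_assets))
    est_mean = history.mean(axis=0)
    # "Optimal-looking" plug-in strategy: bet everything on the asset
    # with the best estimated mean (which here is pure noise-chasing).
    w_plugin = np.zeros(n_assets)
    w_plugin[est_mean.argmax()] = 1.0
    # 1/N heuristic: ignore the estimates and diversify equally.
    w_equal = np.full(n_assets, 1.0 / n_assets)
    # One out-of-sample period of returns.
    future = rng.normal(true_mean, sigma, n_assets)
    return w_plugin @ future, w_equal @ future

results = np.array([trial() for _ in range(5000)])
plugin_var, equal_var = results.var(axis=0)
# Both strategies earn the same expected return, but 1/N's out-of-sample
# variance is roughly sigma^2 / N versus sigma^2 for the concentrated bet.
```

With lots of clean data the plug-in estimates become informative and the comparison can flip — which is exactly the ecological point: which strategy is rational depends on the environment.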
The brain is always trying to evolve how it constrains the problem space. It does this by a process—we argue—analogous to evolution: variation, then selective pressure. If you're an organism, you don't want to be the variant mutant, because you're going to die. But if there's no variant mutant the species can't evolve. So how do you solve that tension? You have degeneracy or robustness: a lot of overlap in the genome that doesn't make much difference in the phenotype, but as soon as you need the difference, the system can shift and adapt suddenly. It's an evolutionary process that generates what is relevant. You can't make an *a priori* statement of what is relevant. **You can't optimally overcome framing problems.**

Savage: Bayesian decision theory only applies to small worlds. In the large world it doesn't work at all.

David Marr: Type 1 theories and Type 2 theories. With a Type 1 theory you can explain some mechanism. Type 2: the process is its own simplest description.

What is relevant? How can I create a well-posed problem? **The big question is: how does an agent look at the world as a complex problem and come to an insight that generates a well-posed problem, to which you can then apply cultural artefacts and tools like probability? The algorithms are the soft problem of rationality; this is the hard problem of rationality, so to speak.**

What does this mean for improving human rationality day-to-day? Lots of insights from the axiomatic approach still apply: often a key thing is picking between System 1 and System 2. Correcting the automatic process. Step back, overcome your current framing. "Active open-mindedness" -- #todo what is that??

**Probability theory and rational choice are super central, but they're not the essence of rationality. They are cultural artefacts, very powerful tools, and we should teach them, but I think they are tools for insight.**

Metaperspectival ability: there is evidence that mindfulness practice enhances it.
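The "variation, then selective pressure" gloss can be sketched as a toy selection loop (entirely illustrative; the target string and parameters are arbitrary, and real relevance realisation is nothing this simple). Mutation supplies variation, truncation supplies selection, and keeping survivors unmutated plays the role of robustness; fitted solutions emerge without any a priori specification of the answer.

```python
import random

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "relevance"   # stand-in for a well-fitted trait

def fitness(s):
    # Fittedness here is simply agreement with the (unknown to the
    # agent) target, position by position.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    # Variation: each position may be replaced by a random letter.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(generations=200, pop_size=50, n_survivors=10):
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:n_survivors]    # selective pressure
        # Keep survivors intact (robustness) and refill the population
        # with their mutants (variation).
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - n_survivors)]
    return max(pop, key=fitness)

best = evolve()
```

Turn the mutation rate to zero and the population stalls at its initial best; crank it too high and survivors can't retain their gains — the same efficiency-robustness tension the notes describe.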
What other practices might help with training rationality?

Anna: The role of self-knowledge. I will always see the world through who I am. The more I understand myself, the more I can filter that out from how I perceive the world. E.g. maybe I am a suspicious person, so I will constantly think maybe someone is plotting something. But I can remember: oh, people told me I'm always kinda suspicious. So maybe it's not a property of the world but a property of me looking at the world. Then I can overcome the misperception.

John: I love this point. This was the original Socratic proposal. **At the core of Socratic rationality is Socratic self-knowledge; it is not your autobiography, it's more like your operating manual. What are your functions and how do they work?** A profound kind of self-knowledge is actually a factor in being rational in the way that you and I are talking about.

John: **Elliot Paul and Agnes Callard have emphasised the way that rationality is a transformational process. You can't infer your way through it, you can't calculate your way through it. But we have to include that in our account of rationality, because if the development of rationality is not itself a rational process you get into all kinds of performative self-contradictions. Callard calls it proleptic rationality.**

John: Self-knowledge is not only retrospective, it also has to be prospective: what kind of person am I, but also what kind of person am I aspiring to be, and how well is that process going? And that could also be a central thing people would need to have in order to be proper rational agents. What do you think of that?

Anna: Sounds great, but dunno.

Piaget: learning as transformational, changing the functions you're using, not just the information you're processing with existing functions.

John: this is where the topic of rationality blends into the topic of wisdom. People are getting profound self-knowledge, increasing capacities for relevance and insight and reasoning.
Think of what we mean by a wise person: they have tremendous insight, they can zero in on the relevant information, they have profound self-knowledge both retrospectively and prospectively, they aspire well. As Socrates said, they know what to care about.

Teppo Felin. Point he makes: **obviousness is never part of the environment**. Different people see things very differently. He connects this to entrepreneurship, the way that some people just see things where others don't.

Original literature assumption: epistemic rationality vs instrumental rationality. Assumption: the better your beliefs are calibrated to the actual structure of the world, to actual reality, the better off you are. Donald Hoffman: interface theory of perception. The way you perceive things, the way your attention works, is for your being fit in the environment. **Often, non-veridical strategies dominate the fitness landscape.** E.g. overconfidence is in many ways the dominant strategy. **Instrumental rationality is the core; epistemic rationality is just a tool to reach it.**

**John: the great insight of pragmatism was that we don't just pursue truth, but relevant truth.**

Gary Klein on expertise. Mention of [[=Mervyn King & John Kay]]. They have noticed this issue.