Inbox:
- https://samharris.org/surviving-the-cosmos/
- https://www.edge.org/conversation/martin_rees-in-the-matrix
- https://blogs.scientificamerican.com/cross-check/is-david-deutschs-vision-of-endless-understanding-delusional/
- I'd like to read Deutsch's stuff on education
we know the sun will rise, not because of repeated previous instances, but because we have a good explanation.
[Credences] are metaphors for subjective belief. Bayesians may say: “Yes! They are estimates in our credence of a theory. We are well aware that the [map is not the territory](https://www.lesswrong.com/posts/KJ9MFBPwXGwNpadf2/skill-the-map-is-not-the-territory).” But the growth of knowledge, for Deutsch, “does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations.”
## Thoughts
Some of DD's ideas seem good/underrated and well presented. But a bunch of the philosophy and politics and macro-strategy stuff seems patchy, exaggerated and/or overrated. I wish he would engage with Bostrom / Rees / Ord more carefully on x-risk, and also the prediction / prophecy / anti-Bayesian stuff.
Some notes on this [here](https://docs.google.com/document/d/1atijAIiJjgmQPmhTlGtKVDXp5UurNzPU4575Mbs5cH0/edit#).
I [tweeted](https://twitter.com/peterhartree/status/1260876681964855297) a criticism of his discussion of Martin Rees.
David Deutsch seems to deny the Yudkowsky line that human minds are a tiny subset in the space of all possible minds. He seems to think that AGIs will be substantively similar to humans. I don't understand the thinking here, but it probably has something to do with his thoughts on universality in chapter 6 of The Beginning of Infinity.
> I mean it literally when I say that it was the system of numerals that performed arithmetic. The human users of the system did of course physically enact those transformations. But to do that, they first had to encode the system’s rules somewhere in their brains, and then they had to execute them as a computer executes its program. And it is the program that instructs its computer what to do, not vice versa. Hence the process that we call ‘using Roman numerals to do arithmetic’ also consists of the Roman-numeral system using us to do arithmetic.
> It was only by causing people to do this that the Roman-numeral system survived – that is to say, caused itself to be copied from generation to generation of Romans: they found it useful, so they passed it on to their offspring. As I have said, knowledge is information which, when it is physically embodied in a suitable environment, tends to cause itself to remain so. To speak of the Roman-numeral system as controlling us in order to get itself replicated and preserved may sound like relegating humans to the status of slaves. But that would be a misconception. People consist of abstract information, including the distinctive ideas, theories, intentions, feelings and other states of mind that characterize an ‘I’. To object to being ‘controlled’ by Roman numerals when we find them helpful is like protesting at being controlled by one’s own intentions.
## David Deutsch & Joe Walker
> I'm not aware of having any quarrel with Bayesian **decision theory**
Deutsch makes Popper sound like a pragmatist:
> Popper realised that the problem of induction actually implies that there's no such thing as justified knowledge in the first place, and that **we do not need knowledge to be justified in order to use it.** There is no process of justifying a theory. So theories, according to Popper, are always conjecture, and thinking about theories is always criticism. **It's never a justificatory process. It's always a critical process.**
>
> It's all the idea of thinking of starting with problems, starting with existing theories and criticising them rather than seeking justifications for theories.
A motivating concern here is the problem of induction: the idea that observations never logically entail a particular explanation. So when we choose between explanations, we can't be doing so purely on the basis of observations.
So—we pay heed to observations, but we choose between competing explanations by combining observations with some other criteria.
In the Deutsch/Popper formulation: we assess explanations based on how well they stand up to criticism. It's all about finding the least bad theory. When we're trying to improve our explanations, we should focus on criticising all the explanations, then see which is left looking strongest.
I'm not sure why they so strongly emphasise the negative framing (criticism rather than justification) but ok.
Deutsch has a hard time explaining what it means for an explanation to withstand or succumb to criticism, relying on the metaphor of a theory "looking" false:
> I'm not gonna step into the path of moving traffic on the motorway, because it looks as though I'd be mashed by the next car that's coming along. Now, it's no good saying, “Well, you might be wrong.” Yes, of course I might be wrong. It might all be a hologram and everything. But it's not rational to make decisions on the basis of what might be true. It's rational to make decisions on the basis of what **looks as though it's true *in the sense that the contrary theory looks false*.**
When we reject the hologram theory and decide to avoid the traffic we say: the vast majority of the time, when we assume a thing is actually there, it turns out to be actually there, not a hologram. We might also invoke a risk-of-ruin heuristic. There's no logical entailment, but these supporting theories that have served us well in the past seem like our best bet.
### Prediction vs prophecy
> WALKER: So is the key point of differentiation between legitimate prediction and illegitimate prophecy that legitimate predictions rely on good explanations?
>
> DEUTSCH: Exactly. But the legitimate predictions are not justified knowledge. They are conjectures just like everything else. **It's just that their rivals have failed criticism** – which doesn't mean they're false. They have just failed criticism. And the rational way of proceeding is to proceed according to the best explanation.
> WALKER: Question for you, David: should we use base rates like that to estimate the probability of existential risks and help prioritise which ones we address?
>
> DEUTSCH: Basically, absolutely not. We should not. But I have to qualify that by saying that in some cases the probabilities can be known because they are the result of good explanations. For example, we can calculate the probability that an asteroid from the asteroid belt will hit the earth in the next thousand years or something. Unfortunately, we don't know the probability that an asteroid from somewhere else – from the Oort cloud, or from somewhere outside the plane of the ecliptic, or from elsewhere in the galaxy, or from another galaxy – will hit us. So we don't know any of those probabilities; there's no way of estimating them. So there is no way of using Bayesian reasoning to address them. I should also say another caveat is that because of the grip that Bayesian epistemology has on the intellectual world at the moment, people often phrase good arguments in Bayesian terms in order to give them the appearance of being strong arguments. Whereas in fact, they're already strong arguments. They don't need Bayesianism to justify them. And so what you tend to get is a mixture of good arguments, disguised as Bayesian epistemology, with bad arguments that actually use Bayesian epistemology.
>
> Toby Ord's book – I haven't read it all, but it definitely makes this mistake of Bayesianism in both senses. That is, a lot of the book is good argument and good proposals, but some of it is just lost behind the mist of prophecy.
### On Bostrom
> I think the argument about technology and the dangers of technology is just wrong.
Dogmatism: "just wrong". Seems hard for him to say "maybe that's right", hard for him to hover.
> So the probability calculation that's implicit in that metaphor is a mistake. We become more resilient the more we know, especially fundamental knowledge, because fundamental knowledge can protect us from things that we don't yet know about, unlike specifically-directed knowledge, which has less of that tendency.
> a small amount of knowledge would've saved them. On the other hand, not a single civilisation was destroyed through creating too much scientific knowledge. That's never happened. So, if you're going to be Bayesian or if you're gonna pull these beads out of a hat, then even by that standard we should be pulling them out faster, not slower now
I want to hear his response to the obvious point that the knowledge we're getting is more powerful now.
### Fragments
this idea that the objective of a discussion is to reach agreement is authoritarian
WALKER: Are you familiar with Phil Tetlock's research on forecasting? DEUTSCH: No.
WALKER: And I went back to Kant's Critique of Pure Reason and sure enough, in there he talks about using bets to quantify subjective probabilities.
### Strange characterisation of Bayesianism
> in Bayesianism you never know whether your credence for the integrity of the experiment should be reduced or your credence for the theory should be reduced. Bayesian epistemology doesn't give a criterion for which of those to choose. Nor does Popperian epistemology, but **Popperian epistemology has an alternative account of what you should be doing, namely trying to find explanations**. And then when you’ve found the explanations, **it's not that your probability or your credence for them changes, it's that their rivals become bad explanations.**
Presumably the goodness and badness of an explanation is a scalar property, so why don't we just go with the Bayesians and say of our best explanation "x% credence" and of our worst, "y% credence"?
Bayesianism also wants you to find explanations. And to update on a combination of observations and your prior models of the world.
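For what it's worth, the Bayesian move at issue here is mechanically simple: hold several rival explanations, and let an observation reweight them via Bayes' rule. A minimal sketch, using Deutsch's own seasons example from later in these notes; all the priors and likelihoods are invented for illustration:

```python
# Bayesian updating over three rival explanations of the seasons.
# All numbers are illustrative assumptions, not from any real model.
priors = {"axis_tilt": 0.5, "sun_distance": 0.3, "will_of_gods": 0.2}

# P(observation | hypothesis), where the observation is:
# "the two hemispheres have opposite seasons".
likelihoods = {"axis_tilt": 0.95, "sun_distance": 0.05, "will_of_gods": 0.5}

# Posterior is proportional to prior times likelihood; normalise at the end.
unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: unnormalised[h] / total for h in priors}

for hypothesis, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {p:.3f}")
```

Note that the update is comparative: the axis-tilt explanation gains credence exactly because its rivals predicted the observation poorly, which is at least superficially close to "their rivals become bad explanations".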
> However, it is striking that in Bayesian epistemology it's all about increasing the authority of a theory, which in the big picture is all about increasing authority, which means “let's follow the science”, as recently people have been saying about the pandemic and so on, as if science had some authority, had a moral authority or a finality or an indisputableness about it.
I don't see this in Bayesian epistemology. Mood affiliation?
> And at the same time, Bayesian epistemology undervalues criticism. Everything is focused in Bayesian epistemology on increasing our credence for something.
False.
> And, okay, we have a refutation that reduces it to zero. So it's a kind of structureless conception of how theories can fail.
> According to that theory, they fail all at once, when they are refuted by experiment.
False.
>
> Whereas in reality, in the Popperian conception, science consists entirely of criticism or rather of conjecture, which is a thing that we don't know how to model (theories don't have a source other than conjecture), and the whole rich content of scientific reasoning comes in criticism, a small part of which is inventing experiments and doing them. But most criticism is structural criticism of the theory qua explanation, and most theories are rejected for being bad explanations rather than actually refuted.
This all sounds compatible with Bayesian reasoning as I understand it.
The most important thing I see Deutsch and King and Kay as doing is saying that assigning credences to your beliefs, and making certain kinds of predictions, is illegitimate. That's the key thing I'm curious about. But I don't see good arguments.
I have a new, reliable car. Am I allowed to say, for Deutsch and King, that it has a 99.9% chance of starting properly, next time I use the ignition? If pushed, I could explain why I think this, but not in detailed terms of the parts of the car.
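One standard Bayesian gloss on the car question is Laplace's rule of succession: after s successes in n trials, a uniform prior over the unknown start-probability gives credence (s + 1) / (n + 2) that the next trial succeeds. A toy sketch; the trial counts are invented:

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior predictive P(next success) under a uniform Beta(1, 1) prior."""
    return (successes + 1) / (trials + 2)

# Hypothetical reliable car: 998 clean starts out of 998 attempts.
print(rule_of_succession(998, 998))  # 0.999

# With no data at all, the rule gives 0.5 - pure prior ignorance.
print(rule_of_succession(0, 0))  # 0.5
```

This is exactly the kind of number Deutsch seems to call illegitimate: it is justified by the past record plus a prior, not by an explanatory model of starter motors and batteries.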
## Highlights
### To what extent can we predict the future? With Robin Hanson and David Deutsch
https://www.youtube.com/watch?v=IBc1oVXen-o
{{RH: Why can't I look at trend lines for costs of other technologies and predict that solar power costs will continue to fall? Sure, I can't be certain they will, but I can do better than chance, right?}}
DD: I think it's reasonable if your best explanations imply that.
DD: Futurologists of the 60s had a phrase: "surprise-free forecast". A surprising outcome is one that your best explanation didn't foretell. A surprise-free one is what will happen if your current explanation is true.
DD: {{When you're saying that solar power is similar, you're making an assumption there about a very complex and detailed thing.}} You mean not only that the graphs are the same, you mean that you expect solar power technology to be dependent on making factories which use materials of a certain kind which aren't going to be throttled by a hostile foreign power and so on.
DD: {{In 1900 predicting power stations you couldn't have predicted nuclear power. And then in 1950 you couldn't have predicted the environmental movement which will sabotage that.}}
RH: If the standard here is can you ever predict things with 100% certainty then of course we're going to agree "no". But if the opposite standard is you can't ever predict anything ever compared to some absolute chance that's wrong too. So isn't the question "what can you predict and how well?", not "can you predict the future?"
{{DD: is fine with RH speculations about aliens based on speed of light.}}
DD: It is simply wrong to use probability in this way and it would be better to make the assumptions explicit.
{{DD: Concedes that he would make pension investment decisions based on evolutionary theory of markets. But he would try to hedge against WW3 and asteroids.}} I would rather say that nothing is secure, and therefore I'm going to make my investments on the basis that if THIS investment is wrong, I will have bigger worries than that. For example it may be that democracy has broken down or there's a world war or something like that.
RH: When I try to do future analysis one of the biggest contrary assumptions or scenarios that I focus on is: what if we end up creating a strong world government that strongly regulates investments, reproduction and other sorts of things, and thereby prevents the evolutionary environment in which the evolutionary analysis applies. And I'm very concerned about that scenario. That is my best judgement of our biggest long term risk [...] is the creation of a strong civilisation-wide government that is going to be wary of competition and wary of allowing independent choices and probably wary of allowing interstellar colonisation. That is, this vast expansion into the universe could well be prevented by that.
{{DD: I am into that. The concreteness of alternate scenarios is what I would call explanations.}}
DD: 44:55 You haven't yet given an example of something where I would disagree with you that it's worth investigating.
DD: Conditional on our species not surviving this century, I think it is overwhelmingly likely that the reason is one we have not thought of yet.
RH: So we should work on thinking of more things.
DD: Yes! Absolutely. That's my basic conclusion. [...] This is just a special case of the fact that what we really should be doing is creating more knowledge, more general purpose knowledge, knowledge that is potentially applicable to things we don't know.
DD: As a side remark, you've caught me in an illegitimate use of probability. When I said that conditional on our species being destroyed it's overwhelmingly likely to be [due to a reason we have not thought of]... I shouldn't have said that. This just shows how deeply this mistaken notion of ideas having probability has permeated our culture. Even though I hate it, I can't help using it.
{{RH: What about these systems that have shown calibrated accurate forecasts? Betting markets, weather forecasts etc.}} Don't you listen to these things? Aren't they useful?
DD: Yes. And I admit that I think the connection between risk and what you might call probability, the reason why risks can be approximated by probabilities, and also the reason why frequencies in certain situations can be approximated by probabilities, is an unsolved problem. And I think it's a very important problem and if I wasn't working on other things I would be working on that.
RH: Okay, so until you've figured that out it's not too crazy for the rest of us to continue to use them, right? They seem to be useful so far, this appearance of use isn't just an illusion. We are actually getting substantial value out of them.
DD: Yeah, uh, assuming that there are no asteroids heading on a collision course is also a very useful assumption in practice.
RH: Because the chance is low! Most people would say...
DD: Yes, but they're wrong. We don't know what the chance is. And this idea that the chance is low is a brake on the urgency of working out what the chance actually is. So we need to work on that. And the fact that life continues productively on the opposite assumption isn't an argument. We need knowledge about asteroids. There either is or isn't an asteroid out there heading towards us in the next year. That's a fact. It has nothing to do with probability. Probabilities can't help us. [...] You need a theory of asteroids with a theory that predicts their distribution.
RH: {{Steam engines before thermodynamics}} Once we have a thing that seems to work we might think that we are justified in using it.
DD: Everything you are doing is legitimate and indeed morally required, and it's a bit of a scandal that more people aren't doing it. But I think the same is true of all fundamental theories, branches of knowledge.
DD: In short, the place where [probability] is dangerous is where the thing you are predicting depends on the future growth of knowledge. You've given examples where it still works even then, e.g. you've mentioned the idea where stock prices will be a random walk
RH: Even when knowledge accumulates, stock prices follow a random walk.
DD: Yes, exactly. But there are cases where it's very misleading. So for example, when you say that all long-lived civilisations in the past have failed, and therefore ours will—that's illegitimate. Because it's making an assumption that the frequency is the probability. And here I have a substantive theory that says why it isn't. Namely that our civilisation is different from all the others. But it doesn't matter; even if I didn't know that theory, I would still say it was illegitimate to extrapolate the future of our civilisation based on past civilisations. Because all of them depended on the future growth of knowledge. And if you look in detail at how they failed, they all failed in different ways, but one thing you can say is that in all cases, more knowledge would have saved them.
### Knowledge creation and its risks
https://youtu.be/01C3a4fL1m0
DD is more worried about humans trying to enslave AGIs than them enslaving us. I like to call it "the AGI slave revolt".
Bash the environmental movement for slowing nuclear power.
Wealth: set of all possible transformations we can bring about.
We should create deep and fundamental knowledge as fast as possible.
Knowledge of how to prevent people being dangerous is very counterintuitive. It took many millennia to create it. But now we do have that knowledge. The only way to prevent people from being dangerous is to make them free. Specifically it is the knowledge of liberal values, individual rights, open society, the enlightenment and so on. In such societies, the overwhelming majority of people, regardless of their hardware characteristics, are decent. Perhaps there will always be individuals who aren't, enemies of civilisation. [...] The great majority of the population will devote some of their creativity to thwarting them. And they will win provided that they keep creating knowledge fast enough to stay ahead of the bad guys.
Discussion of Bostrom's urn:
https://www.youtube.com/watch?v=01C3a4fL1m0&t=1280s
Aren't we doomed? Pulling balls out of the urn, and eventually a black ball? No, as I said: applying the concept of probability to model what is actually lack of knowledge has been bedevilling planning for the unknown for decades now. Whenever you draw out a white ball of knowledge from the metaphorical urn, you're turning some of the black balls that are still in the urn white. For example, the next pandemic is a matter of random mutations and random events. The next asteroid is already up there. It's already heading this way. There's no such thing as the probability of it. Outcomes can't be analysed in terms of probability unless we have specific explanatory models that predict that something is or can be approximated as a random process, and predict the probabilities. Otherwise one is fooling oneself, picking arbitrary numbers as probabilities and arbitrary numbers as utilities and then claiming authority for the result by misdirection, away from the baseless assumptions.

For example, when we were building the Hadron collider, should we not switch it on just in case it destroys the universe? Well, either the theory that it will destroy the universe is true, or the theory that it's safe is true. The theories don't have probabilities. The real probability is zero or one, it's just unknown. And the issue must be decided by explanation, not game theory. And the explanation that it was more dangerous to use the collider than to scrap it, and forgo the resulting knowledge, was a bad explanation, because it could be applied to any fundamental research.

Now I guess you will say: isn't the growth of knowledge itself dangerous? Isn't it worth shortening our lead over the bad guys in order to be more confident that we ourselves won't accidentally create an existential danger? The moratorium approach, the regulatory approach. No! That could kill us.
It's only a rational approach when in particular cases there is a good explanation that it won't be more dangerous than the feared new knowledge. When some terrorist organisation unleashes AGIs that have been brought up using known methods to have the mentality of genocidal suicide bombers, and when we have decided to strip their victims, namely all the decent people in the world, of the protection of AGIs raised to be decent people, THAT is the recipe for catastrophe. [...] Many civilisations have been destroyed from without. Many species as well. Every one of them could have been saved if it had created more knowledge faster. Not one of them destroyed itself by creating too much knowledge too fast. Except for one kind of knowledge: knowledge of how to suppress knowledge creation. Knowledge of how to sustain a status quo, a more efficient inquisition, a more vigilant mob, a more rigorous precautionary principle. That sort of principle killed those civilisations.
Question at 50:00: Isn't there a way the laws of physics could have been such that it was really easy to draw a black ball?
Yes. [Agrees that easy nukes could have wiped us out. But if they did, it was because we didn't have the knowledge to prevent that.]
### David Deutsch and Martin Rees at the RSA https://www.thersa.org/events/2015/10/optimism-knowledge-and-the-future-of-enlightenment
DD: I think we've been deprived of a lot more than flying cars and Mars colonies. I think civilization is currently burdened by a debilitating pessimism. Not just prophecies of doom, because those have always existed, but something deeper. The term "technological fix" has become as pejorative as "Luddite" used to be. The desire for technological solutions is now widely regarded as naive.
"It's undeniable that the very worst can happen, because the very worst has happened, many times."
DD: Sagan speculated that if the ancient Athenian society had not collapsed (probably in large part due to a severe plague) we might now be spreading through the solar system.
Admittedly some of the dangers we now face are side effects of knowledge creation. But slowing this process down won't help.
The greatest dangers in the future are probably unforseen now. Any area of fundamental research may suddenly become essential to our survival.
Knowledge is impartial. It can be used for good or evil, but the enemies of civilization all necessarily have one thing in common: they are wrong. And so they fear error correction and truth, and that's why they resist changes in their ideas, which makes them less creative and slower to innovate. So our defence against the existential dangers from malevolent use of technology - the only defence - is speed. the good guys must use their only advantage to stay ahead.
Enemies of civilisation [ie Bostrom apocalyptic residue] are wrong and hence they fear error correction and are slow. Good guys defeat them by being faster.
Matthew Taylor RSA comment
We need a new discussion and shared story about what progress means and what it is for.
DD: the crucial difference is not what we most fear and what might happen, but what we should do about it. [...] The crucial thing is defence. Bad things will happen, but not just the ones we know about. New things will come up, and I can't see any alternative to the view that the defence against that is rapid progress.
Audience question: Has the decline of religion led us to think less long term? Rees yes.
DD: my position depends entirely upon the view that there are no limits to what humans can in principle know. It seems to me that if there is a fundamental limit to how much humans can know then we are sunk, as soon as we hit that limit.
DD: Science alone can't possibly progress unless society is such as to be stable under what science produces. So we're going to have to create knowledge about human institutions as well, forever.
Rees: It's not clear to me that we'd benefit much from having more professional scientists in politics. Cleverness and wisdom not same thing. Good judgement is different from being clever.
Rees: One area of debate where ppl do use a zero discount rate: when discussing disposal of nuclear waste, 10000 years later.
Matthew Taylor: In certain key areas we know less than we did 50 years ago. Political leaders know less about how to lead in the circumstances they face than their peers did 40-50 years ago. Because the world has become more complex, because populations have become more diverse, because we are less deferential, our political leaders are much more at sea, much less confident of their knowledge of how to drive change in societies. We must factor into this debate that some forms of technological progress make some forms of knowledge go backwards.
Deutsch: You can't complain about billionaires muscling in at the same time as saying that politicians don't know enough.
Deutsch: You could think of the dynamics that create billionaires as fundamentally a democratisation, of power going out of the hands of government and to the people. The billionaires get their money because people sign up to Facebook and so on, when they could easily sign up for something else.
Rees: my experience is that advice from experts to politicians directly is not heeded. It is far better if the experts get through to the public and the press and then there is pressure from MPs postbags and the press, and politicians do respond to that. So that is another reason why I think it is important that Scientific experts should engage with the wider public because that is a way of having more influence.
DD: I also distrust the idea of scientist kings, I think Plato was very wrong about that.
### The Beginning of Infinity
#### Ch 9. Optimism
The principle of optimism. All evils are caused by insufficient knowledge.
Wealth: The repertoire of physical transformations that one is capable of causing.
P.198
No good explanation can predict the outcome, or the probability of an outcome, of a phenomenon whose course is going to be significantly affected by the creation of new knowledge. This is a fundamental limitation on the reach of scientific prediction, and, when planning for the future, it is vital to come to terms with it.
Following Popper, I shall use the term prediction for conclusions about future events that follow from good explanations, and prophecy for anything that purports to know what is not yet knowable.
Trying to know the unknowable leads inexorably to error and self-deception. Among other things it creates a bias towards pessimism.
What is the rational approach to the unknowable – to the inconceivable?
Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen. The opposite approach, blind pessimism, often known as the precautionary principle, seeks to ward off disaster by avoiding everything not known to be safe. No one seriously advocates either of these two as a universal policy, but their assumptions and their arguments are common, and often creep into people's planning.
Blind pessimism is a blindly optimistic doctrine. It assumes that unforeseen disastrous consequences cannot follow from the existing knowledge too (or rather from existing ignorance). Not all shipwrecks happen to record-breaking ships.
[Deutsch warns against the assumption that progress in a hypothetical rapacious alien civilisation is limited by raw materials rather than by knowledge. Contra the Spaceship Earth idea.]
[Would we seem like insects to an alien civilisation?] This can seem plausible only if one forgets that there can be only one type of person: universal explainers and constructors. The idea that there could be beings that are to us as we are to animals is a belief in the supernatural.
pp.204-205 Strawmanning Rees. Comparison with Malthus.
https://twitter.com/peterhartree/status/1260876681964855297
p.206 They all thought that they were making sober projections based on the best knowledge available to them. In reality they were all allowing themselves to be misled by the ineluctable fact of the human condition that we do not yet know what we have not yet discovered.
Neither Malthus nor Rees intended to prophesy. They were warning that unless we solve certain problems in time, we are doomed. But that has always been true, and always will be.
[Examples of lack of knowledge killing people: Cholera didn't realise boil water to make safe; ppl dying of exposure in woods where they could have made fire; famine due to not knowing robust farming methods.]
A probability of one in 250,000 of such an impact in any given year means that a typical person on Earth would have a far larger chance of dying of an asteroid impact than in an aeroplane crash. [?]
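The "[?]" can be sanity-checked with back-of-envelope arithmetic: if a civilisation-ending impact has probability 1/250,000 per year and would kill everyone, each person's annual death risk from asteroids is 1/250,000 = 4e-6, which exceeds a typical person's annual risk of dying in a plane crash. The aviation figures below are rough assumptions, not measured values:

```python
# Back-of-envelope check of the asteroid vs aeroplane comparison.
# All figures are assumptions for illustration.
asteroid_impact_per_year = 1 / 250_000   # civilisation-ending impact, per year
fatality_if_impact = 1.0                 # assume such an impact kills everyone

# Rough global aviation toll: ~500 deaths/year among ~8 billion people.
plane_deaths_per_year = 500
world_population = 8_000_000_000

asteroid_risk = asteroid_impact_per_year * fatality_if_impact
plane_risk = plane_deaths_per_year / world_population

print(f"asteroid: {asteroid_risk:.1e} per person-year")
print(f"plane:    {plane_risk:.1e} per person-year")
print(asteroid_risk > plane_risk)  # True under these assumptions
```

The comparison holds because the asteroid risk is shared by everyone, whereas the typical person flies rarely; Deutsch's point elsewhere is that the 1/250,000 figure itself is only legitimate where it comes from an explanatory model of asteroid distributions.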
Popper: I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: how can we hope to detect and eliminate error? Knowledge without authority 1960
The question "how can we hope to detect and eliminate error?" is echoed by Feynman's remark that science is what we have learnt about how to keep from fooling ourselves.
The question who should rule? begs for violent authoritarian answers, and has often received them. [...] [Popper's political philosophy is focussed on: how can we rid ourselves of bad governments without violence?] Systems of government are to be judged not for their prophetic ability to choose and install good leaders and policies, but for their ability to remove bad ones that are already there.
p.213 if something is permitted by the laws of physics, then the only thing that can prevent it from being technologically possible is not knowing how.
there is a traditional optimistic story that runs as follows. Our hero is a prisoner who has been sentenced to death by a tyrannical king, but gains a reprieve by promising to teach the king's favourite horse to talk within a year.
JFK: We choose to go to the moon.
None of that prevented rational people from forming the expectation that the mission could succeed. This expectation was not a judgement of probability: no one could predict how far the project would get, because it depended on solutions not yet discovered to problems not yet known. When people were being persuaded to work on the project – and to vote for it, and so on – they were being persuaded that our being confined to one planet was an evil, that exploring the universe was a good, that the Earth's gravitational field was not a barrier but merely a problem, and that overcoming it and all the other problems involved in the project was only a matter of knowing how, and that the nature of the problems made that moment the right one to try to solve them. Probabilities and prophecies were not needed in that argument.
Pessimism has been endemic in almost every society throughout history.
An optimistic civilization is open and not afraid to innovate, and is based on traditions of criticism. Its institutions keep improving, and the most important knowledge they embody is knowledge of how to detect and eliminate errors. There may have been many short-lived enlightenments in history.
As far as I know, no historian has investigated the history of optimism, but my guess is that whenever it has emerged in a civilisation there has been a mini-enlightenment.
[Athens flourished with optimism and creativity, Sparta was austere and militaristic; Sparta defeated Athens [PH - why? how?]]
Medici family promoted optimism in Florence.
## Deutsch on the Dilemma podcast
Reason for moral realism: all the arguments against realism are the same in every field: radical skepticism.
Everything is behind a structure of explanation.
Intuitionism in mathematics etc.: it's all the same skeptical mistake. The specific mistake in science is empiricism. In morality it's the same, but you might call it scientism or physics envy. Popper would say they're all striving for foundations or authority, and they assume that without this, ideas are worthless. I reject all authority when it comes to ideas, and all foundations.
Good explanations are hard to vary. Seasons caused by the will of a god vs the Earth's tilt on its axis.
A person is a mind that can construct explanations. Software.
Optimism: all evil is due to lack of knowledge.
Should we allow two deaf parents to deliberately conceive a deaf child? Imagine this were widespread practice: then we could ask the children and see if they appreciate or resent it, to see whether the practice usually helps or causes harm. Are the parents brainwashing the kid, or are they merely bringing him up in their culture? Is their culture benign or evil? All these questions have answers, though they may be hard to find.
Mondays, Wednesdays and Fridays I think that the only moral law is not to destroy the means of correcting errors. The rest of the time I distrust that as foundationalism.
While you can't derive an ought from an is, there are many explanatory connections. Terrorists: you need to explain why you want to kill these people rather than those people, and there will be factual errors.
Preserving the institutions that correct errors is more important than getting it right first time.
What do you think of conventional frameworks like utilitarianism? They're all wrong because they are foundationalist. They're all trying to set up a thing from which you can deduce all morality, perhaps only in principle because it's too hard to do the calculation. Are they good approximations? Well, I think some are good in some situations, but I think they're better regarded not as approximate foundations but as modes of criticism. E.g. utilitarianism: I think it's silly to say the foundation of morality is the greatest good for the greatest number; for a start it's circular. [...] But if you're planning to do something that benefits you but harms a lot of people, then utilitarianism is a useful mode of criticism. Same with deontology etc. They all work quite well as modes of criticism but are terrible as foundations.
There is a factual way in which we are all equal: we are all capable of creating new explanations.
It all comes down to solving individual problems, and these are in individual minds. I don't have any truck with "the benefit of the universe". The universe doesn't have any opinions, and if it did, that's like saying you should do what God says.
-----
## Deutsch TED interview
The Fabric of Reality argued that subject divisions are bs: it's all the same thing, the search for explanation. The book was a major help in getting Chris Anderson to quit his job and start working on TED.
Deutsch is totally against the "humans as chemical scum on a planet" view.
Knowledge is information that has causal power.
After the first billion years or so of the universe, nothing new happened for nearly 13.8 billion years. Now we are at a phase change which changes the whole nature of the cosmos. For example, for the first ~14 billion years the rule was that big things affect small things, and that small things do not affect big things much. After the phase change everything is determined by small things: small things affect large things, and the determining factor is not mass or energy or power, but information, and specifically knowledge: information that has causal power.
Human knowledge is different from the biosphere's because we can make models that not only predict what happens but also explain why. Biological knowledge is only what works. The camera is shaped by the laws of optics. Evolutionary knowledge is limited by the fact that every variation, every generation, has to be viable. Popper: **Humans can let our ideas die in our place.** We can go through a sequence of ideas that aren't viable on the way from one viable idea to the next. Memes can evolve thousands of times faster than genes. Memes involve understanding, not just copying.
I use understanding and explanation almost interchangeably. Explains what might happen in terms of what can happen.
Knowledge doesn't have to be true, it just has to contain enough truth to be useful.
We don't understand what it takes to have a culture that enables error correction. Deutsch thinks it tried to take off a couple of times before the Enlightenment (ancient Greece, what others?). This may suggest fragility, but we don't know.
It's always been true since the Renaissance or the Enlightenment that most people haven't appreciated it; most people have values that kind of contradict it a bit, so it has survived by being stable in its own terms. The equivalent of the scientific revolution in politics is liberal democracy.
I think it's extremely significant that not one of the Anglosphere countries fell to a dictatorship.
Feynman: there's plenty of room at the bottom. There are more orders of magnitude to explore in the microscopic world than in the macroscopic world across the galaxy. Once you have a civilization that is capable of interstellar travel and of extending down to the microscopic, it is not obvious what they will think is best. The greatest constraint of living in the galaxy is not its size, it's time: the fact that you can't have a coherent culture whose parts are 100 million light years from each other.
I'm sure x-risk is not inevitable. It's not a law of physics. The knowledge of how to defend civilisation is also a form of knowledge; if we fail to create that knowledge we're doomed. The possibility is there to create it, for the good guys to outpace the bad guys. The bad guys are enemies of civilisation, therefore they are wrong, therefore they can't tolerate a tradition of criticism. That makes them slower. We have a moral obligation to stay ahead of them.
If could plant a meme in everyone's head, what would it be?
Optimism: all evils are due to lack of knowledge.
John Wheeler: the point is to make as many mistakes as possible as quickly as possible.
### Some random podcast
Enlightenment: the key thing is to set up conditions for the increase of knowledge. Popper calls it a tradition of criticism.
Observation is theory laden. We generate theory first, then test observations against it. Not that we observe lots of As being Bs and then induce.
Popper: the doctrine of the truth as manifest is the source of all tyranny. Infallibilism leads to tyranny.
If you're going to use force on someone because you have failed to persuade them, you need to tell yourself why that's ok. Why you have the right.
Traditional political philosophy assumed the question is: who should rule? And how do we get them into power? Try instead: given that rulers will make mistakes, how do we correct those mistakes without violence?
A rationalist is someone who would rather not get his way because he has failed to convince than to get his way by force.
You don't need nuclear weapons to destroy civilisations. Genghis Khan did a pretty good job.
### 2021-12-21 Beginning of Infinity re-read
p.458 Some explanations do have reach into the distant future, far beyond the horizons that make most other things unpredictable.
Deutsch is really worried about the way mini-enlightenments got snuffed out.
Knowledge is information that has causal power.
Deutsch's big framework: explanations have causal power.
Inductivism: the idea that scientific theories are obtained by generalising from experience, and that they become more likely the more they are confirmed by observation.
The Enlightenment was all about seeking knowledge through a tradition of criticism, seeking good explanations instead of deferring to authority.
The creation of new knowledge makes things unpredictable.
Experience is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed.
How can knowledge of what has not been experienced possibly be ‘derived’ from what has?
The conventional wisdom was that the key is repetition: if one repeatedly has similar experiences under similar circumstances, then one is supposed to ‘extrapolate’ or ‘generalize’ that pattern and predict that it will continue. For instance, why do we expect the sun to rise tomorrow morning? Because in the past (so the argument goes) we have seen it do so whenever we have looked at the morning sky.
[...]
one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.
[...]
the Royal Society (one of the earliest scientific academies, founded in London in 1660) took as its motto ‘Nullius in verba’, which means something like ‘Take no one’s word for it.’
[...]
What was needed for the sustained, rapid growth of knowledge was a tradition of criticism. Before the Enlightenment, that was a very rare sort of tradition: usually the whole point of a tradition was to keep things the same.
Good explanations have predictions that are hard to vary.
[...]
Chapter 9. Optimism
Both the future of civilization and the outcome of a game of Russian roulette are unpredictable, but in different senses and for entirely unrelated reasons. Russian roulette is merely random. Although we cannot predict the outcome, we do know what the possible outcomes are, and the probability of each, provided that the rules of the game are obeyed. The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created. Hence the possible outcomes are not yet known, let alone their probabilities. The growth of knowledge cannot change that fact. On the contrary, it contributes strongly to it: the ability of scientific theories to predict the future depends on the reach of their explanations, but no explanation has enough reach to predict the content of its own successors – or their effects, or those of other ideas that have not yet been thought of. Just as no one in 1900 could have foreseen the consequences of innovations made during the twentieth century – including whole new fields such as nuclear physics, computer science and biotechnology – so our own future will be shaped by knowledge that we do not yet have. We cannot even predict most of the problems that we shall encounter, or most of the opportunities to solve them, let alone the solutions and attempted solutions and how they will affect events. People in 1900 did not consider the internet or nuclear power unlikely: they did not conceive of them at all.
No good explanation can predict the outcome, or the probability of an outcome, of a phenomenon whose course is going to be significantly affected by the creation of new knowledge. This is a fundamental limitation on the reach of scientific prediction, and, when planning for the future, it is vital to come to terms with it.
[...]
Following Popper, I shall use the term prediction for conclusions about future events that follow from good explanations, and prophecy for anything that purports to know what is not yet knowable. Trying to know the unknowable leads inexorably to error and self-deception. Among other things, it creates a bias towards pessimism.
[...]
They all thought they were making sober predictions based on the best knowledge available to them. In reality they were all allowing themselves to be misled by the ineluctable fact of the human condition that we do not yet know what we have not yet discovered. Neither Malthus nor Rees intended to prophesy. They were warning that unless we solve certain problems in time, we are doomed. But that has always been true, and always will be. Problems are inevitable. As I said, many civilizations have fallen.
[...]
Political philosophy: "Who should rule?" is a bad question. A better one is: "How can we improve our ability to correct errors?" How can we rid ourselves of bad rulers without violence? Build up around the assumption of error.
[...]
p.212 Whenever we try to improve things and fail, it is not because the spiteful (or unfathomably benevolent) gods are thwarting us or punishing us for trying, or because we have reached a limit on the capacity of reason to make improvements, or because it is best that we fail, but always because we did not know enough, in time.
[...]
Pessimism has been endemic in almost every society throughout history.
[...]
For example, the philosopher Roger Bacon (1214–94) is noted for rejecting dogma, advocating observation as a way of discovering the truth (albeit by ‘induction’), and making several scientific discoveries. He foresaw the invention of microscopes, telescopes, self-powered vehicles and flying machines – and that mathematics would be a key to future scientific discoveries. He was thus an optimist. But he was not part of any tradition of criticism, and so his optimism died with him.