From Discord.
Critical Rationalist:
I’m new to this app. Someone recommended that I come here. I am pursuing a master’s degree in philosophy. My undergraduate degree was in psychology (concentration in applied theory and research). I would count myself as a Neo-Popperian (which should be unsurprising given my username). I look forward to tuning into the conversations you guys have.
curi:
What’s a Neo-Popperian?
Critical Rationalist:
Neo just means new or modified. It’s a shorthand way of saying “Popperian with some caveats”
Critical Rationalist:
Karl Popper influenced my epistemology more than any thinker, but I don’t think he was right about everything
curi:
What was he wrong about?
Critical Rationalist:
I think that the demarcation problem (insofar as it is a problem at all) is not best solved by a single criterion. Insofar as there is a correct definition of a term (like “science”), its definition will be cashed out in terms of family resemblance.
Critical Rationalist:
That’s probably my biggest disagreement with Popper. In Popperian fashion, I welcome criticism.
Critical Rationalist:
(I’m also happy to explain what I said with concrete examples)
Freeze:
What do you think of Popper's political philosophy?
JustinCEO:
@Critical Rationalist what do you think of Popper's critical preference idea
Critical Rationalist:
@JustinCEO @Freeze Very much on board with both his political philosophy and critical preference
curi:
Hi. Have you seen much of my stuff? I’m an Objectivist.
Critical Rationalist:
No, I haven’t
Critical Rationalist:
Do you have a blog or something?
Critical Rationalist:
(I know what objectivism is though)
curi:
Popper didn’t learn econ or give counter arguments but disagreed with free market minimal govt
curi:
How’d you find this server?
Critical Rationalist:
Someone recommended it to me
Critical Rationalist:
I met them at a party actually
curi:
https://elliottemple.com
curi:
Have you read Deutsch?
Critical Rationalist:
I see that you’ve talked with David Deutsch!
Critical Rationalist:
Yes! I love Deutsch.
Critical Rationalist:
He has never made explicit his ethical commitments, other than the fact that he is a) a realist, and b) not a utilitarian.
Critical Rationalist:
(Not in what I’ve read)
curi:
DD was an Ayn Rand fan and libertarian. He favors capitalism, individualism, minimal govt or anarchism. I got those ideas from him and his discussion community (which this is a continuation of, we had IRC back then) initially.
Critical Rationalist:
Well, no one is perfect.
curi:
What do you mean?
Critical Rationalist:
Sorry, that was a bad attempt at humour.
Misconceptions:
imagine this scenario. a bunch of kids are playing. 1 kid is mean to the others. so the other kids get away from him. the alone kid cries because he's now alone and he wants to play with the rest of the kids. the parent hears the crying of the alone kid and he learns about what happened. he doesn't hear about the part where that kid was being mean though. and the parent decides that the other kids have to include the alone kid. is this utilitarian ethics in action?
Critical Rationalist:
I have immense respect for DD. He was my introduction to Popperian thought. But I am not a Randian.
curi:
Is there a written criticism you think is good?
Critical Rationalist:
Of Randianism?
Critical Rationalist:
None that I’ve read
Misconceptions:
That action is not optimific. It leads to lower overall happiness, the kid getting further bullied, and the other kids not enjoying his company. Not utilitarian
curi:
of Objectivism. the term "Randianism" is disrespectful FYI.
Critical Rationalist:
Sorry I knew the term objectivism, but was unaware that Randianism was viewed as a pejorative
curi:
np
Misconceptions:
What is wrong with Randian? is Popperian bad too?
curi:
Rand didn't want her name used that way
curi:
Is there something you think would change my mind if I read it?
Critical Rationalist:
I’ve never read any criticism of Rand
Critical Rationalist:
I’ll go further
curi:
why disagree then?
Critical Rationalist:
I actually think egoism (a family of ethical theories of which objectivism is a species) is perfectly defensible
Critical Rationalist:
I think that actions which maximize your own welfare can be called genuinely good.
Critical Rationalist:
Actions which maximize the welfare of others (even when they conflict with your own) can also be called genuinely good
Critical Rationalist:
How do you decide between the two axioms when they conflict (egoism and utilitarianism)? Henry Sidgwick says that although they agree in most cases, there is no rational standard for deciding between them when they conflict.
Misconceptions:
Is your claim that one must not disagree with theories until one has criticism of it? @curi
curi:
Why else would one disagree?
Misconceptions:
There are infinite many theories, you agree with all of them?
curi:
no
Misconceptions:
Henry Sidgwick says
Why should we care what he says?
Critical Rationalist:
We shouldn’t
Misconceptions:
so why bring it up?
Critical Rationalist:
I’m giving credit to where I got this idea from.
curi:
Is there a conflict you have in mind?
Critical Rationalist:
Do I give money to life-saving charities? That’s one salient example.
curi:
Like cancer research?
curi:
Or like handing out fresh water in africa? or what?
Critical Rationalist:
Like the latter. The case I have in mind is the Against Malaria Foundation. They make bednets that save lives inexpensively.
Misconceptions:
@Critical Rationalist btw there's multiple Utilitarianism versions. Not all are about GHP.
Critical Rationalist:
Yes. Eg preference satisfaction
Critical Rationalist:
I’m defending the version that is a) most well known and b) the one I agree with
curi:
I think Africa's problems are political and that kind of charity is like pouring water into a leaky bucket. The real issues here are more about tyranny, which isn't a conflict between individual or group benefit, it's bad in both ways.
Misconceptions:
You'd think with that name you'd agree with Popper's version of utilitarianism.
curi:
@Misconceptions hi, how'd you find this server?
Critical Rationalist:
I’m not a sycophant. I agree with theorists when their arguments work. I think Popper got some things wrong. Any fallibilist should expect their heroes to get some things wrong.
Misconceptions:
Did I accuse you of being a sycophant?
Critical Rationalist:
Fair enough. My use of the term was not needed.
Critical Rationalist:
I just wanted to clarify that I am not a Popper devotee or something.
Misconceptions:
Hi, @curi Reddit.
curi:
where on reddit?
curi:
Popper made comments advocating TV censorship and a 51% share of all public companies being owned by the government. I think some of his beliefs contradict others so you couldn't agree with him about everything even if you wanted to.
Misconceptions:
Your post against Ollie's ANTIFA vid.
curi:
ah cool. which subreddit was it posted to? i didn't see.
GISTE:
@Critical Rationalist this line of discussion is still pending: curi said: "I think Africa's problems are political and that kind of charity is like pouring water into a leaky bucket. The real issues here are more about tyranny, which isn't a conflict between individual or group benefit, it's bad in both ways."
Misconceptions:
You didn't post it?
Critical Rationalist:
Oh sorry I was typing and forgot to finish
curi:
i don't recall posting it but possibly i did in the past.
Critical Rationalist:
@curi Yes that’s an interesting factual claim. It might turn out that giving to charities in Africa is on the whole counterproductive. But suppose it factually turned out to be the case that on balance, donating to African charities contributed more to their welfare and did NOT detract from their political progress. Philosophically, what would you say then?
curi:
i think you could help more people, a larger amount, by addressing the political problems, rather than donating to the victims who are being victimized on an ongoing basis (which is why they're so poor). and i think that can be done with mutual benefit – more civilized, productive countries to trade with.
Critical Rationalist:
Yes, and you could be right about that factual claim.
Misconceptions:
Dancing around the question tho
Critical Rationalist:
Do you think there are no cases in which self-interest and benefiting others come apart? It would be a miracle if that was true.
curi:
i don't think conflicts of interest exist in any cases. so if you want me to replace this hypothetical with a different one where i agree there's a conflict, i can't do it.
curi:
this is a standard (classical) liberal position which is also held by Objectivism
curi:
my comments re replacing were addressed to @Misconceptions' comment about dancing.
Critical Rationalist:
I’m in a lab that is burning down. I’m dying of a disease x (I’m the only person who has it), and millions of people are dying from disease y. The lab has one room with the cure for disease x (last of its kind). The lab has another room with the cure for disease y. I only have time to go into one room before the building burns down. Which room should I enter?
Misconceptions:
The point I think the KritRAT was making was that Donating your money in this hypothetical scenario does not further your selfish interests but it does help others. What do?
curi:
i also don't think it's necessarily sacrificial to donate to benefit others. if you value life and want to promote life, and combat mosquitos, i don't see anything wrong with that. i think it's a variety of shaping the world more to your liking.
GISTE:
hmm, i thought Misconceptions was talking to Critical Rationalist when he said the dancing comment
Critical Rationalist:
Ok, so what about the case I just described?
Misconceptions:
That sounds like a rejection of egoism. Value life = value other's lives.
curi:
the lab scenario is an emergency situation which is generally a bad way to understand how to live a good life in general in normal situations. i don't have strong opinions about it. i think an egoist can pick either room. you have to choose values to pursue in life. saving millions of people is a good accomplishment for a whole career. one can be happy with that.
Misconceptions:
That sounds like another tango my friend.
Critical Rationalist:
If you define egoism so broadly as to include living in accordance with the values you hold, then it becomes empty. Choosing literally any set of values and acting upon them would count as egoistic so long as you hold the values.
Misconceptions:
I am curious about your real answer regarding the lab situation too mr @curi
Critical Rationalist:
By empty, I mean it is not an alternative to other ethical systems. It doesn’t add new content or help you decide in moral dilemmas.
curi:
i don't accept all values, but i do accept valuing human life – it's a wonderful thing.
Critical Rationalist:
It’s not clear to me then in what sense you’re an egoist
curi:
i'm describing Rand's position
Misconceptions:
ok what door would Rand take?
Critical Rationalist:
If I’m not mistaken, Rand thought that altruism was unethical
curi:
yes, as do i
Critical Rationalist:
At least, altruism for its own sake
Misconceptions:
So Rand and curi would take the self cure.
curi:
no
curi:
have you read Atlas Shrugged?
Critical Rationalist:
If the other cure is not altruistic, then nothing is
Misconceptions:
The Plot Thickens
Misconceptions:
my reading of AS is irrelevant to whether you would take x or y door my good man.
curi:
AS contains a relevant scene
Critical Rationalist:
What counts as altruistic according to you curi?
curi:
i guess you guys would consider John Galt an altruist
Critical Rationalist:
I haven’t read AS, but I’m curious about your take on this dilemma
Misconceptions:
well it seems that if you do not take the self cure, you're sacrificing yourself for the benefit of others
Critical Rationalist:
Literally
Misconceptions:
and you said you would not take the self cure
curi:
if you want to understand the Objectivist way of thinking, this is a bad place to start.
Critical Rationalist:
Curi, you said self interest and benefiting others NEVER conflict
Critical Rationalist:
And I used this to show why that claim is false
Critical Rationalist:
It is very easy to imagine scenarios where they come apart
curi:
do you agree that i'm right about all non-emergency scenarios? we should start with easier cases before harder ones.
curi:
then you will see the main ideas of the theory.
Critical Rationalist:
There probably are cases in the real world where they come apart, but that’s an empirical question not a philosophical question
curi:
and learn something about how to apply them.
Misconceptions:
To be clear, you would not take the self cure right?
Misconceptions:
your position regarding where to start has been noted
curi:
so for example, a common alleged counter-example is two men apply for the same job, and there's just one spot. do you think that's a conflict of interest?
Misconceptions:
I'd like to conclude the lab scenario
Misconceptions:
before we move on
Critical Rationalist:
Curi, I think it is a sign of philosophical skill to be able to apply your philosophy to fresh moral dilemmas, not just to dilemmas that you have practiced dealing with
Misconceptions:
I agree my critical rodent friend.
curi:
i did give you an answer, but if you want to learn about Objectivism you're taking the wrong approach.
Critical Rationalist:
It’s unclear to me how your answer is consistent with egoism
Misconceptions:
curi how is sacrificing yourself to save the lives of others not altruism?
Critical Rationalist:
I think the egoistic answer has to be self cure
curi:
right, so let's talk about how this works in general before trying to apply it to an edge case.
Critical Rationalist:
Or else it is not egoism except in a trivial sense
Critical Rationalist:
Sure, give your explanation of the General case
curi:
so for example, a common alleged counter-example is two men apply for the same job, and there's just one spot. do you think that's a conflict of interest?
Critical Rationalist:
I’ll grant that there isn’t
curi:
why isn't there?
Misconceptions:
I would not have abandoned your lab scenario for a previously practiced scenario so easily
Critical Rationalist:
I could concoct different explanations. eg I would rather live in a society where employers evaluate on merits
Critical Rationalist:
I agree, it is easier to give an account of why self interest and benefiting others converge in those cases
Critical Rationalist:
Misconceptions: I try to be charitable
Misconceptions:
Charity is evil!
Critical Rationalist:
I don’t play debate games, I’m interested in what the other person thinks
Misconceptions:
Get you some bootstraps
Critical Rationalist:
Especially someone who knows David Deutsch personally (that’s very cool byw)
Critical Rationalist:
*btw
curi:
yes, employers evaluating on merits is important. many benefits. and part of the mindset here is wanting good general policies rather than insisting on short term personal benefit in the immediate situation, regardless of overall consequences. right?
curi:
in the lab scenario, i don't see a clear principle (like evaluating job candidates on merit) that would be violated by either choice. yeah dying sucks but we don't have immortality yet anyway and it's a major accomplishment to pursue and helps shape reality more to my (non-arbitrary, i claim) preferences. on the other hand, nothing was specified in the example about me having any obligation to those people. like it isn't my job to save their cure. i don't have a contract making this part of my job duties. i don't know why all these people have allowed their lives to be dependent on this one lab without any backup copies of the info, but it seems unreasonable.
Critical Rationalist:
You say that your preferences for human life are non-arbitrary. Say a bit more about why they are non-arbitrary
curi:
i think promoting and contributing to a beginning of infinity and the growth of knowledge is good. also e.g. i value the kind of society which allows men to live peacefully, cooperate voluntarily, and control nature. is that enough or did you want a different type of info?
Critical Rationalist:
Yes that’s exactly what I want
Critical Rationalist:
Very good. So, you think all of those ends are good and worth pursuing. Furthermore, you think they are good and worth pursuing in cases where they conflict with self-interest. That’s not a problem! I just don’t think you’re really an egoist (but I don’t care much about the terms). You think it is empirically the case that in most cases self-interest and benefiting others converge on the same answer, but in the cases where they don’t, you go for benefiting others
curi:
i didn't say what room i'd pick. and i think by your standards Rand isn't an egoist either. John Galt said he'd kill himself if they threatened Dagny's life (to pressure and control him). he didn't put his own life first no matter what.
Critical Rationalist:
Interesting.
Critical Rationalist:
So yes I don’t care what term we use. Rand would (according to that) not be an egoist in the traditional sense.
Critical Rationalist:
The fact that it is even a question for you problematizes your self-description as an egoist. Maybe you should define egoism
Critical Rationalist:
Ben
Critical Rationalist:
Brb
curi:
I guess you'd also think an egoist in the military must betray his country and comrades if he gets into a very dangerous situation where he thinks that'll (significantly? or even 1%?) improve his odds of personally living?
curi:
whereas i think you can sign up for the military. it's risky but it's an option. and if you do, you should follow general policies like your contract with your employer and your duties to your fellow soldiers to follow military strategy instead of getting them killed. If you don't want to risk your life, don't sign up. but if you do sign up and follow the basic rules you agreed to, it's possible to succeed and have a good life. it's not hopeless. it's a way to make a try for it. so it's ok if you don't have a better option.
GISTE:
i don't recall curi calling himself an egoist
curi:
Egoism is a term used by Objectivism. I consider it an overly fancy word but it's OK. The basic point is the self is very important and valuable, and pursuing self-interest is good. But the point is not to maximize years of life regardless of all other considerations like quality of life and the state of the world.
curi:
If that was the meaning, an egoist would have to get all his groceries delivered to reduce the risk of dying in a car accident.
curi:
I don't know anyone who advocates that. Certainly not Rand.
curi:
Egoism means e.g. that it's not my duty to sacrifice my preferences or values to other people's preferences or values. I should reject that. But it doesn't mean rejecting all values broader than my continued physical existence. An egoist is allowed to care about e.g. colonizing the stars and spend money towards that goal even if he doesn't expect to see it, and even though not spending that money on medical care lowers his life expectancy a little.
curi:
An egoist also may value his model trains above additional medical care.
GISTE:
so traditional egoism is nonsense like how the traditional selfishness concept is nonsense?
curi:
@GISTE take a look at info like https://plato.stanford.edu/entries/egoism/ and see if you can find it saying to maximize life expectancy over all other values
Critical Rationalist:
@curi thank you for the replies
Critical Rationalist:
I really should go to bed now, but I definitely have more to say
Critical Rationalist:
@curi I read through your comments again. If egoism (for you and Rand) only means that pursuing self-interest is good and worth doing, I’ll accept your definition
curi:
Did someone link squirrels yet?
Critical Rationalist:
But someone could still say “I value what’s in the Bible and want to follow it”
Critical Rationalist:
Egoism (in this broad sense) has nothing to say to such a person
JustinCEO:
http://curi.us/1169-morality
Critical Rationalist:
Was the Carlo Elliot dialogue a response to me?
JustinCEO:
It's squirrel thing curi mentioned, and is relevant to morality discussion
curi:
It’s DD and my view rejecting moral foundationalism. Mostly afk.
Critical Rationalist:
Conveniently for y’all I’m not a moral foundationalist
Critical Rationalist:
Does anyone have thoughts on Popper’s solution to the problem of induction? I think it is very compelling. His approach is to accept Hume’s conclusion that it is invalid to draw conclusions about the likelihood of events in the future based on observations of the past. He says that we instead have various competing theories which are criticized and (when applicable) tested. The theories which best survive our attempts at refutation, we tentatively accept (for the time being).
JustinCEO:
I dont think I really got crit of drawing conclusions on past data until someone explained that the reason we expect sun to rise is not cus we've seen it rise a bunch of times but because we have an explanatory model of sunrises. Change the model or some variables in it (cuz eg sun expanding in later stages of being a star or whatever) and your expectation of what will happen changes
Critical Rationalist:
Yes exactly
Critical Rationalist:
The model is what is held up to empirical tests and tentatively accepted in the absence of disconfirmation.
curi:
re egoism, Objectivism is a system. i'm not very interested in terminology, but the overall ideas about how to think about morality, what sort of values are good and bad, what sorts of methods of achieving values are effective and ineffective, etc. when you look at the whole picture here, you find substantial disagreements with most people. the exact nature or starting point of those disagreements is hard to discover because most people don't organize their moral thinking much and don't want to go through the issues point by point (and if they do that, it often changes their view, which complicates finding out what they thought before).
curi:
re solution to induction, i think it's important to talk about how conjectures and refutations is an evolutionary process and evolution is the only known reasonable theory of how new knowledge is created. induction never actually offered a rival theory to evolution. also, although I think Popper's idea is good, and adequate to solve the problem of induction narrowly, i think it's missing some things. specifically the idea of best surviving attempts at refutation is vague and leaves people using a lot of intuition to fill in the gaps.
Critical Rationalist:
@curi I’m not interested in terms either, so that’s a fair response. Does objectivism give us a standard by which to decide between values that people hold? For example, if I (as a utilitarian) value maximizing happiness (everyone’s counts equally), does objectivism have anything to say to me? If so, what?
curi:
yes Objectivism has a lot to say about what values to hold. as does BoI, btw: don't hold values incompatible with error correction, don't hold values incompatible with unbounded progress.
Critical Rationalist:
Well... that sounds more Popperian than objectivist
curi:
i think you misread
Critical Rationalist:
But ok, I’m a utilitarian. I believe in error correction and unbounded progress.
Freeze:
i think objectivism might say don't hold values that sacrifice your preferences for others
Freeze:
because they are counterproductive
Critical Rationalist:
Well, as a utilitarian I sacrifice my happiness for others, but since I want to do that, I suppose in a certain sense I’m not sacrificing my preferences.
Critical Rationalist:
Utilitarianism (and many other ethical systems) seem compatible with @curi’s standards
Freeze:
Jordan
jordancurve:
@Critical Rationalist Any comment as to your alleged misreading of curi's comment on values?
Critical Rationalist:
Where was that alleged?
jordancurve:
https://discordapp.com/channels/304082867384745994/304082867384745994/662830621898571806
Critical Rationalist:
sorry there’s a lot to keep track of
curi:
i'm going to be mostly AFK soon FYI
Critical Rationalist:
Ok sure, I’ll grant that.
Critical Rationalist:
I’ll grant his standards are objectivist
Critical Rationalist:
I maintain that they are compatible with many (maybe most) ethical theories
curi:
my 2 examples were from BoI not Oism
Freeze:
yeah
curi:
they are Oism-compatible though.
Critical Rationalist:
Right that’s what I thought
Critical Rationalist:
Boi=Deutsch
Critical Rationalist:
Anyways, breaks over
jordancurve:
@Critical Rationalist If that's what you thought, then why did you write "that sounds more Popperian than objectivist"?
Critical Rationalist:
I’ll see y’all later
Critical Rationalist:
I count Deutsch as a Popperian (as would he)
Freeze:
i think the misreading allegation had to do with you expecting them to be more oist when curi said them after the BoI part. were you more asking for objectivist values that aren't Popperian or Deutschian?
Critical Rationalist:
But yes Deutschian would have been more accurate
curi:
Oism says the way for individuals or society to get ahead is by the pursuit of individual self-interest in peaceful ways. this is how to help others. trying more directly to help others is broadly (not always) counterproductive and people shouldn't be guilted into it or told it's a moral ideal.
Freeze:
my disagreement isn't about your use of Popperian over Deutschian
jordancurve:
@Critical Rationalist curi said, paraphrased: Boi suggests these values. You replied, "that sounds more Popperian than objectivist". That still looks like a non-sequitur to me, most likely due to a misreading.
curi:
Oism rejects ideas like that the profit motive, or greed, are inherently anti-social or bad for anyone, and rejects seeing the purpose of my life as being to help others instead of to help myself.
JustinCEO:
re: moral ideal, i think someone said earlier (mb @Critical Rationalist ? i'm not sure, correct me if wrong) that the strong form of altruism was rare. but even holding altruism as a moral ideal has a big effect on ppl's thinking
curi:
Oism broadly thinks each person should look out for himself and a few people who play a substantial, valuable role in his life (family, close friends), and take personal responsibility for getting good outcomes for himself, and people should cooperate especially via the economic division of labor and specialization, and also in other voluntary ways (like friendship) when they want to. this is not how most people see life.
JustinCEO:
even if ppl don't actually practice altruism consistently, it still has a (bad) effect on the world
curi:
Oism says e.g. that Bill Gates did more good for the world as microsoft founder/CEO than with his charity efforts afterwards.
Augustine:
Why is that?
curi:
when you trade for mutual benefit, it's hard to screw that up. both sides think they are benefitting. they can make mistakes but it's a good thing similar to solo actions that you think benefit you. and with business you have tools like profit and loss to help you judge what's effective and efficient. when you do charity you lose those mechanisms to help you get good outcomes. it's hard to know what's a good use of resources. it's hard to measure. the recipients can say "sure this is good for me" but it's hard to tell how good it is for them and compare it to alternative uses of resources. the free market system compares resource uses to alternatives and does optimization there.
curi:
and competition between charities for fundraising dollars is a different sort of thing (more marketing based for example) than competition by companies for customers.
Critical Rationalist:
@curi given your description of oism, I think it is an empirical claim not a philosophical one. It might be true (and likely is to a large extent) that self-interest produces more benefit than being altruistic. But that’s a claim for economists and sociologists to confirm or disconfirm.
Critical Rationalist:
I have to go again, but that would be my initial reaction
curi:
Economics is primarily a matter of logic and math, not empirical
Critical Rationalist:
There is behavioral economics, which is more empirical
curi:
That isn’t where Oism gets these ideas
Critical Rationalist:
To the extent that economics is insufficiently empirical, I would just amend my comment to say “it is for better economics to corroborate or disconfirm”
Freeze:
DD:
The whole concept of bias is a misconception. So-called 'biases' are just errors. Thinking is error correction—which biases are not immune to.
Hence patterns of errors in the outcomes of thinking are not explained by biases but by whatever is sabotaging error correction.
Freeze:
I also thought of behavioural economics when you mentioned sociology alongside economists
Freeze:
but I've been questioning that stuff lately
Freeze:
a lot of it seems based on ideas that contradict CR epistemology
Freeze:
in terms of knowledge and how it's created and the role ideas play in minds
Critical Rationalist:
Have to go again unfortunately. I’ll try to return tomorrow
curi:
Bye CR
Critical Rationalist:
@GISTE “do you agree with these 2 interpretations of your view? (1) a headache has inherent negative value and that it's automatically bad. (2) if i have a headache, and choose to not immediately take pain meds because i prefer to continue philosophy discussion for a few more minutes before taking pain meds, that is a sacrifice.”
Critical Rationalist:
Is this a true story?
Critical Rationalist:
But yes that is a sacrifice. If the pleasure derived from philosophy discussion outweighs the headache, then it would be prudent to make the sacrifice
curi:
Do you think all purchases are sacrifices because you give up money?
Critical Rationalist:
In a trivial sense, sure
Critical Rationalist:
But they are worthwhile sacrifices (sometimes)
curi:
In the same sense as what you just said re headache?
Critical Rationalist:
Yes exactly
Critical Rationalist:
Though the pleasure created could be in others or long term
curi:
I think it’s an error to view all action as sacrifice just because some hypothetical other scenario would be superior.
Critical Rationalist:
No I’m not using sacrifice in that sense
Critical Rationalist:
I would say sacrifice is giving up some good for an end
Critical Rationalist:
The end could be such that it makes the sacrifice worth it, or not
GISTE:
Is that ends-justifies-the-means logic?
Critical Rationalist:
Absolutely
curi:
All action involves giving up alternatives
Critical Rationalist:
What else could justify the means?
Critical Rationalist:
In other words, I don’t see how one can show that some means are bad unless they tend to have bad consequences
Critical Rationalist:
People sometimes say “ends justify the means” to defend lying, violence etc
Critical Rationalist:
But those “means” are bad precisely because they have bad consequences
curi:
Busy soon btw.
Critical Rationalist:
No worries
curi:
Not caught up much but:
There are two different ways an idea can be empirical.
1) The idea was inspired by evidence. We used evidence to help develop the idea.
2) The idea makes claims about observable facts, so we could use evidence to test the idea.
The main ideas of economics, as I view it, are neither 1 nor 2. They are about logical and mathematical analysis of abstract, hypothetical situations. The starting point of economics isn't seeing what sort of economies worked well in the past and trying to optimize that. It's theoretical analysis of certain ideas and principles.
Economics is very hard to test because we can't do controlled experiments for most issues. Even if we could test, it's often not the best approach, as DD pointed out: https://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science
Some people try to make economics more empirical. For example, if they want to know about minimum wage, they look at cities, states or countries which created or changed a minimum wage and then look at the results (and sometimes they can find two similar places, and one creates a minimum wage, and one doesn't, and do a comparison). I reject this sort of empirical approach to economics in general. Not 100% useless but generally not much use.
If you want to understand minimum wage, you should consider concepts like supply and demand, and do mathematical calculations to see what they mean in some simplified scenarios.
And when rival economists disagree, the way to resolve this isn't by getting more data. A better approach is to figure out what's different about each of their systems and look for logical errors.
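The kind of simplified supply-and-demand calculation described above can be sketched in a few lines of code. The linear curves and all the numbers here are illustrative assumptions, not real data or anyone's actual model; the point is only to show how a wage floor above the market-clearing price produces a surplus of labor in the simplified scenario.

```python
# Minimal supply/demand sketch of a minimum wage in a simplified scenario.
# The linear curves and parameters are made-up illustrative assumptions.

def labor_demanded(wage):
    # Employers hire fewer workers as the wage rises.
    return max(0.0, 1000 - 30 * wage)

def labor_supplied(wage):
    # More people seek work as the wage rises.
    return max(0.0, 20 * wage)

def clearing_wage(lo=0.0, hi=50.0, steps=10_000):
    # Find the wage where supply meets demand by simple grid search.
    grid = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(grid, key=lambda w: abs(labor_demanded(w) - labor_supplied(w)))

w_star = clearing_wage()      # market-clearing wage (here, 20.0)
w_min = w_star * 1.25         # a binding wage floor above clearing

# At the floor, more labor is offered than demanded: a surplus (unemployment).
surplus = labor_supplied(w_min) - labor_demanded(w_min)
print(f"clearing wage: {w_star:.2f}")
print(f"at floor {w_min:.2f}: labor surplus = {surplus:.0f}")
```

One can then "play with the scenario" as described: change the slopes, add part-time workers, etc., and see how the surplus responds.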
curi:
Applying economics to real world scenarios has various difficulties but can be done to a reasonable approximation. With minimum wage, after figuring out its consequences in a simple scenario, we can play with that scenario. Start adding extra complications and see what changes. E.g. increase or decrease the ratio of workers to employers and see if minimum wage has different results. Or you could add part time workers to your model, or add a simplified stock market, or whatever you think is relevant. That lets you learn about the connections between minimum wage and the other stuff you model.
You can also see how it's a form of price control and follows the general logic of price controls (price maximums cause shortages when low enough to matter; price minimums cause surpluses when high enough to matter – minimum wage causes a surplus of labor (unemployment) by preventing the price of labor from reaching the market clearing price). You can also understand why that is based on simple principles. The principles are things like what a trade is, what the division of labor is, what supply and demand are, what a buyer and a seller are, etc.
For complex real situations, we can see them as similar to an abstract concept – an inexact but pretty good fit – except for e.g. 8 extra factors that we identified as potentially important differences. Then we can consider the effects of each of the factors. And then we can often make some empirical predictions. But if we're wrong, while it can be an error in our economic logic, it's often an error somewhere else, like there was another factor in the real situation, which is important to the result, but which we didn't take into account.
curi:
same issue with Objectivist morality and self-interest. we get conclusions like that by thinking more like this https://elliottemple.com/essays/liberalism rather than by empirical observation.
curi:
Critical Rationalist: If you all believe so much in the power (and easiness) of rational criticism, I would like to see someone defend @curi’s and @Freeze’s original claims which led to this.
which claims by me?
GISTE:
@curi, maybe cr was talking about this https://discordapp.com/channels/304082867384745994/304082867384745994/663051081818963991
curi:
"I just think as a practice it should be less common. I want @curi to rule out the following claim with philosophy: 'everyone would be better off if they were altruistic'."
that claim is too vague to begin criticism.
curi:
it's ambiguous between: each individual would be better off if he did it himself, or everyone as a group would be better off if everyone did it
curi:
it doesn't specify what is and isn't altruistic behavior
curi:
and it doesn't specify what better off means
curi:
also i would expect to use economics in my response and i don't know which economics is accepted or denied.
curi:
like are we accepting the benefits of private property, division of labor, capitalism and trade? or not? if not, what is claimed instead?
curi:
which of the claims about those are errors and why?
curi:
if we accept that stuff, how does altruism interact with it? like are some trades altruistic? which ones?
curi:
@Critical Rationalist
curi:
also if i missed some major point to respond to, let me know, cuz i'm not gonna be reading everything. (this applies to everyone). if you really want my attention you can use curi or FI forums btw. i encourage ppl to do that but some seem to prefer discord without much explanation of why. http://fallibleideas.com/discussion
Critical Rationalist:
@curi
“1) The idea was inspired by evidence. We used evidence to help develop the idea.
2) The idea makes claims about observable facts, so we could use evidence to test the idea.
The main ideas of economics, as I view it, are neither 1 nor 2. They are about logical and mathematical analysis of abstract, hypothetical situations.”
Yes, but those logical models depend on assumptions about the world that are 2. The claim that humans are best approximated as rational self-interested utility maximizers is a claim economists could be wrong about. We might not have evolved to be like that. To the extent that that assumption (the rationality assumption) is violated, economic models will be less than perfect. Surely the point of economic models is to predict real economic behavior. Economic models are not toys for smart people to play with.
Critical Rationalist:
“Some people try to make economics more empirical. For example, if they want to know about minimum wage, they look at cities, states or countries which created or changed a minimum wage and then look at the results (and sometimes they can find two similar places, and one creates a minimum wage, and one doesn't, and do a comparison). I reject this sort of empirical approach to economics in general. Not 100% useless but generally not much use.”
Our disagreement might run deeper than I thought, because that is exactly the sort of economics I’m in favour of. I’m also in favour of abstract mathematical modeling, but if the modeling does not approximate real world exchange of goods, then it is useless. It is, as I said, a toy for smart people to play with. The proof of the pudding is in the eating.
Critical Rationalist:
“If you want to understand minimum wage, you should consider concepts like supply and demand, and do mathematical calculations to see what they mean in some simplified scenarios.”
There is nothing wrong with those concepts, but those mathematical calculations include empirical assumptions about human nature that could be false. If we evolved to NOT be rational self-interested utility maximizers, the equations will just be false (or at least, imperfect approximations).
“And when rival economists disagree, the way to resolve this isn't by getting more data. A better approach is to figure out what's different about each of their systems and look for logical errors.”
Or look for empirical assumptions that are false. Which of the following claims do you disagree with:
1. The goal of economics is to describe and predict actual economic interactions
2. Actual economic interactions are affected by human nature
3. Economic models make assumptions about human nature
4. Those assumptions could be false for accidental reasons about how we happened to evolve
“that claim (that a world of altruists would be better) is too vague to begin criticism.
it's ambiguous between: each individual would be better off if he did it himself, or everyone as a group would be better off if everyone did it”
You as an objectivist believe that it is the case that a world wherein people were altruistic would be a worse world. What did you have in mind when you asserted that? Tell me what you mean by altruistic, and we will select people who fit the description. No matter what description you give, it quickly becomes an empirical question whether people who fit that description interact better and produce more wealth.
curi:
Semi afk. Didn’t read @Critical Rationalist messages yet but I’m thinking we should narrow down the discussion and pick a specific point to focus on and try to reach agreement about. Make sense to you? Topic suggestion?
Critical Rationalist:
@curi I agree
Critical Rationalist:
I just had left a lot that I hadn’t responded to from 3 people so I just did a volley
Critical Rationalist:
If I had to narrow down what I see as my main disagreement with you, it would be this: the exact form that human nature has taken is a contingent fact of evolution. We are a certain way, and we could have easily been different if evolution had gone differently. Given that, we cannot know a priori what human nature is like (contingent facts have to be discovered through empirical testing). Your claims about which ethical systems will produce more wealth or welfare depend on assumptions about human nature. Therefore (given that assumptions about human nature cannot be known a priori, because they are contingent results of evolution), your claims about the effects of ethical systems cannot be known a priori.
Critical Rationalist:
Everything I said above would also apply to economic theories.
jordancurve:
@Critical Rationalist I don't know what you mean by "human nature". The closest meaningful term that comes to mind is "universal knowledge creator", but since you're familiar with Deutsch, I guess you would have used that instead if that's what you meant.
Critical Rationalist:
No. I mean things like how we respond to incentive structures, under what circumstances we will cooperate or not cooperate, what makes people respond tribalistically or not, whether people develop better under strict parenting or permissive parenting
Critical Rationalist:
Those are all relevant to what the impact of different ethical systems will be
jordancurve:
So... human nature = the way people think about various ideas in Western culture today?
jordancurve:
I don't think that's what you mean, but it's again my best guess at something coherent (to me) that roughly matches (maybe) what you're talking about.
Critical Rationalist:
What do you think I mean?
jordancurve:
idk, the closest match I have so far is "the way people think about various ideas in Western culture today"
Critical Rationalist:
If you think it’s incoherent, point out how
Critical Rationalist:
Did I mention or imply western culture?
jordancurve:
I don't understand what you're talking about well enough to criticize it other than for being vague (to me)
Critical Rationalist:
I just listed some traits
jordancurve:
Does traits = ideas?
Critical Rationalist:
It is an open empirical question to what extent humans develop better under strict parenting, for example
Critical Rationalist:
Or... to what extent do we naturally feel empathy for suffering strangers
Critical Rationalist:
Some primates are fairly empathetic
Critical Rationalist:
Others are not
Critical Rationalist:
Which kind are we?
Critical Rationalist:
Open empirical question
Critical Rationalist:
I could give examples like this all day
Critical Rationalist:
And the answers to these questions really matter when we try to design societies
Critical Rationalist:
@curi these examples are relevant to our topic
curi:
i regard my main, important ideas about economics or parenting styles to apply to aliens too, not to be human-specific. do you disagree with that?
Critical Rationalist:
I stand by the idea that economic models can only be true to the extent that their assumptions about human nature are true (eg that humans or aliens are self-interested rational utility-maximizers). Whether or not those assumptions are true is an accidental fact of evolution. There is no law of nature that says humans or aliens must be a certain way. It depends what selection pressures we happened to face.
curi:
i think the relevant assumptions about human nature are very limited. e.g.: intelligence. made of matter. have preferences.
curi:
separate individuals
curi:
no magic
Critical Rationalist:
Well, even those are empirical claims (albeit ones that are so obvious that it is not worth challenging them)
curi:
i'm not saying 100% non-empirical
Critical Rationalist:
Good
curi:
tangentially i actually think the laws of logic, epistemology and computation are all due to the laws of physics, and so are technically empirical matters.
Critical Rationalist:
Do you think the assumptions you listed are premises from which you can deduce logically (ie with no empirical social science data) that egoism works better than altruism in society?
Critical Rationalist:
And are you so confident in this deduction that no amount of empirical social science data could change your mind?
curi:
i probably left out a few premises and i use critical argument in general not strictly deduction, but basically yes.
Critical Rationalist:
Well, I’m afraid you’ll have to spell that out
curi:
big clashes with empirical data would result in me trying to figure out what's going on. lots of the sorts of studies people do today could not change my mind.
Critical Rationalist:
Explain to me the transition from those assumptions to egoism works better
curi:
or i should say, not with the sort of results they actually get. i guess if a minimum wage study found wages went up a trillion times in a city (after inflation adjustments) i'd start investigating wtf happened there.
Critical Rationalist:
Do you mean a trillion fold or a trillion times in a row?
curi:
fold
Critical Rationalist:
Your critical argument is so powerful that you need a trillionfold increase in wages to even consider that your argument is wrong?
curi:
that was an example not a minimum
Critical Rationalist:
What would the minimum be
Critical Rationalist:
Ballpark
Critical Rationalist:
Although frankly
Critical Rationalist:
To me
Critical Rationalist:
What matters more is not the size of the increase
curi:
varies heavily by context. just if something really unusual happened, which does not appear to be explainable by any of the typical factors, i'd be curious what caused it.
Critical Rationalist:
But the number of replications
Critical Rationalist:
If dozens of different natural experiments were done (ie neighbouring states or provinces with minimum wage increases) and all of them found a particular result, that would count more than one natural experiment with a huge effect size
curi:
if they all got 10% wage increases it'd mean nothing to me
curi:
but a trillion percent increase is very hard to explain by any explanations i already know of
Critical Rationalist:
But if it is just one natural experiment
Critical Rationalist:
It could be so many other factors
Critical Rationalist:
Replications are (rightly) much more impressive to social scientists than single studies with big effects
Critical Rationalist:
It is easy to get big effects by chance with a single study
curi:
you're speaking general rules of thumb. i'm not debating that.
Critical Rationalist:
It is much harder to get small effects that replicate really well (and btw, 10% wage increase is huge)
curi:
i understand what you're saying
Critical Rationalist:
Ok, I want you to spell out this critical argument
Critical Rationalist:
Because... you’re hypothetically willing to discount dozens of replicated natural experiments on the basis of this argument
Critical Rationalist:
It better be airtight
curi:
do you have an opinion of minimum wage laws? do you know much econ? is it a good topic to use? may afk any time btw
Critical Rationalist:
Well, I guess I originally had in mind the argument that egoism makes society better
curi:
my arguments re egoism involve econ, that isn't a separate topic
Critical Rationalist:
I figured they’d be related
Critical Rationalist:
Well, I would like to see it spelled out
Critical Rationalist:
I suspect I’ll be able to follow without a technical understanding of Econ
curi:
ok. just to know where to start, what is your current view on min wage?
curi:
yeah my econ arguments aren't especially technical
Critical Rationalist:
Oh I’m very open minded about this
Critical Rationalist:
There are some natural experiments of the sort I’m describing that indicate min wage increases employment
Critical Rationalist:
But they are few in number
Critical Rationalist:
I accept that the models generally predict the opposite
Critical Rationalist:
I’m not here to defend any particular view of economics
curi:
ok
Critical Rationalist:
I’m not even attacking the idea that egoism harms society
Freeze:
around here was a minimum wage discussion between Andy and curi that was interesting: http://curi.us/2145-open-discussion-economics#10988
Critical Rationalist:
I’m attacking the idea that the claim that “egoism helps society” can be known a priori
curi:
to be clear, my claim: not strictly a priori, but approximately. we don't need to do empirical studies about it, and it doesn't depend on parochial details like that our planet has oil or trees on it.
Critical Rationalist:
Not those parochial details
Critical Rationalist:
But details about the kind of creatures humans are
Critical Rationalist:
How empathetic are we
Critical Rationalist:
How rational are we
Critical Rationalist:
Do we engage in systematic errors of reasoning
Critical Rationalist:
How selfish are we under normal conditions
Critical Rationalist:
(not “how selfish should we be for optimal results”)
Critical Rationalist:
We are primates. The product of an unguided process. It really matters what kind of creatures we are.
curi:
yeah, my arguments don't use claims about those things as premises in the usual sense. however, i do have some claims about the irrelevance of standard claims along those lines.
jordancurve:
I regard people's degree of empathy and rationality as a product of the ideas they hold, not as some kind of immutable property of humans.
jordancurve:
Contra "the kind of creatures humans are"
curi:
yeah that. it's part of the universal knowledge creator view of BoI.
Critical Rationalist:
The extent to which empathy is caused by their ideas is a question of psychology and neuroscience
Critical Rationalist:
In fact, I think there is good reason to think that most of our responses are the result of automatic unconscious processing
Critical Rationalist:
But even if you don’t agree with that, how can you rule it out? It is certainly possible that unconscious automatic processing (NOT ideas) leads to empathy. How can you rule that possibility out?
Critical Rationalist:
How can you rule out that empathy is not in the non-idea part of unconscious processing
curi:
This internet is cutting out. The quick outline is you do epistemology first and then use that to evaluate models [of] minds. I’ll give some details but not today.
Critical Rationalist:
I definitely want that spelled out when curi comes back
Critical Rationalist:
Our minds could have evolved many different ways
Critical Rationalist:
Evolution is a contingent process, with lots of random events and shifting selection pressures
Critical Rationalist:
There is no way to sit on your armchair and figure out how evolution happened
Critical Rationalist:
And our minds are products of evolution
jordancurve:
Empathy involves understanding other people. If our ability to empathize were limited by non-universal hardware (which I take to follow from the hypothetical that empathy is part of "the non-idea part of unconscious processing"), then there could exist situations in which it would be impossible for us to understand the other person enough to empathize with them. This would contradict the unbounded reach of human understanding that is argued for in The Beginning of Infinity. Therefore our ability to empathize is not controlled by non-universal hardware.
jordancurve:
Or at least, the final sentence follows unless there's some other objection I didn't think of, which is quite possible. 🙂
Critical Rationalist:
Ok, maybe there are some situations in which our current empathetic capacities (which we’ll suppose are constituted of non-universal hardware) cannot empathize with others
Critical Rationalist:
But maybe our rational capacities do have the unbounded character Deutsch speaks of. I’m willing to grant that
Critical Rationalist:
But I see no contradiction between supposing that a) empathy is non-universal and b) rationality is unbounded
curi:
Do you think you understand and agree with what BoI says about universality and jump to universality?
jordancurve:
To the extent that empathy is a matter of ideas, any hard-wired limitation on human empathy contradicts the universality of human thought argued for in BoI. @Critical Rationalist
Critical Rationalist:
The claim that empathy is a matter of ideas is precisely what I’m challenging
Critical Rationalist:
I have not read Boi in its entirety. The universality chapter was one of the ones I skimmed
jordancurve:
If you're looking for things to argue with or learn about, curi has collected a list of unrefuted and potentially controversial ideas here: http://curi.us/2238-potential-debate-topics
Critical Rationalist:
@jordancurve I went through that page and identified around 50 claims. I disagree with around 35 of them (quite strongly in most cases)
curi:
Most of these things don’t have an explicit Popper view; you have to apply CR principles
Critical Rationalist:
@curi If you know which claims on your list are DDs views, I’d be interested in knowing
Critical Rationalist:
These are my core commitments, and the thinkers who influenced me:
Critical Rationalist:
Critical rationalism* (epistemology): Karl Popper, David Deutsch, Alex Rosenberg (helpful critic)
Utilitarianism, moral realism* (ethics): Henry Sidgwick, Joshua Greene, Peter Singer
Metaphysical naturalism (metaphysics): Sean Carroll, Dan Dennett, Alex Rosenberg
Social democracy, centre-leftism (politics): Karl Popper, Noam Chomsky, Thomas Sowell (helpful critic)
Compatibilism (free will): Dan Dennett, David Hume, Giulio Tononi
Panprotopsychism (consciousness): David Chalmers, Christopher Koch, Giulio Tononi
Evolutionary psychology* (human nature): David Buss, Steven Pinker, David Buller (helpful critic)
* with caveats
curi:
Most are. Is there a particular thing you’re curious about?
Critical Rationalist:
Trump, romance, global warming
curi:
No, yes, yes
Critical Rationalist:
Global warming... are you sure about that?
curi:
Yes
Critical Rationalist:
Because I seem to remember hearing him say in a ted talk that the right response is to trust the experts
Critical Rationalist:
In this context
curi:
He was trying to be diplomatic and choose words very exactly to not literally lie
Critical Rationalist:
Has DD ever been married?
curi:
I don’t discuss my personal life let alone his
Critical Rationalist:
Haha fair enough
Critical Rationalist:
Anyways, there is obviously lots to talk about
Critical Rationalist:
I will probably have to push away in a week or so when my next semester starts
Critical Rationalist:
But this will be a looming temptation
curi:
Re romance there was an Autonomy Respecting Relationships forum
Critical Rationalist:
Next semester I’m working on finishing my MA in philosophy, but I’ll also be volunteering as a research assistant for that horrid discipline of psychology
Critical Rationalist:
😉
curi:
DD supported Bush but has been gradually shifting more politically left
Critical Rationalist:
I think Popper is left wing to a first approximation
curi:
Yeah but not far left like Hillary, Bernie, SJWs
Critical Rationalist:
Hillary is left in your book?
curi:
Yes!?
Critical Rationalist:
*far left??
curi:
Yes she is an Alinskyite who called a hundred million Americans deplorables
Critical Rationalist:
She’s centrist even by the standards of the Democratic Party
Critical Rationalist:
And by international standards, the democrats themselves are quite centrist
Critical Rationalist:
Bernie, Warren, the squad, they are squarely in the left
Critical Rationalist:
But they’re a minority in the dems
Critical Rationalist:
In terms of Hillary’s concrete policy proposals, she’s quite centrist
curi:
I don’t agree
Critical Rationalist:
Foreign policy she has a long history of being hawkish (arguably Center right)
Critical Rationalist:
Calling 100 million Americans deplorables is elitist and dismissive, but not leftist
curi:
She did it because she’s far enough left of them to hate them
Critical Rationalist:
How do you know that’s why she said it?
Critical Rationalist:
Btw just to be clear I’m not a Hillary fan
Critical Rationalist:
I’m just a little surprised
curi:
I have read a lot of political info that you probably haven’t
curi:
Leads to perspective differences
Critical Rationalist:
That’s... not a good way to engage in conversation
curi:
? It shouldn’t be surprising to reach significantly different conclusions based on different info
Critical Rationalist:
I might have been reading more into that comment than was there
curi:
Just on phone not giving details. Around more tomorrow probably
curi:
Almost done traveling
Critical Rationalist:
Ok @curi, here is one issue from the list of debate topics: genes have no direct influence over our intelligence or personalities. That is an empirical conjecture. As Popperians, what do we do when we make empirical conjectures? We try to test them. If genes had no influence over those traits, then people who share all of their genes but none of their environment should not be similar. Identical twins raised in separate adoptive families fit this description. They are in fact massively similar: in terms of IQ scores and personality tests (which I am sure you’re skeptical of), but also behavioral measures: how much education they get, income, even political values (yes, really). Just go to Google Scholar and look up “heritability estimate twin studies” and then any trait. These heritability estimates are derived from the kind of twin and adoption studies that I’m describing.
Critical Rationalist:
To make this concrete, suppose John and Bob are identical twins raised in separate families. They would be similar in terms of cognitive ability (as measured by IQ tests), political beliefs (though of course not 100% identical), and measurable behaviors. Get massive samples of “Johns and Bobs”, and you find similarities like this replicate well. What is your explanation for this?
curi:
While I have some empirical comments on that issue (e.g. re low data quality), I think the important issues are primarily theoretical. We need a complex theoretical framework with which to interpret the data. We need models of how genes and minds work, explanations of causal mechanisms, rival ideas, criticism, etc. Popper says observation is theory laden, and fairly often there is a lot of theory involved, a lot of background knowledge that makes some difference.
curi:
So e.g. I think the theory points in http://bactra.org/weblog/520.html are important to interpreting the data correctly. They explain e.g. what "heritability" is. One needs an understanding of that to know what to make of the data. They also explain in general some limitations of correlations and statistics.
Critical Rationalist:
Well, on the data quality issue, the findings of behavioral genetics are VERY well-replicated. See https://journals.sagepub.com/doi/pdf/10.1177/1745691615617439
Critical Rationalist:
I've taught AP Psychology (which contains a chapter on heritability and individual differences) several times, and took psychology statistics courses during my undergrad. Heritability has a precise meaning: the percentage of population variance in a trait that is caused by genetic differences. For example, people in the population differ on height (i.e. height is variable). What percentage of this variability is due to genetic differences? Around 90%. That means 90% of the differences between people are due to genes. We can estimate this with twin and adoption studies.
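The twin-study estimation being described is often done, in its simplest classical form, with Falconer's formula: heritability is estimated as twice the difference between the identical-twin and fraternal-twin trait correlations. A minimal sketch, with made-up illustrative correlation values (not real data from any study):

```python
# Falconer's formula: a classic back-of-envelope heritability estimate
# from twin correlations, h^2 = 2 * (r_MZ - r_DZ), where r_MZ is the
# trait correlation for identical (monozygotic) twins and r_DZ for
# fraternal (dizygotic) twins. The numbers below are hypothetical.

def falconer_h2(r_mz, r_dz):
    """Estimate heritability from twin correlations under the ACE model."""
    return 2 * (r_mz - r_dz)

r_mz, r_dz = 0.85, 0.45      # hypothetical twin correlations for some trait
h2 = falconer_h2(r_mz, r_dz)
shared_env = r_mz - h2       # c^2: shared-environment share under same model
print(f"h^2 estimate: {h2:.2f}, shared environment: {shared_env:.2f}")
```

Note the formula only partitions statistical variance under strong simplifying assumptions; it is neutral on the causal-interpretation dispute that follows.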
curi:
the percentage of population variance in a trait that is caused by genetic differences.
that is not the meaning.
Critical Rationalist:
So, if your article has a different account of heritability than the one I've described, I can say with some confidence that it is at odds with contemporary behavioral genetics. I've read summaries of the literature from Eric Turkheimer, Steven Pinker, and the article above (which was written by 4 leading experts; they summarized dozens of studies).
Critical Rationalist:
Oh it isn't? Please give me the definition.
curi:
the article is by an expert geneticist FYI
curi:
To summarize: Heritability is a technical measure of how much of the variance in a quantitative trait (such as IQ) is associated with genetic differences, in a population with a certain distribution of genotypes and environments. Under some very strong simplifying assumptions, quantitative geneticists use it to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions. If, despite this, one does want to find out the heritability of IQ for some human population, the fact that the simplifying assumptions I mentioned are clearly false in this case means that existing estimates are unreliable, and probably too high, maybe much too high.
curi:
the term "associated with" is not, and does not mean, caused by
JustinCEO:
"associated with" is more like "correlated with" if my understanding is correct
curi:
i've mostly looked at the actual literature instead of summaries of the literature FYI. i think this is a better method.
Critical Rationalist:
I actually agree with that definition. But the best explanation of the pattern of associations is that the genes are playing a causal role. This is not just my view. Here is a quote from an article from Nature (written by leading experts): "IQ heritability, the portion of a population's IQ variability attributable to the effects of genes"
Critical Rationalist:
https://www.nature.com/articles/41319
Critical Rationalist:
But yes, the data are logically compatible with other causal explanations.
GISTE:
i guess most people (including most "experts") agree with you, but that doesn't mean that's the right position. @Critical Rationalist
curi:
your ideas about what is a good explanation in a particular case are not a matter of heritability. they are something else.
Critical Rationalist:
@curi What is your rival theory for why identical twins raised apart are similar on every trait we can measure.
Critical Rationalist:
@GISTE He was citing the definition given by a geneticist. I agree with everything in the definition, but it was incomplete.
Critical Rationalist:
Or at least, it is compatible with a causal explanation. The causal explanation is the theory which (I would contend) best survives theoretical criticism.
Critical Rationalist:
If someone disagrees, they better offer a better theory.
GISTE:
@Critical Rationalist i was referencing this: "But the best explanation of the pattern of associations is that the genes are playing a causal role. This is not just my view."
Critical Rationalist:
It is true that that explanation is not just my view, but I am willing to defend it on its own terms.
Critical Rationalist:
The fact that experts believe it does not make it true.
Critical Rationalist:
Here is my explanation for why identical twins raised apart are similar for psychological traits: genes influence them. Does someone have an alternative explanation?
curi:
i don't agree with your take on the dataset, but setting that aside the basic explanation is gene-environment interactions, e.g. a gene for height can be correlated with basketball skill but it doesn't provide basketball skill, that isn't the kind of thing it does.
betterbylearning:
@Critical Rationalist I find it easiest to think about this matter of genetic causes by way of example. Suppose our culture regards red-haired people as volatile, easily angered and less rational personalities. And so when people, generally, encounter a red-haired child they treat him or her differently from other children. They try to explain stuff less and invoke violence / control over redheads more quickly. So then someone comes along and does a twin study. They find that, in fact, genes are associated with adults who are less rational and more prone to violence. But it could be (likely is) that genes cause red hair, red hair causes cultural mistreatment, and cultural mistreatment causes less rationality and more violence. Not that genes directly cause less rationality and more violence. If the culture changed, the result would change without the genes changing at all.
Critical Rationalist:
"i don't agree with your take on the dataset" Please be more specific. Are you denying that identical twins raised apart have similar IQs, similar personalities, etc.?
curi:
i think you're overstating that.
Critical Rationalist:
They're much more similar than strangers, but not 100% the same.
Critical Rationalist:
I can give you precise numbers if you want.
Critical Rationalist:
I'm still waiting for an alternative explanation.
curi:
the basic explanation is gene-environment interactions, e.g. a gene for height can be correlated with basketball skill but it doesn't provide basketball skill, that isn't the kind of thing it does.
betterbylearning:
@Critical Rationalist I intended to suggest an alternative explanation via my example. There could be some trait genes cause, which people culturally decide means they should treat people differently. The different treatment then causes outcomes like IQ (or basketball skill).
curi:
DD's example is an infant smiling gene, which causes infants to smile more and does nothing later. This could end up associated with all sorts of stuff because it leads to different treatment by parents in our culture.
Critical Rationalist:
Yes, these would all (for me) count as ways that the genes can cause human differences.
curi:
a gene which causes infant smiling is quite different than a gene which causes intelligence, right?
Critical Rationalist:
Ok, so now you want to have a specific empirical discussion of HOW genes cause intelligence.
Critical Rationalist:
Maybe they do so by structuring the brain differently
Critical Rationalist:
Maybe they cause more height
curi:
no, i don't want to discuss empirical matters, i want to discuss how to view a simplified example
betterbylearning:
I think it comes down to what problem you're trying to solve with the "genes cause" explanation.
Critical Rationalist:
maybe they cause smiling (which in turn causes more attention)
curi:
suppose by premise it's the smiling thing. that is very different than a brain structure gene right? worth knowing the difference? worth making statements which differentiate the two cases?
Critical Rationalist:
So my original question was what your explanation was for the fact that identical twins raised apart are similar in terms of personality and IQ scores
Critical Rationalist:
and your response is: it is possible that genes cause this difference by making children smile more
Critical Rationalist:
I agree
curi:
ok so have the twin studies differentiated between these two scenarios?
betterbylearning:
If you're trying to enumerate all causes, including indirect ones, then I don't have an objection to including genes. But if you're trying to figure out what you'd have to change to get greater IQ, genes don't make that list, culture does.
Critical Rationalist:
the twin studies do not establish HOW genes cause intelligence
Critical Rationalist:
to be clear, you both have only established that one possible way that genes cause intelligence is through eliciting cultural responses
curi:
if you agree the twin studies might be about infant smiling genes, and that one should be careful not to make statements talking about genetic intelligence when genetic infant smiling is the actual thing, then you should not make statements that studies have shown genetic intelligence. right?
curi:
they'd just be inconclusive
Critical Rationalist:
they are inconclusive about the exact mechanism by which genes have their effects yes
Critical Rationalist:
but
Critical Rationalist:
your page said something to the effect of genes do not influence intelligence
Critical Rationalist:
you claim to know that this is true
Critical Rationalist:
not that "it is possible that genes have their influence indirectly"
curi:
yes, so there are multiple issues involved with that
JustinCEO:
Does a study consistent with very different causal mechanisms tell us anything more than that a correlation exists?
curi:
one is: some people think twin studies refute my position. you brought that up. they do not. they are compatible with it.
curi:
another is my actual reasoning
Critical Rationalist:
you believe (correct me if I'm wrong) "genes do NOT directly influence intelligence"
curi:
my comments re twin studies were just trying to defend my view from refutation, not tell you the positive reasons for it
curi:
do you agree that i've succeeded at this limited goal?
Critical Rationalist:
Yes, actually I would agree that your view is not logically incompatible with the results of twin studies.
curi:
ok great
Critical Rationalist:
The DD example is a possible explanation of the twin studies findings which would be such that the genes have an indirect effect on intelligence
Critical Rationalist:
So, how do you rule out the possibility of direct influence?
Critical Rationalist:
The quote from the website is this: "Genes (or other biology) don’t have any direct influence over our intelligence or personality."
curi:
to understand what ways genes may affect intelligence, one needs a model of how minds work and an epistemology.
Critical Rationalist:
Make your case
curi:
for example, if we model minds as buckets, then we could imagine (without knowing all the details, that's ok) that there is a gene which causes a brain to be a larger sized bucket which lets more knowledge be poured into it total.
curi:
similarly there could be genes that make the entrance to the bucket wider or narrower, allowing knowledge to be poured in at a higher or lower rate.
Critical Rationalist:
Sure, I'm willing to discard the bucket model
Critical Rationalist:
I've read Objective Knowledge (which you seem to be alluding to)
curi:
in this model, it's fairly easy to propose genetic mechanisms. however the model has problems.
Critical Rationalist:
Ok, so we've ruled out the bucket model
Critical Rationalist:
go on
curi:
my model says that brains are universal classical computers. they're Turing-complete. this highly limits the relevance of hardware differences. minds are a type of software. basically we get an operating system pre-loaded which grants us intelligence (the ability to conjecture and refute) and then we develop our own apps/ideas during our life. intelligence differences, in the sense of thinking quality differences, are due to better or worse ideas.
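The universality claim in this model (hardware differences change speed, not what can be computed) can be illustrated with a toy sketch. Everything here, including the mini-interpreter `run_program` and the `steps_per_op` speed parameter, is invented for illustration and is not part of the discussion:

```python
# Toy sketch: two "brains" with different hardware speeds run the same
# software (a program computing a factorial). Universality means the slower
# hardware computes exactly the same answers; only the clock time differs.

def run_program(program, arg, steps_per_op):
    """Interpret `program` (a list of operations on a shared environment),
    counting simulated time. Hardware speed only affects the clock."""
    env = {"n": arg}
    time = 0
    for op in program:
        op(env)
        time += steps_per_op
    return env["result"], time

def factorial_program():
    def init(env):
        env["result"] = 1
    def multiply_down(env):
        for k in range(2, env["n"] + 1):
            env["result"] *= k
    return [init, multiply_down]

fast_answer, fast_time = run_program(factorial_program(), 10, steps_per_op=1)
slow_answer, slow_time = run_program(factorial_program(), 10, steps_per_op=100)

assert fast_answer == slow_answer == 3628800  # same repertoire of outputs
assert slow_time > fast_time                  # only the speed differs
```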
Critical Rationalist:
"brains are universal classical computers. they're Turing-complete."
Critical Rationalist:
And you established this without the smallest amount of neuroscience data, right?
Critical Rationalist:
You're going to have to spell out how you know that brains are universal classical computers
curi:
i wouldn't say zero. but not much.
Critical Rationalist:
And also, I assume you mean that only human brains are like this. Chimpanzee brains are not classical computers, right?
curi:
do you know what a universal classical computer is? They are covered in FoR. not sure if you've read that.
curi:
no, chimpanzee brains are also universal classical computers.
Critical Rationalist:
I've read maybe half of it
Critical Rationalist:
A Turing machine? Capable of computing anything that can be computed
curi:
yes
Critical Rationalist:
but classical as in non-quantum (only 0s and 1s)
Critical Rationalist:
Interesting, how do you know that human brains are classical computers
curi:
do you mean classical as opposed to quantum?
Critical Rationalist:
no, classical as opposed to whatever chimpanzee brains are doing
curi:
i said chimp brains are also classical
Critical Rationalist:
oh sorry I misread that
curi:
so are PCs and iphones
Critical Rationalist:
yes yes those definitely are
Critical Rationalist:
now... you also think chimpanzees are less intelligent than humans...
curi:
i don't think chimps are intelligent at all
Critical Rationalist:
so it is possible for brains (which are classical computers) to differ in their intellectual capacity, yes?
curi:
it's important to differentiate differences due to software from differences due to hardware
Critical Rationalist:
well, I think there are several more steps you must go through before you can rule out that genes directly influence intelligence
curi:
sure, i gave an outline
Critical Rationalist:
where?
curi:
my model says that brains are universal classical computers. they're Turing-complete. this highly limits the relevance of hardware differences. minds are a type of software. basically we get an operating system pre-loaded which grants us intelligence (the ability to conjecture and refute) and then we develop our own apps/ideas during our life. intelligence differences, in the sense of thinking quality differences, are due to better or worse ideas.
Critical Rationalist:
"are universal classical computers. they're Turing-complete. this highly limits the relevance of hardware differences."
Critical Rationalist:
but wait... chimpanzees also have universal classical computers which are turing-complete
Critical Rationalist:
is the number of hardware differences (that are relevant to intelligence) between humans and chimpanzees "highly limited"?
curi:
yes
Critical Rationalist:
so if a chimpanzee was raised with the same software as a human, it could be as intelligent?
curi:
not all software comes from parenting
Critical Rationalist:
by the way
Critical Rationalist:
this isn't limited to chimpanzees I assume
curi:
right
Critical Rationalist:
but I won't even go there
Critical Rationalist:
you think if a chimpanzee was raised in the same parenting (and wider social) context, it would be as intelligent as a human?
curi:
no
Critical Rationalist:
what other sources of software are there?
Critical Rationalist:
in your view
curi:
genes do something roughly like an operating system install disk does
Critical Rationalist:
ok so genes can influence software?
curi:
initially
Critical Rationalist:
and install software that makes an organism more intelligent, initially?
curi:
if you drop the "more" then yes
Critical Rationalist:
so you know that human genes produce the exact same intelligence software in each human
Critical Rationalist:
how do you know that?
curi:
no
Critical Rationalist:
so... do you think that human genes produce different intelligence software in different humans?
curi:
so, genes do not produce the exact same hardware brains in each person, but small variations in hardware, such as having 1% more neurons, have only limited importance. they don't change certain key issues like being a universal computer or not. (setting aside cases of major brain damage and people who can't hold conversations, learn math, etc.)
variations in intelligence software don't matter much either, for the same basic reason: the important issue is whether universality is present or not. for the software, either it is or isn't a universal knowledge creator.
Critical Rationalist:
does the software in chimpanzee classical computer brains have universality? I'm inferring "no"
curi:
it doesn't have universal knowledge creation. (there are different types of universality)
curi:
in my view, the term intelligence has two separate meanings. one is binary: intelligent or not. this refers to universal knowledge creation or not. the second is a matter of degree, and relates to thinking quality. this is the kind of difference we see between healthy people, and is due to different knowledge, especially methodology stuff.
Critical Rationalist:
I'm skeptical of your account of the human mind, but I'll grant it and see if what you're saying follows or not
curi:
ok
Critical Rationalist:
Here is an empirical possibility that seems compatible with your account
curi:
btw i may afk soon but will continue later
Critical Rationalist:
Well, actually, multiple possibilities
Critical Rationalist:
The software could come prepackaged with ideas already in place, and some of those ideas could be encoded unconsciously (and thus inaccessible to deliberative reflection and change). If so, the ideas IN PRINCIPLE could be changed (with technology) but not with pure thought. Absent dramatic changes in technology, some people would then be more limited, if bad ideas were encoded into the unconscious by our genes
Critical Rationalist:
Let's start with that possibility
curi:
what sort of limit? would this limit limit the repertoire of knowledge they could create, or not?
Critical Rationalist:
suppose empathy turns out to be harmful
Critical Rationalist:
but suppose its effect on conscious thinking is unidirectional
Critical Rationalist:
empathy affects our conscious thinking, but not the other way around
Critical Rationalist:
but the underpinnings of empathy are unconscious, and determined by our genes
Critical Rationalist:
suppose it prevents certain people from becoming objectivists
curi:
objectivism is a type of knowledge. so you're talking about a person who is not a universal knowledge creator, right?
Critical Rationalist:
their unconsciously caused empathy overrides their conscious thinking or at least strongly influenced it
Critical Rationalist:
they in principle could be
Critical Rationalist:
their linguistic capacities are capable of conjecturing objectivism and criticizing it
Critical Rationalist:
but they refuse to accept it, because their empathy overrides it
Critical Rationalist:
(empathy being, ex hypothesi, something unconsciously caused and built by genes)
Critical Rationalist:
This is obviously very hypothetical, but this is the kind of thing you need to rule out
curi:
this empathy is an extra, unnecessary complication tacked onto a simpler model, and without clear details about where it fits into the conjecture and refutation model.
Critical Rationalist:
but it is a possibility
Critical Rationalist:
we could have been selected to have this empathy
curi:
i don't think one can see whether it's a possibility without clarifying the thing being claimed.
Critical Rationalist:
whenever we think of people who are suffering
curi:
but in any case it's a possibility that we're all puppets of advanced aliens, living in a simulation, etc., etc.
curi:
that sort of possibility is the wrong way to make judgments about what to tentatively, fallibly believe
Critical Rationalist:
we have a software program that says "be concerned about this for its own sake"
Critical Rationalist:
and it overrides the outputs of conscious deliberative thinking
Critical Rationalist:
but it itself is outside the reach of deliberative thinking
Critical Rationalist:
there is nothing contradictory about this hypothesis
Critical Rationalist:
but your theory (seems to) require that it is false
curi:
people aren't born knowing what suffering is conceptually and how to recognize it in other people, so how could preloaded software deal with it? that's similar to proposing preloaded software for doing calculus even though we aren't born knowing arithmetic or algebra.
Critical Rationalist:
"people aren't born knowing what suffering is conceptually and how to recognize it in other people"
Critical Rationalist:
how do you know that?
curi:
do you think they are?
curi:
i conjectured they aren't and considered the matter, and alternatives, critically.
curi:
i didn't seek an airtight proof, i used CR methods.
Critical Rationalist:
the preloaded empathy software program could be one that is ready to develop as soon as the organism develops the concept of suffering
Critical Rationalist:
you said earlier that the preloaded software admits of individual differences
Critical Rationalist:
as long as it is possible that some of those individual differences are realized as unconscious programs (which are not amenable to being changed with reflection), then it is possible that those individual differences are consequential
Critical Rationalist:
(consequential by your standards)
curi:
busy
curi:
what does software being ready to develop mean? develop in what ways by what means?
curi:
and what, if anything, prevents a person from simply not running this software?
Critical Rationalist:
However you think the universal knowledge creation software develops in brains, this software develops the same way
Critical Rationalist:
What prevents the person from not running the software is that it is inaccessible to conscious reflection
curi:
but i don't think that develops. more like it's there, fully formed, when the computer is first turned on.
JustinCEO:
kinda like a BIOS?
Critical Rationalist:
Does a zygote have the universal knowledge creation software?
Critical Rationalist:
Obviously not
curi:
your conception of conscious reflection is not specified in terms of the things in this model. i think it's a higher level issue.
Critical Rationalist:
Do adults have it? Yes
Critical Rationalist:
Somewhere in the middle it develops
curi:
if you're talking about development in terms of e.g. creating and attaching proteins that form the brain, then do you think people's brains grow at age 10, or whatever, re empathy?
Critical Rationalist:
Yes that’s an empirical possibility that you haven’t ruled out
curi:
do you believe that?
Critical Rationalist:
But no, I was just responding to your assertion that the universal knowledge creation software doesn’t develop
Critical Rationalist:
Which... of course it has to develop
Critical Rationalist:
Somewhere between zygotehood and adulthood
curi:
do you think macos develops at some point in the imac factory?
Critical Rationalist:
Yes they are built
curi:
what is "they"?
Critical Rationalist:
You mean macs right?
curi:
no i said macos
JustinCEO:
macOS, mac Operating System
Critical Rationalist:
Oh sorry
Critical Rationalist:
I think I have a way to make this more concrete (in terms of your system)
Critical Rationalist:
You think the universal knowledge creation software is innate
Critical Rationalist:
How do you know that there are not other softwares that a) sometimes override the universal knowledge creation software, and b) cannot be overridden by the universal knowledge creation software because they are unconscious
Critical Rationalist:
*Unconscious and insulated from inputs from the universal knowledge creation software. This is just (on my hypothesis) how the brain is designed
curi:
is there a proposal of that nature which you find convincing?
curi:
i think the key issue here is that i'm judging by critical thinking, not by airtight proof that logically covers every possibility
Critical Rationalist:
Good.
Critical Rationalist:
I would say that it is perfectly possible that evolution could have produced such softwares, and I wouldn’t put confidence in any theories that hadn’t been subjected to experimental tests
Critical Rationalist:
The analogy I used yesterday was this: imagine that there was a theory that people had conjectured about the sun
Critical Rationalist:
In the absence of any data at all
curi:
is there a specific proposal which you find plausible, which explains the nature of the software, the selection pressure to create it, gives details about what it does, etc., which you think stands up to criticism?
Critical Rationalist:
I could tell a just so story
curi:
but i'm not asking for just so stories, i'm asking for ideas which you think survive criticism. a just so story is a story you have a criticism of.
Critical Rationalist:
That’s not the definition of a just so story
JustinCEO:
"is there a specific proposal which you find plausible"
If you're calling something a just so story, that's a pretty good indicator you don't find it plausible, so bringing up just so stories is non-responsive
Critical Rationalist:
I don’t have a view about what is plausible in cases like this. My view is that we should not settle on a perspective with much confidence in the absence of data
curi:
do you see some major flaw in my model?
curi:
g2g
Critical Rationalist:
It is logically possible and internally coherent
Critical Rationalist:
Whether or not it is true ought to be settled with empirical tests
curi:
is there a specific alternative model which you think can stand up to criticism? that we need a test to differentiate btwn it and my model?
Critical Rationalist:
Sure. I’ll put forward this as an alternative model
Critical Rationalist:
I don’t believe it, but I think it is also internally coherent and logically possible
Critical Rationalist:
There is other software that a) sometimes overrides the universal knowledge creation software, and b) cannot be overridden by the universal knowledge creation software because the (occasionally) overriding software is unconscious
Critical Rationalist:
If you want it to be more specific
Critical Rationalist:
I’ll say that the software is “empathy for kin”
Critical Rationalist:
There are plausible reasons why there would be selective pressures that favour it
Critical Rationalist:
And we’ll suppose that the empathy for kin overrides the universal knowledge software, but the reverse cannot happen (because of how the brain is built)
curi:
When I asked about a flaw in my model, I meant any type of flaw. Anything bad about it. But with emphasis on a problem with the model itself and its application to the world, not an issue in its ability to exclude alternatives, which is a somewhat separate matter. Just lacking logical errors isn't the whole question.
For the alternative empathy model, I think it's too vague to begin serious critical analysis. For example, you've introduced unconsciousness as a concept which is connected to the ability of software subroutines to write to certain locations in memory. Something like that? A lot more details would be needed to know what's going on there. Similarly, empathy for kin is underspecified. And simple examples of what you have in mind are underspecified. Like does this empathy for kin software take over my muscles and control my arm motions in some situations, and i'm like a puppet who watches helplessly as I can't control my limbs? If not that, what is it like? it somehow (how?) controls my conscious opinions, like mind control rather than puppetry? is the empathy for kin software able to create knowledge?
Critical Rationalist:
I’m going to bed now, but I’ll just say this. You are asking for a level of detail in my theory that you have not provided for your own. I can make similar requests for specificity. It will be easy to make my account as detailed as yours. So tell me, how do our classical computational capacities give rise to the creative ability to create new explanations? What selection pressures gave rise to that ability?
curi:
moving from #fi @Critical Rationalist https://discordapp.com/channels/304082867384745994/304082867384745994/663953331714261002
i'm open to more questions. i don't know what areas you find problematic or want to know more about. i think if you provide details for your ad hoc theory, you will run into problems just like how fleshing out the theory that DD will float when jumping off a building, in FoR ch 7, led to difficulties.
the selection pressure for intelligence may have been the value of better tool use, for example. we don't know the exact mechanism but there are several stories that work ok and, afaik, no criticism for why this wouldn't work. DD presents one in BoI re meme replication.
curi:
another possibility is it helped with communication and language, which enabled more effective group hunting
curi:
"Yes, but those logical models depend on assumptions about the world that are 2. The claim that humans are best approximated as rational self-interested utility maximizers is a claim economists could be wrong about."
That isn't one of my claims. Think of a claim more like "everything else being equal, when demand for a product increases and supply stays the same, then the price must be raised to avoid shortages". there are premises here like that each buyer will pay up to a certain price for the product, rather than e.g. be willing to pay any even number of dollars but not an odd number of dollars. i'm aware that has non-zero connection to the empirical world. it is nevertheless different than doing a bunch of studies and science experiments to try to figure things out, which is my point. the empirical aspects of this claim are more limited than the empirical aspects of the claim that force equals mass times acceleration. the actual debates that take place re economics claims like my example are primarily non-empirical. do you agree there's a notable difference there? if so, what terminology would you like to use to keep this distinction clear? just calling my idea re demand and shortages "empirical" doesn't differentiate it from an issue like whether a particular vaccine works for humans to prevent a particular parochial disease from earth.
you are welcome to try to point out empirical problems with economic models when you have them, but i don't think you'll have many empirical complaints about my core economic claims. i don't expect you to say "maybe Joe likes buying things with prime numbered prices. we better do a big study to see how many people buy in that way".
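curi's example claim (demand up, supply fixed, price held constant, therefore a shortage) can be sketched as a toy calculation. The willingness-to-pay numbers and quantities below are made up purely for illustration:

```python
# Toy sketch of the demand/shortage claim: with fixed supply and a price
# held constant, an increase in demand (more buyers willing to pay that
# price) produces a shortage. All numbers are invented for illustration.

def quantity_demanded(willingness_to_pay, price):
    """Each buyer purchases one unit iff they'd pay at least `price`."""
    return sum(1 for w in willingness_to_pay if w >= price)

SUPPLY = 5
PRICE = 10

before = [12, 11, 10, 9, 8, 7]           # 3 buyers clear the price
after = before + [15, 14, 13, 12, 11]    # demand increases

shortage_before = quantity_demanded(before, PRICE) - SUPPLY
shortage_after = quantity_demanded(after, PRICE) - SUPPLY

assert shortage_before <= 0   # supply covers demand at the old price
assert shortage_after > 0     # same price + higher demand -> shortage
```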
curi:
"Actual economic interactions are affected by human nature"
i think a claim like my example above is approximately (but not literally 100%) independent of controversial conceptions of human nature like how empathetic or rational people are.
"Your claims about which ethical systems will produce more wealth or welfare depend on assumptions about human nature."
What sort of human nature do you think would make not having division of labor be more productive than having it? Got anything plausible enough to merit a study to try to test what people are like?
curi:
[re human nature] "I mean things like how we respond to incentive structures, under what circumstances we will cooperate or not cooperate, what makes people respond tribalistically or not, whether people develop better under strict parenting or permissive parenting"
"It is an open empirical question to what extent humans develop better under strict parenting, for example"
"I stand by the idea that economic models can only be true to the extent that their assumptions about human nature are true (e.g. that humans or aliens are self-interested rational utility-maximizers). Whether or not those assumptions are true is an accidental fact of evolution. There is no law of nature that says humans or aliens must be a certain way. It depends what selection pressures we happened to face."
You have a different model of how minds and personalities work than I do. Deciding which model is correct will initially involve specifying the models more, specifying our epistemologies more, and doing philosophical debate about those sorts of issues. Depending how those discussions went, it's possible an issue would come up where doing an empirical test made sense, but I doubt it. I wouldn't expect our discussion to get stuck over disagreeing about an empirical fact. (This does not mean we'd never mention anything empirical. I would expect some simple, uncontroversial empirical facts to be mentioned.)
(I'm now caught up. If i didn't respond to a specific thing you want a reply to, feel free to quote it and ask for a reply.)
Critical Rationalist:
I’ll stick with the issue of the empathy software for now. I’ve read chapter 7 of FoR several times, and I do not think my model suffers from the same problems. Very powerful kin empathy software could arise from selection pressures. Genes that favour altruistic behavior towards kin at (almost) any cost actually make good evolutionary sense.
Critical Rationalist:
The reason for an overriding kin empathy software is clear: it gets more genes into the next generation. By contrast, all you have said is that “maybe it helped with tool use”. But why not just have tool creation software? A universal knowledge creation software seems wasteful.
Critical Rationalist:
I think this whole approach is backwards. In evolutionary biology (which Deutsch is not an expert in), what you are supposed to do is empirically discover what traits organisms (in this case, humans) have, and then reverse engineer those traits.
Critical Rationalist:
Crucially, I did not see in your response an explanation of how a classical computer could instantiate creativity. I asked “how do our classical computational capacities give rise to the creative ability to create new explanations?” You do not have a detailed account of how this happens. Do you see now that it is unfair to ask for a similar level of detail in my account? I will provide details for mine when you provide details for yours.
Critical Rationalist:
As it stands, I can tell an evolutionary story that is at least as plausible as yours. Neither of us has spelled out the details about how such software would be instantiated.
Critical Rationalist:
I guess I might as well give my two cents about your response to my economics arguments. The one example of an assumption you gave is instructive. It says “all else being equal, this will tend to happen”. There is an implicit claim in there about human nature; it is just one that is so uncontroversial that it is rational to accept it without doing an empirical study. But crucially, its connection to the real world is mediated by the “all things being equal” clause. Widespread errors in thinking or other elements of human nature could systematically prevent such a claim from mapping onto the real world. Don’t get me wrong, the kind of economics you’re describing has its virtues. I just think it’s possible that human nature is such that our behavior is systematically different from the predictions of economic models. @curi
GISTE:
@Critical Rationalist Selection pressures are not responsible for creating new genes. They are instead responsible for selecting the (already existing) genes that cause their hosts to have more grandchildren than compared to rival genes. (Disclaimer: I don't claim to be an expert on this.)
Critical Rationalist:
Yes that’s true. Random mutations create the genes, and then selection pressures eliminate the harmful ones and keep the beneficial ones.
Critical Rationalist:
But natural selection is also a cumulative process. So you can get new traits over time with repeated instances of variation and selection.
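That cumulative process can be sketched as a minimal variation-and-selection loop. The bitstring "trait", the target, and all the numbers here are invented for illustration:

```python
import random

# Toy sketch of cumulative variation and selection: random mutation creates
# variants, selection keeps improvements, and repeating this accumulates a
# trait (matching a target bitstring) that no single mutation produces.

random.seed(0)

TARGET = [1] * 20

def fitness(genome):
    """Number of positions matching the target trait."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Variation: copy the genome and flip one random bit."""
    copy = genome[:]
    i = random.randrange(len(copy))
    copy[i] ^= 1
    return copy

genome = [0] * 20
for generation in range(2000):
    variant = mutate(genome)                  # variation
    if fitness(variant) >= fitness(genome):
        genome = variant                      # selection keeps improvements

assert fitness(genome) == len(TARGET)  # trait built up over many rounds
```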
Critical Rationalist:
@curi after this weekend I’ll probably have to stop commenting for the sake of school. There’s one topic I really wanted to ask about: All Women Are Like That. How can you hold to this in light of your belief that people have free will, are not determined by genes, and have universal knowledge creation software? Are women not people? Or is it just a coincidence that all women have used their unbounded free will incorrectly?
Critical Rationalist:
I’d also like you to share what specific traits you think all women share.
Alisa:
The AWALT phenomenon is due to things like culture and the prevalence of certain static memes, not genes
curi:
I was planning to make a discussion tree to organize our discussion but I'll drop that and try to do some quicker replies today. AWALT vs NAWALT is a specific debate about redpill/PUA ideas that you can google. the shared traits of women in question are related to romance and relationship behavior. the overall issue is what alisa says: culture, including static memes, are major forces in life.
Critical Rationalist:
And not a single woman has escaped the grasp of these static memes? Despite the fact that they have free will and universal knowledge creation software?
curi:
the all means something more like "i'm not convinced that a single NAWALT sighting posted to a redpill forum is actually true"
jordancurve:
Critical Rationalist: You seem to be unaware of what the word "all" means when used outside of a formal logical context
Critical Rationalist:
I’m confused. You’ll have to be more precise. Does “all” mean most?
Critical Rationalist:
Precision of hypotheses is a Popperian virtue. It makes them more amenable to rational and empirical refutation
curi:
there is an ongoing problem where people fool themselves into thinking their gf is different. AWALT is pushback against that. and i don't think any documented exceptions exist.
Critical Rationalist:
Loose and vague hypotheses are impossible to criticize
jordancurve:
You can criticize them for being vague.
curi:
you're wrong to call something loose and vague when, as i said, there are ongoing discussions about it. you can read tons more info about what it means if you want to.
curi:
the proper noun does not precisely summarize all the meaning.
curi:
this is typical of proper nouns
curi:
such as Critical Rationalism
Critical Rationalist:
Also, speaking of precision, give me precisely what traits all (whatever that means) women share in common
JustinCEO:
That just means being rationalistic critically rite
curi:
you can read about the traits if you want to learn. if you are expecting to learn this topic by being told a list of 10 traits each given 3 words of explanation, you're dramatically underestimating the complexity of the issue
Critical Rationalist:
Why don’t you give me the most well-evidenced example, and as thorough an explanation as you want
curi:
because you need the redpill/PUA intellectual framework first before interpreting an example
Critical Rationalist:
Just as a basis for discussion
Critical Rationalist:
I have some passing familiarity with it. Try me. See how far you can get
curi:
were you already familiar with AWALT?
Critical Rationalist:
No that particular term was new to me
Critical Rationalist:
Totally serious
curi:
that sounds like near-zero familiarity
curi:
do you know what AFC is?
Critical Rationalist:
Hence “passing”
curi:
shit test? neg? hoops? two-set? DHV?
Critical Rationalist:
Haha wow I’m definitely less familiar than I thought
curi:
mystery method?
Critical Rationalist:
Ok, do you at least think that this is the kind of theory that should be put to empirical tests?
curi:
yes it's extensively field-tested.
Critical Rationalist:
Awalt is extensively field tested?
curi:
yes
Critical Rationalist:
Interesting
Critical Rationalist:
I’m genuinely curious, name me just one trait that “all” women share in common
curi:
all this stuff was developed with a heavy empirical testing emphasis. lots of the theory was created to explain observed patterns.
Critical Rationalist:
Ie not one documented exception
curi:
valuing social status as she perceives it (not everyone is into actors as high status).
curi:
if i said all parents were coercive, it wouldn't mean that there was any single thing (e.g. playing with matches) for which all parents coerce.
Critical Rationalist:
Yes, but in this case you said “all women are like that”. “Like that” has to mean something.
Critical Rationalist:
As far as your example, sure. I would wager that’s true of all humans (not just women). Completely innocuous
Critical Rationalist:
Sure, all women value status.
Critical Rationalist:
Completely banal
curi:
the issue ppl are debating is roughly: is there a woman who is immune to PUA?
Critical Rationalist:
Ok that’s more interesting
Critical Rationalist:
Since you’ve agreed that this is an issue that should be subject to empirical tests
Critical Rationalist:
This is what Popper said we must do before an empirical test: specify in advance what observations would falsify the theory (in this case “no women are immune to PUA”).
Critical Rationalist:
So, what empirical observation would falsify the claim that “no women are immune to PUA”? If you’re going to do an empirical test Popper-style, you have to answer that question.
Critical Rationalist:
If you systematically reinterpret the results to make them consistent with your theory, you’re doing what Popper (rightly) accused Freud and Marx of doing.
curi:
you seem to want a single decisive test to settle this conclusively. no one has done one or knows how to do one.
curi:
hence the ongoing debates
Critical Rationalist:
You said you believe this issue should be subject to empirical tests.
curi:
PUA approaches have been broadly tested on many women to help refine them, they aren't ivory tower speculation
Critical Rationalist:
So you believe the theory has been subject to tests, but can you explain to me what an empirical test is, in Popper’s theory?
Critical Rationalist:
To be clear, I’m not asking about the relative advantage of PUA. It might be on average better than other methods
Critical Rationalist:
Im talking about testing this theory: no women are immune to PUA
Critical Rationalist:
You admit that this is the sort of claim that should be tested empirically
curi:
people have said over and over "my gf is different" and they seem to be wrong every time. and ppl keep saying it. that's the issue AWALT is about.
Critical Rationalist:
So, explain to me how, according to Popper, we empirically test theories
Critical Rationalist:
you also said the issue is “are any women immune to PUA”
Critical Rationalist:
Implying that this was part of the meaning of awalt
curi:
right: different than the other girls who PUA works on.
Critical Rationalist:
Good
Critical Rationalist:
You believe that issue should be empirically tested
curi:
no one on either side has any idea for how to test it in the way you want. some things are hard to test.
Critical Rationalist:
How does Popper believe we should perform empirical tests?
curi:
nevertheless, there is nothing even resembling a documented counter example AFAIK
curi:
and there are many, many documented examples where AWALT turned out correct
curi:
and ppl don't respect this situation and are super biased
Critical Rationalist:
I would like an answer to my question
curi:
a test is an observation aimed to potentially refute an idea. the best tests address a clash between 2+ ideas, such that at least one has to be refuted by any outcome of the test.
Critical Rationalist:
Good, exactly. For Popper, an empirical test only counts as a test if it is a genuine attempt at refutation
Critical Rationalist:
So... if you have not specified in advance the conditions for falsification, then for Popper, you have not actually empirically tested a theory
curi:
no
Critical Rationalist:
So, given that you and PUAs have not specified the conditions for falsification in advance, you have not actually performed empirical tests
Critical Rationalist:
No? Are you alleging that I’ve misunderstood Popper? I’m happy to provide quotes
curi:
you said "So" like you're following on what I said, but then you introduced a new thing: specifying conditions in advance.
Critical Rationalist:
Do you think Popper thought you could specify the conditions for falsification after the experiment?
curi:
we never fully specify anything, as Popper explained
curi:
if you mean that the conditions for falsification have to be partially specified in advance, i'll agree, but that's a different claim.
Critical Rationalist:
I’ll brb with quotes.
Critical Rationalist:
Also, it goes without saying you can disagree with Popper on this issue
curi:
do you agree that "we never fully specify anything"?
Critical Rationalist:
In a certain sense. But I’ll get the quotes
Critical Rationalist:
Yes, there is a certain sense in which we cannot fully specify anything (I'm interested for you to spell out why that's relevant).
Critical Rationalist:
But here's the quote. "Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory."
Critical Rationalist:
So, have you (or the PUAs) made "serious...attempt(s) to falsify the theory" that no women are immune to PUA?
curi:
i don't understand why you dug up a quote that doesn't mention specifying falsification conditions in advance. also please only post sourced quotes at my forums.
curi:
and yes PUAs have searched widely for NAWALTs
Critical Rationalist:
It is from Conjectures and Refutations. Page 36 http://www.rosenfels.org/Popper.pdf
Critical Rationalist:
So they have made genuine attempts to falsify theory and have failed to do so?
Critical Rationalist:
So... what kind of observation would count as falsification?
curi:
a NAWALT
Critical Rationalist:
What observations would count as observation of a NAWALT
curi:
that's complicated and involves understanding a bunch of theory with which to interpret data
Critical Rationalist:
As far as specifying in advance, this quote comes from the next page.
Critical Rationalist:
"Some genuinely testable theories, when found to be false, are still upheld by their admirers--for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status."
curi:
if you can point to that ever being done with AWALT, i'd be interested
JustinCEO:
Right ad hoc stuff bad
JustinCEO:
Ppl want to find a NAWALT tho
Critical Rationalist:
NAWALT is too broad
Critical Rationalist:
I'm talking about an observation that would refute this theory: "no women are immune to PUA"
Critical Rationalist:
You said that^
Critical Rationalist:
as a concrete example of what AWALT means
Critical Rationalist:
Don't give me jargon. Tell me what observation would refute this claim "no women are immune to PUA"
curi:
read the Girls Chase book if you want to begin to understand what we're talking about
Critical Rationalist:
If you have subjected your theory to Popperian tests, then you should be able to answer that question
Critical Rationalist:
Does the Girls Chase book explain what observation would falsify the theory that "no women are immune to PUA"? What chapter explains that?
curi:
i don't think you're trying your best to understand my perspective. you're trying to shoehorn the discussion into your preconceived notions of how to be Popperian.
curi:
while neglecting issues like the use of complex theoretical frameworks to interpret data
Critical Rationalist:
@curi you're doing exactly what GISTE was doing
curi:
and you seem to want to be able to test and debate something without understanding the topic.
Critical Rationalist:
refusing to answer questions when it gets difficult
Critical Rationalist:
you told GISTE that he should answer the question
Critical Rationalist:
you should abide by your own standard
curi:
i've just spent a while answering your question. you don't like the answer.
curi:
the specifications re the testing are complicated and you don't have the background knowledge to discuss them.
curi:
that's your answer.
Critical Rationalist:
Really? I missed it. What observations would count as a falsification of this theory: "no women are immune to PUA"
JustinCEO:
If a complex theoretical framework is required to interpret data then pointing out that fact and a concrete place where you can get info with which to develop such a framework is not a dodge
Critical Rationalist:
you said at one point "a NAWALT". That's not an observation.
Critical Rationalist:
That is too flexible.
curi:
it gets less flexible if you learn the field. you just aren't familiar with the constraints involved and can't be told them in 5min while adversarial.
Critical Rationalist:
Adversarial? I'm asking genuine questions. I am willing to hear you explain it in detail. I place no time limits on your explanation (it doesn't have to be within 5 minutes).
curi:
but if that was true you'd read multiple books as part of the conversation.
Critical Rationalist:
Remember what you said to giste, and remember what you said on your page: picky arguments matter
JustinCEO:
CR u seem unwilling to let curi incorporate a book as part of his explanation so your length claim seems false
Critical Rationalist:
Sometimes recommending a book is a way of avoiding conversation.
Critical Rationalist:
I will read the book if you can tell me which chapter answers my question. Which chapter (or chapters) answer this question: What observations would count as a falsification of this theory: "no women are immune to PUA"
Critical Rationalist:
I doubt the author of the book even considers a question as technical as that
Critical Rationalist:
If I'm wrong, I want page numbers
curi:
there is no chapter with a direct answer to that question; it provides some of the framework with which to discuss that matter, as i told you.
JustinCEO:
CR you seem to be implicitly conceding that your no time limit claim is false by raising arguments against reading books
Critical Rationalist:
@curi if during our debate about the software of the mind, I required you to read all of "How the Mind works" by Steven Pinker (without specifying which parts were relevant), would that have been a fair request?
curi:
i routinely respond to books during discussions
Critical Rationalist:
Do you read the books in their entirety?
Critical Rationalist:
Would you read all of "How the Mind Works" if I asked you to?
curi:
you're welcome to propose a better way to become familiar with the field, or to point out problems with Girls Chase.
curi:
it's up to you whether you're interested in learning about this. idc
Critical Rationalist:
@curi that isn't answering my question
curi:
you seem to want a really short version containing certain specific things, which i don't have to offer you.
Critical Rationalist:
I'm wondering if you think it is legitimate to require a conversation partner to read a whole book
curi:
i didn't require you to
Critical Rationalist:
You can't apply a standard to someone else if you won't apply it to you
Critical Rationalist:
ok
curi:
https://curi.us/2235-discussions-should-use-sources
curi:
and i proposed the book as a potential way to make progress. if you have a better one, feel free to suggest it.
Critical Rationalist:
Well, I have a different rival theory of how women work
Critical Rationalist:
It is explained in How the Mind Works (which does deal extensively with sexuality)
Critical Rationalist:
I propose that you read that book before we continue
curi:
does it cover shit tests?
Critical Rationalist:
No...
Critical Rationalist:
I'm just saying, for you to understand my perspective, you have to understand the details of my theoretical framework
curi:
since shit tests have been observed many times, why aren't they covered and explained?
Critical Rationalist:
And I can't explain my theoretical framework in conversation, so you have to read How the Mind Works
Critical Rationalist:
Unless
curi:
do you mean that or are you just trying to mirror what you think i said?
Critical Rationalist:
you can propose an alternative way
Critical Rationalist:
Evolutionary psychology (my own view of how human sexuality works) is a complicated theory that takes time to understand
Critical Rationalist:
if I'm going to be expected to read a book (or a comparable alternative), I think this would be fair
Critical Rationalist:
we would both have a better understanding of each other's approach
curi:
i'm already familiar with evo psych
Critical Rationalist:
what is the evolutionary psychology explanation for sex differences in human jealousy?
curi:
the evo psych framework is compatible with more than one explanation for that.
Critical Rationalist:
(you asked me questions about the PUA theories to see how familiar I was)
Alisa:
I don't know evo psych, but I would say: the asymmetrical resources each sex invests in child rearing
Critical Rationalist:
Name one that has been offered for jealousy
Critical Rationalist:
Alisa: not quite
Alisa:
Fair. Was just a guess.
Critical Rationalist:
That is an explanation of many sex differences tho
Critical Rationalist:
So it was a good guess
curi:
i don't read much at that level of detail b/c it's irrelevant to my (DD's) criticisms of evo psych
Critical Rationalist:
right, so just as I don't have a detailed understanding of PUA, you don't have a detailed understanding of evo psych
Critical Rationalist:
so... if it is fair for you to propose a book, it is fair for me to propose a book
curi:
if you were familiar with some higher level PUA theory and had a refutation of it, and skipped some details, that would be comparable.
curi:
it would still not put you in a position to debate AWALT vs. NAWALT given PUA/redpill premises though
curi:
i haven't tried to jump into a debate between different applications of evo psych
Critical Rationalist:
right, in order to do that, I need to know details. Well, in order to understand what I deem to be the correct explanation (i.e. the rival theory for why women do particular things), you need to know details about evo psych
Critical Rationalist:
Becoming familiar with higher level PUA theory does not require details.
Critical Rationalist:
by "in order to do that", I mean AWALT and NAWALT
curi:
i don't know what you want to get out of this. you seem to want to call me Wrong about an issue you don't know or care about.
curi:
b/c you didn't like the choice of words that make up a particular jargon
curi:
which were, i will readily grant, not chosen in a way to make friends with the mainstream, and aren't normally used for outreach
JustinCEO:
Perhaps a different topic would be more fruitful to discuss??
Critical Rationalist:
@curi you listed this as a debate topic on your page. I read through your list and this issue jumped out at me. I am deeply interested in human sexuality (I mean, who isn't?). You are trying to read bad motivations into my behavior. And now you are saying "you just want to prove me wrong". You are doing exactly what Giste did when he accused me of being in debate mode
curi:
if you're deeply interested then why don't you begin reading material from this school of thought?
Critical Rationalist:
Also like him, you are refusing to answer my questions. When Giste did this, you (rightly) called him out on it (no hard feelings giste).
curi:
until you find some objection to it
JustinCEO:
Ya read to first objection
curi:
you're trying to jump into the middle of an internal debate you aren't familiar with
Critical Rationalist:
@curi by affirming PUA, you are implicitly rejecting evo psych. You are thus taking sides on an issue when you don't understand the rival theory. You're in the same position as me (but a mirror image)
curi:
what are you talking about? PUAs routinely use evo psych explanations.
Critical Rationalist:
I guess I should say your version of pua, they are compatible
Critical Rationalist:
yes I've actually heard that, that's fair
JustinCEO:
You guys could both read to first objection on a suggested book
Critical Rationalist:
I think this matters though
curi:
my objections to evo psych have nothing to do with PUA
Critical Rationalist:
Let me use an analogy
Critical Rationalist:
Let's think about Einstein's theory
Critical Rationalist:
The paradigm case of a falsifiable theory
curi:
wait slow down
curi:
> by affirming PUA, you are implicitly rejecting evo psych
do you retract this?
Critical Rationalist:
Oh yes 100%
Critical Rationalist:
Anyways like I was saying
Critical Rationalist:
The theoretical details of Einstein's theory are very hard to understand
Critical Rationalist:
much harder to understand than PUA or evo psych
curi:
> You are thus taking sides on an issue when you don't understand the rival theory.
do you mean that i don't understand what NAWALT means?
Critical Rationalist:
No, I meant the rival theory, evo psych. But I retracted the implication that they are rival theories
Critical Rationalist:
Anyways
Critical Rationalist:
Despite the theoretical sophistication, Einstein was still able to say "this is the observation that will refute my theory" in clear terms.
curi:
yes because he was dealing with stuff that's much easier to measure and do math about, etc.
Critical Rationalist:
@curi I won't talk by implication. I do not think you have a clear understanding of what observations will falsify this claim "no women are immune to PUA"
curi:
other fields, like those involving human behavior, have a much harder time measuring things. takes more theory to do that.
Critical Rationalist:
I strongly suspect that you do not have an answer.
Critical Rationalist:
I was texting someone else in the group, and I am not the only one with this suspicion
Critical Rationalist:
When you don't answer a question, it makes you look bad.
curi:
can you quote a question i didn't answer?
Critical Rationalist:
What observations would count as a falsification of this theory: "no women are immune to PUA"
curi:
i did respond to that
Critical Rationalist:
So, tell me what the observations are?
curi:
do you remember me responding?
Critical Rationalist:
well, you did say NAWALT. But that is not a statement about what you would observe. Let me say something about that answer. It is actually just a tautology. A NAWALT is just "a woman who is not like that". In other words, you are just answering by saying the observation that would falsify the theory is the observation that the theory doesn't predict
Critical Rationalist:
That would be like Einstein saying "an observation that is not predicted by general relativity would falsify the theory"
curi:
do you remember me responding?
Critical Rationalist:
But what Einstein actually said was "if you see the points of light here rather than here, that falsifies the theory".
Critical Rationalist:
Yes I do now remember, you said NAWALT
curi:
you didn't remember before?
Critical Rationalist:
But I'm explaining why that is insufficient
Critical Rationalist:
No I forgot about that answer when I was typing. Thank you for helping me remember.
curi:
do you agree that a response you consider insufficient is different than no response?
Critical Rationalist:
yes of course
curi:
do you retract everything you said comparing me to GISTE?
Critical Rationalist:
Well, during the earlier part of the conversation
Critical Rationalist:
I followed up to your NAWALT answer by insisting on something more specific
Critical Rationalist:
that was approximately when you started proposing that I read a book
Critical Rationalist:
(if I remember correctly)
Critical Rationalist:
Which is still not answering the question
curi:
AWALT and NAWALT are jargon terms which refer to many books, articles and discussions. thousands of pages of material. is there a particular part of that literature which you think is inadequately specific?
Critical Rationalist:
But I am asking for specificity in terms of what observation counts as an instance of a NAWALT in a Popperian test. I bet that none of the material you mention gives specificity in that sense
Critical Rationalist:
And if they do, just quote it or point me to page numbers
curi:
you want physics-like specification. the field doesn't have that.
curi:
do you think evo psych has that?
Critical Rationalist:
Not physics level, but evo psych theorists make predictions and test them.
Critical Rationalist:
They do say in advance what would count as falsification of their specific hypotheses
curi:
PUAs have made and tested many predictions.
Critical Rationalist:
I'm more than happy to give examples
Critical Rationalist:
Ok great!
curi:
e.g. "I think X would be a good opener". then try it 20 times.
Critical Rationalist:
Tell me what predictions follow from this theory (the original topic): "no women are immune to PUA"
Critical Rationalist:
Remember, if that theory is empirically testable in a Popperian sense, then when its predictions are not corroborated, the theory should be considered falsified
curi:
it predicts things like e.g. Joe Newbie will never find a NAWALT, and if he claims to have found one he's fooling himself.
Critical Rationalist:
"if he claims to have found one he's fooling himself" this sounds suspiciously like an ad hoc hypothesis designed to save the theory from refutation
Critical Rationalist:
but again
curi:
if you review the literature and find inappropriate use of ad hoc hypotheses, feel free to point them out.
Critical Rationalist:
that is not an observational prediction I can test. I need to know what observations count as an instance of a NAWALT
curi:
you will find in most cases that Joe is fooling himself in highly repetitive ways that were already written about at length.
Critical Rationalist:
in most cases?
curi:
that's the typical discussion
curi:
the concepts AWALT and NAWALT are not specified as exactly as you'd like (like physics). i already told you this but you keep bringing it up. i don't see the point.
Critical Rationalist:
let me give you an example of how evo psych works
Critical Rationalist:
so you can see what I mean
Critical Rationalist:
one evo psych explanation of male homosexuality
Critical Rationalist:
was that genes for being gay also lead to increased giving to kin. This means gay uncles invest more in nieces and nephews than straight uncles.
Critical Rationalist:
Because of kin selection, those genes can be selected for
Critical Rationalist:
This theory lends itself to a prediction: gay uncles should be measurably more generous to kin than straight uncles
Critical Rationalist:
That turns out to not be true
Critical Rationalist:
So the theory is falsified
Critical Rationalist:
Now, let me give you this
Critical Rationalist:
your example of "this pickup line is superior"
Critical Rationalist:
that is DEFINITELY testable
Critical Rationalist:
I would never dispute that
Critical Rationalist:
it is very easy to run natural experiments on that
curi:
PUA is a body of knowledge that has used lots of testing
curi:
that's all i said
Critical Rationalist:
but this claim "no women are immune to PUA"
Critical Rationalist:
I think it should be testable
curi:
i also said there were no known documented counter examples to AWALT
Critical Rationalist:
what would count as a documented counterexample?
Critical Rationalist:
tell me
curi:
if you have one you think qualifies, let me know
Critical Rationalist:
no, you have to explain what observation would count as someone qualifying
Critical Rationalist:
maybe your explanation won't be complete
curi:
it's explained in a very roundabout, complicated way for thousands of pages
Critical Rationalist:
but get me started
curi:
that's all u get, sorry
curi:
that's what exists for that debate
curi:
also i think an evo psych example with a passed test would be more enlightening.
Critical Rationalist:
a different theory of male homosexuality is this
Critical Rationalist:
there is a gene on the x chromosome (males have one, females have two) which causes increased attraction to men. In males this makes them gay, in females it makes them extra fertile. This would allow the gene to continue to exist.
Critical Rationalist:
This theory makes a prediction.
Critical Rationalist:
Female relatives of gay men (who share that gene on the x chromosome) should have more children on average
curi:
that prediction doesn't follow
Critical Rationalist:
For now, this theory has in fact been corroborated
Critical Rationalist:
Why not?
curi:
how do you get from increased attraction to more children? could easily result in fewer children.
Critical Rationalist:
You might have misunderstood
curi:
do you mean the gene does different things for the different genders?
Critical Rationalist:
one way of reading it is that the gene makes the holder want to have sex with men more
curi:
what does that have to do with fertility?
Critical Rationalist:
I mean fertility in the sense of producing more children
Critical Rationalist:
in women, wanting sex with men leads to more children (in our evolutionary past, no condoms)
curi:
that's what i'm saying doesn't follow
curi:
wanting sex and getting sex are different things
Critical Rationalist:
ok good, so a good followup experiment would measure the number of sex partners
Critical Rationalist:
now, as you know
Critical Rationalist:
when observations occur as the theory predicts
Critical Rationalist:
it doesn't prove the theory, it only corroborates it
curi:
are you going to respond to me?
Critical Rationalist:
which is why you try to do as many tests as you can
Critical Rationalist:
what question?
curi:
the non sequitur issue
Critical Rationalist:
well, given evolutionary dynamics, there are always men who want to have sex with women (for reasons having to do with differential parental investment, which @Alisa mentioned)
Critical Rationalist:
so increased desire for sex (in women) would reliably lead to more sex
curi:
do you think it reliably leads to more sex today?
Critical Rationalist:
because they are the gatekeepers (as a PUA I'm sure you believe this)
Critical Rationalist:
yes, if women want more sex, they will usually get it
curi:
can you think of any reasons they wouldn't? any ways this can go wrong?
Critical Rationalist:
of course! hence the need to do followup experiments! corroboration does not equal proof
Critical Rationalist:
just like with Einstein
curi:
hold on
Critical Rationalist:
the fact that the starlight was where it was does not PROVE he was right
curi:
when you have a problem with the logic of your theory, testing it more times doesn't help
Critical Rationalist:
there are other explanations
Critical Rationalist:
Ok, lets compare this with Einstein
curi:
the tests are all premised on that logic
Critical Rationalist:
his theory predicted that starlight would be here rather than here
Critical Rationalist:
but there are other possible reasons for the light to be in that location
curi:
you're saying something like "X will cause Y which will cause Z so we'll measure Z to learn about X", right?
Critical Rationalist:
no
Critical Rationalist:
we say "x will cause y which will cause z"
Critical Rationalist:
we look to see if there is z
Critical Rationalist:
if there is no z, theory is falsified
Critical Rationalist:
if there is a z, the theory is not proven right
Critical Rationalist:
same with Einstein
curi:
so if Y would cause Z or not-Z, then the test doesn't work right due to the theory being logically confused?
Critical Rationalist:
"x (Einsteinian gravity) will cause y (curved spacetime) will cause z (star light here rather than here)"
Critical Rationalist:
if by y you mean "increased sexual desire", then we have other theoretical reasons for believing that (in women) increased sex drive will cause more sex partners (z)
Critical Rationalist:
parental investment theory
Critical Rationalist:
As I've said, I'm sure you already agree with that anyways
curi:
i asked if there were reasons it could lead to less sex. you said yes. but then instead of investigating this problem you suggested running extra tests which are premised on the idea that more attraction would lead to more sex.
Critical Rationalist:
are there possible reasons that spacetime could lead to the light NOT being where Einstein predicted?
Critical Rationalist:
yes, there could be other forces acting on the light that we don't know about
Critical Rationalist:
there are always possibilities like that
Critical Rationalist:
(which you can test on their own)
curi:
suppose, hypothetically, that increased attraction reduces the amount of sex a woman has by 50%. then would the results of your proposed tests be misleading?
Critical Rationalist:
you mean if women who wanted sex more had 50% less sex?
curi:
yes
Critical Rationalist:
yes, then the prediction would not follow
curi:
ok and could you solve this problem by doing more tests?
curi:
test it 100 times instead of 10
Critical Rationalist:
no
Critical Rationalist:
you would test that claim
curi:
[2:36 PM] curi: can you think of any reasons they wouldn't? any ways this can go wrong?
[2:36 PM] Critical Rationalist: of course! hence the need to do followup experiments! corroboration does not equal proof
Critical Rationalist:
I mean followup experiments with different methodologies
Critical Rationalist:
i.e. test for a relationship between female sex drive and number of sex partners
Critical Rationalist:
Ok
Critical Rationalist:
Everyone who is watching
curi:
ok do you think that testing has been done?
Critical Rationalist:
I want you all to take note of something
Critical Rationalist:
(before I answer @curi's next volley of questions)
Critical Rationalist:
I do not know if that testing has been done or not
Critical Rationalist:
I asked @curi for specific observational predictions based on his theory. He said "NAWALT". When I asked him to explain what observations would count as an instance of NAWALT, he said "it's explained in a very roundabout, complicated way for thousands of pages. that's all u get, sorry". When he asked me for specific observational predictions based on evo psych, I answered. I gave real world examples from real experiments. I gave one example of an experiment that FALSIFIED an evo psych hypothesis, and I gave one example of an experiment that CORROBORATED an evo psych hypothesis. He asked a followup question about whether the corroborating experiment actually counted as corroboration, and I explained why it does by comparing it to the case of Einstein. I tried to use as little jargon as possible. If @curi asks me to explain any jargon I left unexplained, I will be happy to do so. There is a clear asymmetry here.
Critical Rationalist:
If anyone thinks my account of this conversation is inaccurate, I encourage you to read it for yourself.
curi:
do you think there exist examples of PUA openers or concepts which were falsified?
Critical Rationalist:
I stated (and never disputed) that the relative efficacy of openers is falsifiable.
curi:
ok so some evo psych ideas and some PUA ideas are relatively easy to test. so what?
Critical Rationalist:
You have not explained how "no women are immune to PUA" is falsifiable.
Critical Rationalist:
If you think there are some evo psych ideas that are not falsifiable, please tell me what you think they are.
Critical Rationalist:
I don't think there is an analogous unfalsifiable claim.
curi:
i asked for an example of an evo psych idea that passed some testing. the example you gave depends on an untested (as far as you know) premise which one can immediately think of major flaws with. why do you think that constitutes meaningful corroboration?
Critical Rationalist:
What did I say in response?
Critical Rationalist:
Did you read my Einstein analogy?
Critical Rationalist:
Einstein's prediction that starlight would be "here rather than here" requires untested assumptions
Critical Rationalist:
You always need auxiliary assumptions to get from a theory to a prediction (this is well understood in philosophy of science). You then can test those assumptions after
Critical Rationalist:
Do you disagree with Popper? Do you not think that Einstein's theory was meaningfully corroborated by the 1919 test?
curi:
how do you differentiate your method from the following: i think there is a gene which makes people like to eat fish. i assume, without testing, that liking to eat fish gives people better skin quality which leads to being more attractive which leads to more sex. i measure babies and correlate it to that gene. i say my whole theory is corroborated.
Critical Rationalist:
how would that explain male homosexuality?
curi:
it doesn't. it's a different theory.
Critical Rationalist:
... that is the reason the explanation was conjectured
Critical Rationalist:
so I would criticize your theory because it doesn't explain what it is supposed to explain
curi:
i'm giving a toy example to discuss a concept. does that make sense to you?
Critical Rationalist:
No. The claim that there is a gene on the x chromosome that leads to increased attraction to males was postulated to explain male homosexuality
Critical Rationalist:
that is why it was postulated
curi:
do you know what a toy example is?
Critical Rationalist:
your theory does not explain that datum at all
Critical Rationalist:
so it would be criticized on that basis
curi:
busy?
curi:
do you think any untested assumptions are allowable and it's still corroboration, or only certain categories?
Critical Rationalist:
Untested assumptions are allowable so long as they can be tested later
Critical Rationalist:
And as long as they’re consistent with other theories etc
curi:
anything which can be tested later is allowable?
curi:
oh, consistent with which other theories?
Critical Rationalist:
Well, yeah you could put additional constraints. Consistent with other corroborated theories etc
Critical Rationalist:
You still haven’t engaged with my Einstein analogy
curi:
your premise (female more attracted to men -> more sex) is inconsistent with many theories.
curi:
that's why i objected to it
Critical Rationalist:
Oh yeah, no, I meant theories that are well corroborated. Thank you for the objection
Critical Rationalist:
Allows me to clarify
curi:
it's inconsistent with many high quality theories, not just arbitrary junk
curi:
i'm not talking about the space of logically possible theories
Critical Rationalist:
@curi this line of questioning is important and interesting
curi:
it's inconsistent with a variety of things that i and many other people believe and have extensive reasons for
curi:
there are many books about such things
Critical Rationalist:
But I’m going to have to remind you of the asymmetry
Critical Rationalist:
When you asked for a specific experimental test of an evo psych theory
Critical Rationalist:
I gave you an example
Critical Rationalist:
A concrete example of how an observation can rule out an evo psych theory
Critical Rationalist:
For any evo psych theory
curi:
i think it's a good example of the quality of the work in the field b/c it assumed a very questionable premise.
Critical Rationalist:
I’d be happy to do this for you
Critical Rationalist:
But when I challenged a specific PUA theory
curi:
AWALT is like a meta study
Critical Rationalist:
you only said “NAWALT”, and couldn’t tie it to a concrete observation
curi:
it's a belief about the overall state of many other tests, ideas, debates, etc.
Critical Rationalist:
You could not specify what observations would falsify the theory
Critical Rationalist:
Even though you think the theory is testable (in Popper’s sense)
Critical Rationalist:
When I explain how my theories are testable
Critical Rationalist:
I give details
Critical Rationalist:
I answer followup questions
curi:
but your details are problematic
Critical Rationalist:
You think so
Critical Rationalist:
I explained why they aren’t with the Einstein analogy
Critical Rationalist:
Which you haven’t responded to
curi:
want me to give details that you consider problematic? would that satisfy you?
Critical Rationalist:
But you haven’t even BEGUN to do the same for your theory
Critical Rationalist:
Well, it’s not just enough for me (or you) to consider something problematic
Critical Rationalist:
What matters is arguing for their problematic nature
curi:
you seem to think saying stuff i consider bad quality research is a good start. i don't know why you think that should count for a lot.
Critical Rationalist:
You tried, and I responded (my response has been left alone)
Critical Rationalist:
It doesn’t matter what you consider to be bad
Critical Rationalist:
You have to argue that it is bad
curi:
i asked if you could think of reasons your premise is false
curi:
you said yes
curi:
instead of asking for mine
Critical Rationalist:
I criticized that argument
curi:
so we didn't go into those details because you conceded
Critical Rationalist:
The same is true for Einstein’s prediction
Critical Rationalist:
Which Popper thought was a paradigm case of empirical testing
Critical Rationalist:
Again, still waiting
curi:
can you think of reasons that matter that the premise would be false, not just picky logically-possible stuff? this is what i meant in the first place.
Critical Rationalist:
The reasons in the Einstein case also matter
curi:
what is your best argument that the premise is false that you know of?
Critical Rationalist:
There really could be other forces interacting with the curvature of space time
Critical Rationalist:
And don’t forget
Critical Rationalist:
“Picky” isn’t bad
curi:
you're trying to dismiss infinitely many possible objections b/c there are always infinitely many possible objections. this was not the point i was making
Critical Rationalist:
No that’s not the response I made.
Critical Rationalist:
At a later point I will explain my Einstein response again if you wish. For now I have to go. I recommend that you read my Einstein response as I originally put it, and really try to understand it.
curi:
i already know what you're saying but you aren't following me and you keep trying to fix this by explaining CR to me.
curi:
You always need auxiliary assumptions to get from a theory to a prediction (this is well understood in philosophy of science). You then can test those assumptions after
Critical Rationalist:
Note again that you have not even begun to do something analogous for your theory. I think I’ve explained the problem.
curi:
that comment deals with the infinity of possible objections
Critical Rationalist:
But yeah I really do have to go for now. Take a look at the passages about x causing y which causes z
Critical Rationalist:
Your argument against the corroboration of the evo psych theory would work almost exactly the same way against the corroboration of Einstein’s theory
curi:
you don't know what my argument is
curi:
you made incorrect assumptions about it
curi:
i don't have an objection re Einstein. while your assumption contradicts ideas bordering on common sense.
curi:
that's a difference. it's not "something could be wrong" but actual known criticism. like if someone assumed 2+2=5 as a premise, that has known criticism in a way Einstein's premises did not.
curi:
i illustrated this with a toy example where i put an intentionally dumb premise in the middle, but you didn't understand it and also wouldn't followup and try to clear up the issue.
curi:
curi: i'm giving a toy example to discuss a concept. does that make sense to you?
CR: No.
curi: do you know what a toy example is?
CR: [no answer]
curi:
you switch topics frequently without resolving them. however one asymmetry in the discussion is that we've established and mutually agreed that you made mistakes. while you have not established any specific mistake by me.
curi:
my guess is you will lose patience and stop discussing prior to https://curi.us/2232-claiming-you-objectively-won-a-debate
curi:
you will give up without an impasse chain https://elliottemple.com/essays/debates-and-impasse-chains nor provide some other written methodology by which you think you won any specific debate point.
curi:
i hope i'm mistaken about this. i haven't given up. curious what you think about discussion goals like those.
curi:
i think you're overly focused on making inconclusive comments re big picture instead of resolving specific small conversational branches.
Critical Rationalist:
One quick point of clarification. When I said "no" in response to your toy example, I was not saying that I didn't understand your example. I understood your toy example, I just thought it was inadequate as a rival theory to mine.
curi:
i asked a direct question, and you gave a direct answer, but you weren't answering and then ignored me when i tried to clarify further?
Critical Rationalist:
I understand your toy example.
curi:
your prior comments had indicated you did not understand it.
curi:
you kept trying to relate it to homosexuality, which it did not mention.
curi:
and you persisted in that after i clarified that it wasn't related to homosexuality
curi:
I understood your toy example, I just thought it was inadequate as a rival theory to mine.
this statement is self-contradictory. the second half shows you don't understand it.
curi:
b/c it wasn't a rival to yours.
Critical Rationalist:
Ok, I see. So your example is meant to criticize the link between the theory (a gene on the x chromosome causes homosexuality) and the prediction (female relatives of male homosexuals will have more sex partners)
Critical Rationalist:
The link between the theory and prediction is called an auxiliary assumption. Do you know what an auxiliary assumption is?
Critical Rationalist:
You are essentially saying "you haven't corroborated the auxiliary assumption (in this case, that women who want more sex will get more sex as a result)"
curi:
that is not what i'm saying, no
Critical Rationalist:
Ok, please clarify.
Critical Rationalist:
Here is my claim
curi:
i said i think the assumption is bad.
Critical Rationalist:
So you agree that it is legitimate in principle to use untested auxiliary assumptions? You just think this particular auxiliary assumption conflicts with other (well corroborated) theories?
curi:
i didn't say how well corroborated the other theories were. we often use non-empirical criticism, e.g. logical points.
curi:
i agree it's legitimate in principle, but you have to use critical thinking to limit it, not do it arbitrarily.
Critical Rationalist:
Ok sure, so you just think that this particular auxiliary assumption conflicts with other well corroborated OR logically unrefuted theories?
curi:
is this the research you're talking about? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1691850/pdf/15539346.pdf
Critical Rationalist:
Yes. More than one experiment (to the best of my memory) has been done in this area.
curi:
does one of the other papers talk about attraction to males?
Critical Rationalist:
No, strictly speaking they are agnostic to the exact mechanism by which the gene on the x chromosome causes increased female fecundity.
curi:
so the claim you made, as an example of something corroborated, is not part of the research?
curi:
and the assumption i doubted is also not part of the research?
Critical Rationalist:
Strictly speaking, the claim made by the researchers is that the gene on the x chromosome causes homosexuality in males but increased fecundity in females. It is agnostic as to mechanism. The idea that the gene causes increased attraction to males strikes me as plausible. However, if you think the fact that this mechanism is not described by the researchers is a problem, I'm happy to use a different example of a corroborated evo psych theory.
curi:
you're not speaking strictly, though. e.g. you speak of "the gene" but they don't. right?
Critical Rationalist:
Oh yes they have localized a gene
Critical Rationalist:
One second
curi:
https://www.newscientist.com/article/dn6519-survival-of-genetic-homosexual-traits-explained/
Camperio-Ciani stresses that whatever the genetic factors are, there is no single gene accounting for his observations.
is Camperio-Ciani wrong or misreported?
Critical Rationalist:
When I say they have localized a gene
Critical Rationalist:
I do not mean "the" gene that explains homosexuality.
Critical Rationalist:
It is a gene which makes a male more likely to be homosexual.
Critical Rationalist:
Complex traits like homosexuality are polygenic.
Critical Rationalist:
One gene that was localized by this kind of research was Xq28
curi:
is Xq28 a gene?
Critical Rationalist:
yes
Critical Rationalist:
Now, I'm not particularly interested in the details of this example (if it happens to have a false auxiliary assumption, I can give many other examples of corroborated evo psych theories: patterns of male vs female sexual jealousy, sex differences in preference for casual sex)
curi:
https://bmcgenomics.biomedcentral.com/articles/10.1186/1471-2164-7-29
Well known for its gene density and the large number of mapped diseases, the human sub-chromosomal region Xq28 has long been a focus of genome research.
why would a gene contain gene density?
Critical Rationalist:
The point is to say that this is how you are supposed to test theories.
Critical Rationalist:
Make a theory, use some auxiliary assumptions (you still have not indicated if you understand what these are) to form predictions, then test the predictions.
curi:
why would a sub-region of a gene contain at least 11 genes?
Critical Rationalist:
There are competing definitions of "genes". I found one article which said "the study hypothesized that some X chromosomes contain a gene, Xq28, that increases the likelihood of an individual to be homosexual."
Critical Rationalist:
Maybe that article had a different definition of gene, maybe it was a simple mistake.
curi:
what definition of gene are you using, and what is a competing definition that you disagree with?
Critical Rationalist:
I have no opinion on the definition of gene. I will use whatever definition you want me to.
Alisa:
https://en.wikipedia.org/wiki/Xq28
Critical Rationalist:
It makes no difference to the content of the prediction whether we define Xq28 as one gene or 11 genes.
curi:
you aren't reading carefully even though i'm talking about details. that's inappropriate to productive discussion
curi:
no one said Xq28 had 11 genes.
Critical Rationalist:
@curi my point is that it does not matter how many genes are in Xq28.
curi:
my point is that you were factually mistaken. i think getting facts and statements correct matters. you don't seem interested. i regard this as an impasse.
Critical Rationalist:
@curi this is not an impasse (in the sense of a deadlock in debate). However many genes you think are in Xq28, I will grant that fact to you.
Alisa:
That is not responsive to his point that you were mistaken and that getting facts and details correct matters.
curi:
the impasse isn't the number of genes in Xq28. i don't think you understood what i said. your repeated misreadings of what i say, along with lack of clarifying questions or interest, is a second impasse.
Critical Rationalist:
What is the impasse? Please explain it to me.
Alisa:
you were factually mistaken. i think getting facts and statements correct matters. you don't seem interested.
curi:
your disinterest in focusing on making correct statements and caring about errors in them.
Critical Rationalist:
Ok, I also want to make correct statements. If you know how many genes are in Xq28, I will be happy to find out (so I can be correct).
curi:
do you agree that you made an error?
Critical Rationalist:
This particular fact (how many genes are there) does not have relevance to the debate (unless you can show otherwise). But I agree that correct statements are better than incorrect ones.
Critical Rationalist:
Which statement of mine was an error?
curi:
that's a yes or no question.
Critical Rationalist:
If you can show me which statement was an error, I'll agree.
Critical Rationalist:
I am a fallibilist, so I expect that sometimes I will make errors.
curi:
that's not an answer to the question. your unwillingness or inability to understand and answer questions is an impasse.
JustinCEO:
Taking a position on whether you made an error is sticking your neck out. If you're wrong in your evaluation that would warrant further analysis re why you missed that error
Freeze:
are these impasse chains in action?
Critical Rationalist:
Ok, I don't think I made an error.
Critical Rationalist:
Show me one, and I'll concede that I made one.
curi:
@Freeze not exactly, no clear chains
Freeze:
ah
Alisa:
is Xq28 a gene?
yes
For one
Freeze:
just unconnected impasses
curi:
[4:40 PM] curi: is Xq28 a gene?
[4:41 PM] Critical Rationalist: yes
Freeze:
still interesting
Critical Rationalist:
Some people define it as a gene.
curi:
what definition of gene are you using, and what is a competing definition that you disagree with?
Critical Rationalist:
But actually the more reputable sources (from my glancing) define it as having more genes.
Critical Rationalist:
So sure, I concede that was an error. It is actually many genes.
Critical Rationalist:
A "gene complex"
curi:
why did you change your mind even though i didn't give new information?
Critical Rationalist:
Because you pointed out an error that I made.
curi:
i don't think you understood my question
curi:
when you say "you pointed out an error that I made" you seem to be referring to me giving you new information, contrary to the question.
Critical Rationalist:
I couldn't think of any error I made.
Critical Rationalist:
Alisa pointed one out. And I double checked the sources, and confirmed that it was an error
curi:
You forgot about the issue of whether Xq28 is a gene when evaluating and making a claim re whether you had made an error?
Critical Rationalist:
I also wasn't sure earlier because one source said it was a gene
Critical Rationalist:
But the more reputable sources said it was multiple genes
Critical Rationalist:
So I now concede that it was an error
Critical Rationalist:
These are all fair things to be saying.
curi:
i've asked a yes or no question. i'm still waiting for an answer.
Critical Rationalist:
No, it was not in my mind when you asked about whether I had made an error.
curi:
do you mean "yes"?
JustinCEO:
curi:
You forgot about the issue of whether Xq28 is a gene when evaluating and making a claim re whether you had made an error?
Critical Rationalist:
Sorry, yes.
curi:
it's hard to organize and make progress in discussions with frequent errors. because you're talking about one thing and then an error comes up, and you talk about that, and another error comes up. this can happen a lot if the rate of errors is faster or similar to the rate of error corrections. does this abstract issue make sense to you?
Critical Rationalist:
Yes, the abstract issue makes sense to me. I concede the error, and agree that errors make conversations harder. I said Xq28 is one gene when it is in fact many genes. You are free to continue with any line of argument you had.
curi:
ok. i appreciate that. many people quit around here if not earlier.
it's hard to answer some of your complicated, bigger picture questions and points, in a way that satisfies you, when communication about some of the smaller chunks is breaking down often. that's my basic answer re AWALT. does that make sense?
this discussion community has been trying to examine issues rigorously for 25 years. it has developed some complicated ideas about how to do that. if you're interested in learning the methodology, that'd be great. if not, it's possible to have discussions but expectations have to be lower. do you think that's fair?
Critical Rationalist:
I think this will be my last comment for the night. Given that I only have two days left before I leave my family for Georgia, it might be my last comment for a while. Here is why I do not think that is fair. Debates about evo psych have also gone back decades (longer than 25 years). There are also complicated ideas about how to do that (in fact, more complicated: they involve statistical analysis and genetics). Despite the fact that the debates about evo psych theories have been going on longer, and have more complicated methodologies, I was still able to explain (in plain English) what observations would falsify specific evo psych theories. I think it is reasonable to expect you (as a Popperian) to be able to do the same. You have a hypothesis (no women are immune to PUA) and you have been unwilling to explain what data would falsify it. You have said (and I agree) that some PUA hypotheses are testable, but I started this conversation by contesting that particular claim (i.e. the AWALT claim). I don't think there are any evo psych hypotheses for which I could not explain (in plain English) what evidence would count as falsification of the hypothesis. But if there were, I would just admit "yes, that particular hypothesis is not falsifiable". I am not claiming in any way to have "won the debate". I view this more as a conversation. I am merely saying that @curi held me to a different standard. He extensively criticized my examples of how to test evo psych hypotheses, but was unwilling to give his own example of how to test the hypothesis which was the subject of debate. I could not even begin criticism of his position, because he flatly refused to answer the crucial question.
curi:
your explanations re evo psych contained errors which have not yet been untangled, so you did not yet succeed at doing that.
curi:
that = " was still able to explain (in plain English) what observations would falsify specific evo psych theories."
curi:
you're also comparing research into evo psych using standard methodology with research into discussion methodology. and doing it after i just gave several demonstrations of how your discussion contributions were inadequately rigorous, hence my suggestion that better methodology is needed to deal with that ongoing problem.
curi:
the standard i was trying to hold you to was not being mistaken. i do hold myself to that too.
curi:
i did not agree to debate AWALT with you (you call it the subject of the debate) and you didn't seem to listen to me about that.
JustinCEO:
CR seems more interested in showing curi has some purported double standard than in trying to achieve mutual understanding
curi:
AWALT is an all X are Y claim, similar to "all swans are white". you can test it by looking for counter examples. in order to judge what is a counter example you have to learn and use the redpill/PUA theoretical framework to interpret the data. i don't know a simple summary to redpill a bluepill person in a couple paragraphs so that they could do that, especially not when they're argumentative and not asking questions to learn about PUA.
curi:
the data is much messier than physics b/c e.g. no PUA has a 100% success rate
curi:
so 10 guys can try to get a girl using their flawed PUA, all fail, and that doesn't imply she's a NAWALT
curi:
this is dangerous b/c ppl could make endless excuses to get rid of counter examples, as CR said. nevertheless it's the situation. i asked if he knew of that danger happening but he didn't. which makes sense because he's unfamiliar with the literature and not in a position to join the AWALT debate.
curi:
AWALT is not 100% rigorously defined. worse, it's considerably less airtightly specified than many other existing ideas. nevertheless it does have some content, and if data started clashing with it in big ways the reasonable people would start changing their mind.
curi:
people mean stuff by it that has limited flexibility
curi:
but no single field report could refute AWALT
curi:
no more than observing one family for one day could refute the idea that they are coercive parents.
JustinCEO:
do lesbians use PUA?
curi:
no idea
JustinCEO:
i wondered cuz lots of lesbian relationships fall into gendered patterns where there's like the boy lesbian and girl lesbian
JustinCEO:
so i was wondering if it'd work for the boy lesbians
curi:
you could try to RCT whether PUAs have better pickup results on average than ppl without PUA training, but that won't tell you whether AWALT or NAWALT.
curi:
you can't directly test whether a particular woman is a NAWALT b/c any number of PUA attempts failing on her is compatible with AWALT
curi:
that doesn't mean those failures would be meaningless. we'd try to come up with explanations of the data.
curi:
it could indicate e.g. a systematic error in PUA training that many PUAs fail on that woman. which would be unsurprising. no one thinks PUA is perfect as understood today. the issue is whether that kind of stuff works.
curi:
CR was uninterested in the problem situation this debate stems from
curi:
which is ppl actually want to find a NAWALT and other ppl think it's a hopeless quest
curi:
this has consequences like MGTOW, which believes AWALT and consequently rejects women
JustinCEO:
ya i mentioned that earlier i think re: wanting to find NAWALT
curi:
the actual nature of the debate is kinda like, stylized:
MGTOW: u'll never find a unicorn, RIP
Joe: my gf is GREAT, why u dissing her? i totally understand that redpill is right in general and > 90% of girls are like that, but she's special, just look harder
MGTOW: link me her facebook
Joe: ok
MGTOW: here are 8 examples of AWALT behavior i found on her wall
Joe: fuck you
curi:
then, after consistently dealing with challenges like this, CR comes along and says AWALT theory is not subject to empirical testing.
JustinCEO:
i think if u assume PUAs are like misogynists or something (which is a conventional view) you would have the opposite expectation, that they want to say AWALT
curi:
b/c it's hard to tell him how to find AWALT behaviors on an FB page
curi:
there's no simple formula for that
curi:
i can't write a bot to scrape that data
curi:
i can't get that data from a survey
curi:
it takes creative, critical thinking
curi:
note this debate is btwn ppl who think redpill is 99% right and ppl who think 100%, NOT btwn ppl who think redpill is 50% right or 5% right or 0% right. the debate with them is different. CR didn't seem to understand this when i explained earlier. but then blames me for not being able to give a short explanation, just cuz he didn't understand the one i gave? meanwhile he did not give one that satisfied me, but claimed asymmetry b/c he gave one!
curi:
anyway the big thing, to me, is he makes lots of mistakes, he admits he makes lots of mistakes, he ought to be super interested in talking with someone who can catch and correct his mistakes (and who he can't do that to, as yet). but it's not clear that he is.
curi:
and now he's leaving, probably for a while, without trying to do those things or explain alternatives or concede he has a lot to learn and express interest in learning it.
curi:
[4:26 AM] GISTE: Before I address your question, I have a point to make and a clarifying question about what you said:
(1) I think you’re implying that all of your previous comments are compatible with Popperian epistemology. I’ve been reading your comments and I disagree with many of them re epistemology. So that means that you and I disagree on what Popperian epistemology really is, how it works, and how it applies to the non-epistemology topics we’re discussing.
(2) To clarify, are you saying that you have to look at data (observe) before coming up with a theory? @Critical Rationalist
[4:31 AM] Critical Rationalist: I don’t hear a question from 1).
This is a misreading by CR. GISTE clearly stated that he had a point and a question, then provided a point and a question. CR assumed, not only without it being said but directly contrary to the text, that there would be two questions.
curi:
[5:00 AM] GISTE: (1) You’ve seen me disagree with Popper on stuff re epistemology, so I don’t get the “sacred text” comment. (Recall that we talked about Popper’s critical preferences idea and I gave you a link to a curi blog post that explains that Popper’s idea is wrong and incompatible with the rest of Popperian epistemology, while curi’s correction to that idea is compatible with the rest of Popperian epistemology.)
(2) Ok. I recommend that you engage with @curi or @alanforr about this because they are experts on this and I’m not. For now I’ll explain something that I’m not sure will help you understand my view. (This is my vague memory and these are not actual quotes.) Popper once gave a lecture where he said to his students “Observe”. The students said, “observe what?” Popper replied with something like, “Exactly, you have to have an idea (theory) about what to observe before you can observe”. This was to point out that theory always comes before observation.
(3) selective pressures cannot “give rise” to anything. I tried to come up with an interpretation of your question that makes sense from my perspective (which includes my understanding of epistemology) but I did not succeed. I could try to come up with a question that tries to get at what I think you’re trying to get at, and then answer that question. So here’s my question: what selective pressures could have possibly selected for the genes that made flying dinosaur bones lighter? Answer: flying dinosaurs that had genes that made their bones lighter resulted in those dinosaurs being able to fly more, higher, longer, etc, which resulted in those dinosaurs having more grandchildren than compared to the dinosaurs that had rival genes.
@Critical Rationalist
[5:04 AM] Critical Rationalist: I agree, the “sacred text” comment was unnecessarily provocative. The passage you cite is roughly what I had in mind.
This is an error because GISTE did not cite a passage.
curi:
[6:18 AM] Critical Rationalist: Children can only learn language during a certain period of time. If they try to learn a language after a certain age, it is virtually impossible to attain full fluency. Furthermore, learning a language as an adult is incredibly effortful, whereas doing so as a child is effortless.
How do you know it's effortless for children?
The reason you think this data contradicts my view is that you don't know what my view is. You're trying to argue with it before understanding the basics. This isn't an issue we overlooked.
These data seem best explained by specialized language acquisition capacities (which only function for a limited time), not a general learning capacity.
this claim contradicts some theories in epistemology, which are in BoI, which CR hasn't learned or found any flaw in. if theory and data are incompatible you have to say "i don't know", but the data is compatible, the only issue here is the theory-violating explanation of the data seems more intuitive.
curi:
[6:30 AM] GISTE: AFAIK = as far as i know
[6:31 AM] Critical Rationalist: Lol typed it into google incorrectly
here CR thinks making an error is funny.
Measure the degree of corruption by society (however you define it) and see if it predicts the difficulty of learning language.
[9:19 AM] Critical Rationalist: Are you willing to put your money where your mouth is and make that prediction?
CR doesn't understand the things he's trying to argue with. you can't just measure that. our concept doesn't map to a measuring device. he's dramatically underestimating the complexity of the human condition by proposing (in later messages) very naive, simplistic proxies for corruption which are very dissimilar to our thinking on the matter.
more broadly he's dramatically downplaying the role of philosophy and critical thinking compared to KP and DD.
curi:
[9:54 AM] Critical Rationalist: Both my theory and his lack theoretical specificity
this comment on me comes from misreading what i actually said. he's glossing over the details and specifics of the points i made. could go through it in detail but he won't thank or reward me, or start trying to learn FI.
[10:26 AM] Critical Rationalist: There is no account of how a universal classical computer could creatively conjecture new explanations
KP gave one. P1 -> TT -> EE -> P2. Also known as "evolution" or "conjecture and refutation". that doesn't mention computers. is the problem/objection related to some imagined limit of computers? what?
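Read literally, the P1 → TT → EE → P2 schema is a generate-and-test loop: conjecture a tentative theory, try to refute it, and keep the record of eliminated errors. A minimal sketch under that reading (the toy "hidden number" problem and all function names are illustrative, not anything from the discussion):

```python
import random

def conjecture_and_refute(criticize, guess, max_tries=1000, seed=0):
    """P1 -> TT -> EE -> P2: repeatedly conjecture a tentative theory (TT)
    and attempt error elimination (EE) until a conjecture survives criticism."""
    rng = random.Random(seed)
    refuted = set()  # record of eliminated conjectures
    for _ in range(max_tries):
        theory = guess(rng, refuted)   # TT: a new tentative theory
        error = criticize(theory)      # EE: try to refute it
        if error is None:
            return theory, refuted     # survives criticism (for now)
        refuted.add(theory)
    raise RuntimeError("no conjecture survived criticism")

# P1 (toy problem): which number in 0..99 is the hidden one?
hidden = 42
criticize = lambda t: None if t == hidden else "refuted by test"
guess = lambda rng, refuted: rng.choice(
    [n for n in range(100) if n not in refuted])

theory, refuted = conjecture_and_refute(criticize, guess)
```

Nothing in the loop depends on what substrate runs it, which is the point of the question: the schema itself doesn't mention computers, so an objection would need to name some specific limit of computation.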
Critical Rationalist:
@curi as I said, I’ll be stepping out for a while. I’ll just say one thing. Since you’re holding my words to a very high standard, it is only fair for the same standard to be applied to you.
Critical Rationalist:
The reason you think this data contradicts my view is that you don’t know what my view is.
Critical Rationalist:
Did I say that this data contradicts your view?
curi:
Freeze:
"very powerful evidence against curi's account"
Freeze:
the evidence is the data?
Critical Rationalist:
Does “powerful evidence against” mean the same thing as “contradicting”?
Freeze:
i think so
curi:
do you think that data is compatible with my account? why, then, would it be very powerful evidence against? i myself think that the data, as you present it, refutes my account.
Critical Rationalist:
I do not think data needs to logically contradict a theory to be evidence against it. The point is that you misrepresented what I said. I elsewhere explained that what I meant was that the data are better explained by an alternative model.
Critical Rationalist:
That might not be your epistemology, but you made an error when presenting my position.
curi:
in the quote, i didn't make a statement about what you said.
Critical Rationalist:
You presented my position. You said > The reason you think this data contradicts my view is that you don’t know what my view is.
curi:
since your presentation of your data does contradict my account (IMO), and you thought it was strong evidence against, and Critical Rationalism considers evidence against something to be contradicting data, and you said you were a Critical Rationalist, i made a reasonable guess given incomplete information.
Critical Rationalist:
Ok, but it was an error nonetheless
curi:
no, making a reasonable guess using incomplete information is not an error. it's a correct action.
Critical Rationalist:
You are presupposing an incorrect definition of error. Error means mistake or false statement
Critical Rationalist:
Error: “the state or condition of being wrong in conduct or judgment.”
curi:
was my conduct wrong?
Critical Rationalist:
No, your statement was wrong
curi:
was my judgment wrong, meaning i should have made a different judgment in that situation?
Critical Rationalist:
Wrong as in factually wrong, not ethically wrong
JustinCEO:
That's the very definition I would have chosen to contradict u CR
Critical Rationalist:
No, it was wrong in the sense that it was factually incorrect
curi:
so i didn't make a conduct or judgment error?
Critical Rationalist:
The first definition of wrong is “not correct or true; incorrect”
Critical Rationalist:
Your statement was incorrect, therefore it was an error
JustinCEO:
The very definition that you first chose doesn't talk about factual correctness
curi:
you're moving the goalposts
curi:
and what do you think is evidence against a theory which doesn't contradict it? how does that work?
Critical Rationalist:
Furthermore, if we accept your definition of error, then my claim that Xq28 was a single gene was not an error: it was a reasonable guess based on incomplete information (I looked at a source which said it was a gene)
curi:
i don't agree
Critical Rationalist:
That’s beside the point
Critical Rationalist:
The point is, your statement was incorrect. It was an error
curi:
why did you manage to find a source that's worse than wikipedia or reading link previews on google?
JustinCEO:
CR imho u r scrambling badly while trying to catch curi out
curi:
seems like an error
JustinCEO:
You should be less adversarial
curi:
and why did you double down on it by making a claim re differing definitions of gene while being unable to provide any definitions?
JustinCEO:
Night
Critical Rationalist:
I’m not revisiting it in detail
curi:
i think if i restate something you communicated, and then you call it factually false, the error is yours for communicating it, not mine for talking about your views in terms of what you said.
curi:
further, you're claiming i'm factually wrong but have yet to explain the real state of affairs, as you claim it to be, which differs from what i thought it was.
Critical Rationalist:
Then the same is true for you
JustinCEO:
There he goes again
curi:
i haven't yet explained that Xq28 is more than one gene?
Critical Rationalist:
You accused me of not understanding your view. In that case, the fault is yours
Critical Rationalist:
If we use the same standard
curi:
where did i miscommunicate?
Critical Rationalist:
Where did I miscommunicate?
curi:
i told you where i got my interpretation of your position. you have yet to point out any error in my way of reading.
curi:
did you forget?
Critical Rationalist:
I did not forget. It is a rhetorical question. I do not believe that I miscommunicated
curi:
i gave an account which you have not responded to
curi:
so that's an asymmetry
curi:
asking where you miscommunicated, while remembering that i already told you and it's pending your reply, is unreasonable
Critical Rationalist:
I believe that data can decide between two theories when one theory predicts it, but the other does not.
Critical Rationalist:
That is not the same as the data contradicting the latter theory, but it does constitute evidence against it
curi:
i think you're too tilted to continue, and are just trying to win a pedantic point to save face because you lost a bunch of points, and that you can't actually win but are just going to keep throwing nonsense at me without regard for the quality of your arguments, and this is an impasse.
Critical Rationalist:
No, I just explained what I mean by evidence being against a theory without contradicting it
Critical Rationalist:
Which is what you asked for.
curi:
asking where you miscommunicated, while remembering that i already told you and it's pending your reply, is unreasonable
curi:
among many other things
Critical Rationalist:
Alright, this will actually be my last comment. The reason i did this little exercise is because your own accusations of errors are levied against me when they were clearly good faith misunderstandings. For example, I admitted that I mistyped something into google and you called this an error. I am showing why that approach is problematic. I think you’re projecting. You accusing me of being combative is odd coming from someone who criticized me for mistyping something into google.
curi:
asserting they were "clearly good faith" is an unreasonable way to speak to me. you can't reasonably expect me to agree with that.
Critical Rationalist:
Do you think I mistyped something into google in bad faith (I was referring to the errors you pointed out in your volley, eg when I mistyped something into google, or when I said giste “cited” something when he only alluded to it. Those were clearly not in bad faith).
curi:
someone who criticized me for mistyping something into google.
i didn't do that. you're lost b/c you keep misreading things and getting facts wrong. then you build conjectures using those errors.
curi:
Alright, this will actually be my last comment.
false
curi:
(I was referring to the errors you pointed out in your volley,
you didn't specify a limit on which errors from today you meant.
curi:
i was criticizing you for laughing, not for the typo.
curi:
i was criticizing your attitude not your mistyping. again you're too tilted, incompetent or whatever to read.
curi:
that's common and fixable if you want to work at improving. it takes effort to gain skills. but it doesn't sound like you want to make progress.
curi:
re epistemology, does he mean that observing my desk is powerful evidence against evolution, which did not predict it? or only if i propose a theory of intelligent design which includes a prediction of my desk?
curi:
i wonder why he thinks "effortless" learning doesn't contradict my model. does he know that contradicts Popper?
curi:
he thinks my model merely fails to predict that some learning will be effortless? odd misconception.
curi:
conjecturing and refuting is effort.
curi:
there's no actual data that anyone learned anything effortlessly.
curi:
he was ignoring that my model interprets the data differently
curi:
[1:20 PM] Critical Rationalist: Let me try to spell out the contradiction with a concrete example
curi:
there's also the dictionary meanings
curi:
I'm not contradicting you, I'm just saying you're totally wrong. - CR, 2020
curi:
A general learning capacity would work equally well through the life span, but language acquisition works optimally during a particular period of life
isn't he saying: curi's model would predict X, but the data is Y. isn't he referring to contradiction?
curi:
i still read this as a misprediction issue where my model allegedly differs from empirical reality, and i think he was being dishonest to try to catch me in an error.
curi:
he wasn't talking about something where my model has no predictions, so that was an unreasonable elaboration. he gave a case which, besides the direct problems with it, doesn't apply here.
curi:
he had just stated a prediction himself (which is correct as a first approximation, though fails to consider some factors)
curi:
it was a poor claim about what my model predicts, but he did make such a claim and contradict it.
curi:
right after mentioning something, which i highlighted, that does contradict my model (the idea of effortless learning, which tbh i don't think any serious school of thought claims).
curi:
i don't think he thought his point through beyond his initial statement that he hadn't said contradict, and i said contradict
curi:
but he wasn't even paying enough attention to notice i didn't say he said that word.
curi:
i was describing his thinking, not making statements re his word use
curi:
note that none of the errors he made were rescuable by saying e.g. "oh i was speaking loosely, and reasonably, and meant..."
curi:
no additional clarifications of his statements would help them
curi:
they were actually wrong
curi:
it wasn't stuff like typos where he'd say "oh i didn't mean that, that text doesn't represent the ideas in my head perfectly"
curi:
they were all substantive thinking mistakes
curi:
he's partly trying to smear my criticism by making low quality criticism and then calling it parallel.
curi:
i wasn't trying to hurt him by correcting him about several things in a row. in retrospect i did hurt him. i avoided those sorts of corrections for quite a bit of discussion b/c i know most ppl dislike them and can't handle them, and he broadcast plenty of the usual signs that he would dislike it. however, he kept pushing me in picky ways, trying to get more details, etc. he was basically bluffing aggressively by pretending he wanted that sort of discussion to pressure me. he thought it was a game of chicken whereas, actually, i simply can discuss carefully and rigorously.
curi:
he pretended he was OK with it at first, and pretended it had been successful, but after these later comments he clearly wasn't.
curi:
he interpreted correction re social status and wanted to do this back to me:
curi:
He had turned to go. Francon stopped him. Francon’s voice was gay and warm:
“Oh, Keating, by the way, may I make a suggestion? Just between us, no offense intended, but a burgundy necktie would be so much better than blue with your gray smock, don’t you think so?”
“Yes, sir,” said Keating easily. “Thank you. You’ll see it tomorrow.”
curi:
- FH
curi:
but i didn't want to let him b/c he accused me of an intellectual error instead of using something unimportant to save face with
curi:
he wanted to save face in a more substantial way that denied the meaning of what had happened, as well as detracted from my intellectual reputation, whereas Francon didn't do that, he was just saying he's not a total pushover and he's still the boss.
curi:
both of which are true
curi:
anyway i didn't offer him a way out where he gets to be a competent person capable of rigorous intellectual discussion with an adequately low error rate to make progress. i don't think he's there yet. but he's too attached to already being there to try to fix it, so he's maf.
curi:
by trying to tear me down he was trying to show my criticisms were trivial and unimportant, no one is immune to that standard of pedantry, no one lives up to the standards of competence i propose, etc.
curi:
but when he tried to have that discussion, he was tilted to the point of making a lot more errors than before
curi:
and his judgment of what point he could safely win was grossly unreasonable
curi:
b/c he wasn't updating his thinking regarding the new info he had. he just kept trying to do what worked in the past.
curi:
sadly his career is posturing and social climbing re this stuff, he's really invested in that game
curi:
mb he'll come back and say i'm making erroneous assumptions, he's going to be a rich socialite, the phil MA with TA work is just a hobby
curi:
the thing i was actually trying to communicate re his thoughts was something i thought his perspective (as judged by his msgs) was not taking into account.
curi:
when he said Xq28 is a gene, and doubled down on it, he was trying to say it is in fact a single gene. which is wrong.
curi:
he was saying this in service of his claim that he was speaking strictly correctly
curi:
he chains his errors together – defending each with a new one
curi:
they aren't random. they're systematically biased
curi:
ppl don't like being outclassed. it's so fukt. i did like it when i talked with DD initially.
curi:
he's still in school and i've been a professional philosopher for a long time, and i have the best education/credentials in the field, but he can't take losing to me. he can only take (maybe) losing to ppl who he perceives as higher social status than he perceives me.
curi:
he did not discuss his social status judgments and their accuracy or relevance
curi:
the alleged asymmetry re AWALT and evo psych was interesting
curi:
i gave a short statement which he didn't accept. he gave one that i didn't accept.
curi:
the asymmetry was that i accepted that he hadn't accepted mine, and talked about how to solve this problem, how to make progress, what can be done. meanwhile, he did not accept that i hadn't accepted his.
curi:
so his ideas are better than mine because he denies reality.
curi:
he repeatedly tried to invoke this asymmetry, as if i'd accepted his examples in some significant way, when i hadn't.
curi:
he like couldn't face that his short, simple summary info was not convincing to me.
curi:
it works on everyone else!
curi:
despite the fact that he doesn't know the basic facts of the topic
curi:
which are, in his experience, not relevant to getting most ppl to agree that he's clever.
curi:
he thinks everything in evo psych is readily testable. but how would you test whether being more attracted to men in general leads to more children? survey questions will not measure degrees of attraction accurately. how does anyone know how the attraction levels in their head compare, on average, to those of other people? his general policy, which we saw re measuring mental corruption, was to just use terrible proxies to measure things cuz testing > not testing.
curi:
it's bad enough trying to survey to accurately measure a mental state that we have no good way to quantify. it's much worse trying to get people to make relative comparisons between their mental states and other people's non-quantified mental states.
curi:
when we quantify attraction normally we do it relatively to our own experience. i was much more attracted to sue than sarah.
curi:
ppl will pick words to communicate. they will say "i am super attracted to Nadalie". but this reflects 1) relative comparisons to their other attractions 2) social incentives to brag about this, play it up or down, etc. 3) how much they use strong terms in general. and, ok, 4) some crude estimates re behavior. e.g. they were willing to put effort into a date, so they should be using stronger language than someone who isn't putting in effort. roughly like that.
curi:
these behaviors are affected by tons of factors other than attraction.
curi:
including: attraction can result in putting in less effort b/c of playing hard to get
curi:
this also all neglects different types of attraction. treats it as a single trait which it's really not.
curi:
this was covered in BoI re happiness
curi:
The connection with happiness would still involve comparing subjective interpretations which there is no way of calibrating to a common standard
curi:
etc
curi:
So how does explanation-free science address the issue? First, one explains that one is not measuring happiness directly, but only a proxy such as the behaviour of marking checkboxes on a scale called ‘happiness’. All scientific measurements use chains of proxies. But, as I explained in Chapters 2 and 3, each link in the chain is an additional source of error, and we can avoid fooling ourselves only by criticizing the theory of each link – which is impossible unless an explanatory theory links the proxies to the quantities of interest. That is why, in genuine science, one can claim to have measured a quantity only when one has an explanatory theory of how and why the measurement procedure should reveal its value, and with what accuracy.
curi:
but he reads BoI, likes it, doesn't notice it contradicts a field he likes, doesn't notice the field in general has no rebuttal, and then is surprised when a DD colleague doesn't make concessions re his claims about it
Critical Rationalist:
There is a lot to talk about in your last volley, including some very important issues related to philosophy of science. May is when my upcoming semester in grad school ends. When I come back, I might return to those issues.
But there is one distinction I want to make. It will be helpful when you and I have future conversations. There is a difference between not addressing something and refusing to address something. For example, you said “he did not discuss his social status judgments and their accuracy or relevance”. This is me not addressing something. I agree that there are things I did not address.
However, this is normal. For example, here is one question of mine that you never answered:
The link between the theory and prediction is called an auxiliary assumption. Do you know what an auxiliary assumption is?
Make a theory, use some auxiliary assumptions (you still have not indicated if you understand what these are) to form predictions, then test the predictions.
Now, if I had failed to answer a question two times in a row, you would have been very critical of me. But again, that is still just not addressing something. When you failed to answer my question about auxiliary assumptions, I decided to be charitable and assume you had just not gotten around to it (you are free to answer now if you want). I would never criticize someone for simply not addressing something (as you did with the auxiliary assumption question). In a conversation this complex, people will sometimes get sidetracked, or other things happen.
It is not reasonable to condemn someone for not addressing something. What is reasonable is to expect people to not flatly refuse to address something. A blanket refusal to answer a question (i.e. a statement to the effect of “no, I will not answer your question”) is a hindrance to progress in a conversation. Crucially, at no point did I do this.
jordancurve:
It is not reasonable to condemn someone for not addressing something.
Unless I missed it, you didn't quote anyone doing this.
curi:
https://my.mindnode.com/tvuTuLmRpf7YbREDvBAhKDoFvi4wkBcPfDXje3bB @Critical Rationalist (should work on desktop. if on android, ask for a pdf export. if on ios, download the free mindnode app and open in that)
jordancurve:
I think it would be clearer to refer to him as CRist and reserve CR for critical rationalism.
curi:
did i refer to him as CR?
curi:
oh the title
curi:
i didn't even think of the filename as something that would be shared
curi:
it's not part of the tree
Critical Rationalist:
I was trying to explain that evo psych makes testable predictions. How would it help my case if Xq28 were a gene instead of a series of genes? I grant that it is a set of genes. Does that show that evo psych is not making testable predictions? If not, what does the fact that Xq28 is a set of genes show?
curi:
that is non-responsive to BoI c12
curi:
it's also non-responsive to the biased errors problem
Critical Rationalist:
How is it a biased error?
Critical Rationalist:
Does this error favour my side?
curi:
it says how in the tree
Critical Rationalist:
@curi did you understand my distinction between "not responding" and "refusing to respond"?
curi:
yes
Critical Rationalist:
I read the purple part of the tree.
Critical Rationalist:
I did say when explaining the evo psych theory that it talked about a specific gene. It in fact was about a set of genes. But that is still a testable prediction. It doesn't help my case to say it is one gene: saying "a set of genes" is still a testable prediction.
Critical Rationalist:
that is non-responsive to BoI c12
Critical Rationalist:
I agree. I haven't responded to that yet, just like you have not responded to the auxiliary hypothesis question. Note again the difference between "not responding" and "refusing to respond".
Freeze:
I think non-responsive in this context means something more like, This doesn't address the arguments that criticize it or offer better explanations
curi:
@Critical Rationalist did you delete messages from the log?
Critical Rationalist:
I deleted one of my messages that said "my last mistake"
curi:
Please don't delete anything here
Critical Rationalist:
Sounds good
Critical Rationalist:
I await a response to my above messages.
curi:
https://elliottemple.com/debate-policy
Critical Rationalist:
Since @curi has shared that tree here, I will say what I said in "Slow". I was trying to explain that evo psych makes testable predictions. I said this to @curi
Critical Rationalist:
I did say when explaining the evo psych theory that it talked about a specific gene. It in fact was about a set of genes. But that is still a testable prediction. It doesn't help my case to say it is one gene: saying "a set of genes" is still a testable prediction.
Critical Rationalist:
@curi has not responded in "slow". So I'll ask the question again here.
Critical Rationalist:
How would it help my case if Xq28 were a gene instead of a series of genes? I grant that it is a set of genes. Does that show that evo psych is not making testable predictions? If not, what does the fact that Xq28 is a set of genes show?
jordancurve:
How would it help my case if Xq28 were a gene instead of a series of genes?
It would help the case that you are familiar enough with the topic to discuss it without making blatantly false statements.
jordancurve:
Does that show that evo psych is not making testable predictions?
No, that's in BoI ch. 12.
jordancurve:
what does the fact that Xq28 is a set of genes show?
See above.
Critical Rationalist:
@jordancurve Does it have any relevance to my claim that evo psych makes testable predictions? What matters is not how familiar or smart I am, what matters is the ideas I put forward.
jordancurve:
Does [the fact that Xq28 is not a gene] have any relevance to my claim that evo psych makes testable predictions?
jordancurve:
Not that I know of.
Critical Rationalist:
The claim that evo psych makes testable predictions is what I was arguing for.
Critical Rationalist:
So you don't know of any way that my error was relevant to that^ claim.
jordancurve:
No, and I don't think anyone said your error was relevant to that claim.
Critical Rationalist:
In slow, this conversation happened
Critical Rationalist:
I asked this:
Critical Rationalist:
How is it (my gene mistake) a biased error?
Does this error favour my side?
Critical Rationalist:
@curi said this
Critical Rationalist:
it says how in the tree
jordancurve:
Indeed.
Critical Rationalist:
That was a direct response to me.
Critical Rationalist:
So, he thinks that this error favours my side.
jordancurve:
Yes.
jordancurve:
One of your "sides", to be more precise.
Critical Rationalist:
Please explain.
jordancurve:
It says so right in the purple node of the tree!
jordancurve:
Do you want to try to re-read it once more before I explain it?
Critical Rationalist:
But it does not favour my side in the sense that it shows that evo psych is testable.
jordancurve:
No it doesn't, but no one (except you?) thought it did
Critical Rationalist:
Xq28 is a set of genes. Granted. Does that mean evo psych isn't testable?
Critical Rationalist:
Does that count against my claim that evo psych is testable?
jordancurve:
I think I answered this earlier. No. That argument comes from BoI ch 12
Critical Rationalist:
Good.
jordancurve:
Not that I know of, but I'm no expert.
curi:
@jordancurve check IMs
Critical Rationalist:
So my error (claiming that Xq28 is a single gene, instead of a set of genes) does not count against my argument that evo psych is testable.
jordancurve:
Again, not that I know of.
Critical Rationalist:
The BoI ch. 12 argument is an interesting argument, one that I'm willing to answer.
jordancurve:
It counts against your claim that you didn't make any errors.
Critical Rationalist:
Yes 100%
Critical Rationalist:
But surely, what matters is not me, but the ideas I'm putting forward.
jordancurve:
If you make a claim about yourself, then you matter.
Critical Rationalist:
We all agree, don't we, that the ideas are what matter?
Critical Rationalist:
Yes, I've retracted that claim.
JustinCEO:
Truth is what matters. Errors lead one away from truth and have to be dealt with in a serious and systematic way in order to get at the truth effectively. Concessions and retractions of errors are not a serious and systematic solution to the thing giving rise to the errors in the first place. The errors CR has made in the discussions with curi are not mere unavoidable byproducts of human fallibility and will sabotage making discussion progress if not rigorously and thoroughly addressed
curi:
https://curi.us/2190-errors-merit-post-mortems
Critical Rationalist:
"Second, an irrelevant “error” is not an error... The fact that my measurement is an eighth of an inch off is not an error. The general principle is that errors are reasons a solution to a problem won’t work."
Critical Rationalist:
That's from @curi's post.
Critical Rationalist:
So, by his standard, this error has to be relevant. It has to be "a reason a solution to a problem won't work". Why does my error qualify as relevant in @curi's sense?
jordancurve:
It's relevant to your claim about not having made an error.
curi:
you don't understand the standard in the post. this is another example of the same kind of lack of rigor that the xq28 error was
Critical Rationalist:
"The small measurement “error” doesn’t prevent me from succeeding at the problem I’m working on, so it’s not an error."
Critical Rationalist:
The problem I was working on was showing that evo psych is testable
curi:
is "is Xq28 a gene?" a problem?
Critical Rationalist:
It was not the problem I was working on, no.
curi:
when i asked that question, and you answered, you were not working on that problem?
Critical Rationalist:
The problem I was working on was "is evo psych testable"
Critical Rationalist:
Not on the problem "is Xq28 a gene".
Critical Rationalist:
That is not a problem I'm working on.
jordancurve:
!
curi:
so your answer that it's not a gene was not an attempt to solve the problem "is Xq28 a gene?"?
JustinCEO:
Problems have subproblems and you can make mistakes at the subproblem level and that affects your ability to claim you have solved the higher level problem
JustinCEO:
Like if I make an addition error in a complicated mathematical expression
JustinCEO:
Boom answer wrong
Critical Rationalist:
No, it was an attempt to solve the problem of whether evo psych is testable. I try to answer all questions when having a conversation about a topic.
Critical Rationalist:
So, by your standard, the gene mistake does not qualify as an error.
Critical Rationalist:
Now look. I don't care what you call it.
Critical Rationalist:
Error, mistaken definition, whatever
JustinCEO:
Hang on nobody's conceded
So, by your standard, the gene mistake does not qualify as an error.
Critical Rationalist:
I was trying to argue that evo psych was testable.
Critical Rationalist:
That is the problem we were trying to solve.
JustinCEO:
Don't try to move on before that gets thoroughly resolved
Critical Rationalist:
The problem I was trying to solve was whether evo psych was testable.
Critical Rationalist:
Whether Xq28 is one gene or many genes does not affect THAT^ claim.
curi:
you clearly don't understand what the post means re problems and problem solving. so you haven't understood the standard in the post. that would be ok if you weren't then trying to use your misunderstanding as a bludgeon to win a debating point.
Critical Rationalist:
@curi the post does not define the term "problem" or "problem-solving". The word "problem" only occurs twice.
jordancurve:
It's written for people familiar with CR
Critical Rationalist:
The problem that I was trying to solve was this: "is evo psych testable".
Critical Rationalist:
I am familiar with CR
JustinCEO:
Why didn't CR ask something like "Ok then what am I missing?" re: the post and curi's comments about not understanding the standard
Critical Rationalist:
Because sometimes when I ask @curi a question he refuses to answer.
Critical Rationalist:
But I will try with this one, since you've recommended that I do so.
curi:
http://fallibleideas.com/problems
curi:
among many other things. your denial of subproblems or working on multiple problems at once is contrary to the mainstream, quite bizarre, and not something you can expect to be covered preemptively.
curi:
anyway you interpreted something i wrote, using your intellectual framework assumptions, to conclude basically that i was contradicting myself. the more reasonable conclusion is different framework.
JustinCEO:
Ya I found the replies in that vein shocking
JustinCEO:
Shocking re:
among many other things. your denial of subproblems or working on multiple problems at once is contrary to the mainstream, quite bizarre, and not something you can expect to be covered preemptively.
curi:
among many other things
i meant that the link is one of many pieces of literature.
curi:
I am familiar with CR
right you were familiar enough with CR to know that a question is a type of problem, but some of your other comments had nothing to do with CR
jordancurve:
Because sometimes when I ask @curi a question he refuses to answer.
Yesterday you made a similar claim ("When you [curi] don't answer a question, it makes you look bad") and yet, when challenged, you were unable to quote a single question that curi didn't answer. Has that changed?
Critical Rationalist:
When I first heard about this group, I was excited to talk with other people who were familiar with Karl Popper. Despite being in a masters program in philosophy, I rarely encounter people who know his work closely. But the quality of discourse is on the whole negative (though there have been some exceptions). You have been obsessing over the fact that I said Xq28 is one gene instead of many genes, despite the fact that it is not relevant to the problem I was trying to solve (is evo psych testable). @curi will criticize me for failing to address things (despite the fact that I try my very best to answer every question). When it is pointed out that everyone (including him) sometimes fails to address things, he ignores it. For example, this is the fourth time I have prompted you to answer this question: "do you know what an auxiliary hypothesis is?" And as I have already pointed out several times, when I challenged him to provide a testable prediction that followed from his theory, he refused to do so. He claims that the claim "no women are immune to PUA" is testable and has been subject to empirical tests. However, in order to be an empirical test, it has to be a genuine attempt at falsification. I read @curi's most recent volley on this topic. What a Popperian should be able to say for his theory is this: "if we observe x, then the theory is falsified". In the case of Einstein, he could answer this question concretely: if we see the starlight here, then the theory is falsified. I could do this for evo psych: "if male homosexuals do not invest more in their nieces and nephews, then the theory is falsified".
curi:
you aren't using this method or proposing a different one https://curi.us/2232-claiming-you-objectively-won-a-debate
Critical Rationalist:
@curi said that "any number of PUA attempts failing on her is compatible with AWALT. that doesn't mean those failures would be meaningless. we'd try to come up with explanations of the data." This is exactly the strategy that Marxists and Freudians used (which Popper criticized). When Marxist and Freudian predictions did not come true, they would explain away the apparent falsification. They would systematically protect their theory from refutation. The way to avoid doing this is to specify in advance what observations would count as falsification. @curi has not said what observations would count as falsification. Until he does so, he cannot claim that his theory is testable in a Popperian sense.
Critical Rationalist:
This forum is no longer worth my time. I will be deleting my account. If any of you want to contact me for one on one discussion, please email me at [email protected]
jordancurve:
jfc
curi:
[redpill] rationalization hamster
curi:
he doesn't want to debate to a conclusion in an organized way. he just wants to declare victory and hide.
jordancurve:
C R, you didn't have to go out like that!
Critical Rationalist:
It is too bad. I heard from people who were glad I had joined this group.
curi:
after conceding he made a bunch of errors, and never establishing any error by me, his conclusion is not "wow someone who is better at not making errors than me, amazing!" (which is a part of how i reacted to DD initially), it's just to ignore all the objectively established facts and be [redpill] solipsistic
Critical Rationalist:
I had moments where I enjoyed it too.
Critical Rationalist:
But it is no longer worth my time.
curi:
got any paths forward to go with that?
curi:
if you're wrong, how will you find out?
Critical Rationalist:
Yes, finish my masters degree in philosophy (where peer review is a part of the process of writing, so errors are caught), and then pursue a doctorate degree. That is my path forward. I thought this would be a fun outlet. I was wrong.
Critical Rationalist:
I'm not directing this at anyone personally. You are all free to email me with questions or discussion topics.
curi:
that's not a path forward
JustinCEO:
How will you find out if you're wrong about your judgment of this group and whether it's worth your time
JustinCEO:
Why not try discussing a small discrete and less controversial issue to conclusion instead of giving up totally
Critical Rationalist:
I'll have to live with that. I have ways of spending my time that I know are productive.
jordancurve:
That doesn't sound very critical rationalist.
Critical Rationalist:
My hypothesis that this group is a good use of my time has been falsified by the evidence.
jordancurve:
lol sigh
curi:
there are arguments the ways you're spending your time are not only not productive but counter-productive. you have not refuted them nor cited any refutation, but wish to ignore them with no way to fix it if you're wrong.
jordancurve:
Well, C R, I wish you would just take a break. Don't delete your account. Maybe you'll want to say something else some day. Why not leave the option open.
jordancurve:
Okay, we have your email if we want to contact you in the mean time.
jordancurve:
Like people say "delete your account" but I've never seen someone actually do it.
curi:
[2:20 PM] Critical Rationalist: The BoI chp 12 argument is an interesting argument, one that I'm willing to answer.
I guess that was a lie?
JustinCEO:
😦
curi:
his parting shot included further statements ignoring the existence of those arguments
curi:
as if the state of the debate was me not answering him, rather than us waiting for his answer
curi:
he seems to be criticizing me for admitting duhem-quine applies to AWALT, on the implied basis that he doesn't think it applies to evo psych. he should read more Popper!
curi:
you will notice he has no solutions
curi:
no ideas about how to solve this problem
curi:
no reading recommendations to fix us
curi:
no discussion methodology documents he thinks we should try using
curi:
popper says we can learn from each other, despite culture clash, by an effort.
curi:
but he just gives up with ppl who are willing to try more and in fact are bursting at the seams with dozens of proposed solutions
curi:
but he won't read ours nor suggest his own
curi:
that's a big asymmetry
jordancurve:
My hypothesis that this group is a good use of my time has been falsified by the evidence.
Come on. Really? He has to know, when he's not tilted, that evidence admits of multiple interpretations. Observations are theory-laden.
curi:
that's a bitter social comment which means "these guys aren't adequately falsificationists like real CRs"
jordancurve:
He didn't even seem to try to establish that the rival interpretations of the evidence were false.
JustinCEO:
"fun outlet" sounds like maybe he wasn't expecting tons of pushback and crit, given conventional views on what's fun
jordancurve:
*any rival interpretations
curi:
that's one of his main rationalizations to preserve his pretense of self-esteem
curi:
he didn't quote any unfun msg
curi:
he wanted to use unsourced paraphrases to attack msgs
curi:
[redpill] nothing personal, teehee
JustinCEO:
What are the [brackets] doing there exactly
curi:
tagging the msg. i'm gonna write a blog post to explain
JustinCEO:
Okay 👌
curi:
expressing a redpill perspective is different than expressing something i fully agree with
JustinCEO:
Ah
curi:
but i think worthwhile to consider
curi:
a little like /s is not your usual voice
JustinCEO:
Rite
curi:
@curi has not said what observations would count as falsification. Until he does so, he cannot claim that his theory is testable in a Popperian sense.
does he not know enough about BoI c12 to know that's covered there?
curi:
if so, why did he say BoI c12 is interesting and he'd be willing to answer, as if he knew what it said?
GISTE:
CRist makes a particular mistake repeatedly. He thinks that an interpretation of data using one theoretical framework can be used as evidence contradicting another theoretical framework. He did this a bunch in the discussion about the BOI model of the human mind, and in the discussion about PUA/AWALT. we tried to explain his error many times, but he did not get it, nor did he ask about it, nor did he criticize it.
curi:
think he'll learn about and fix the error from his MA + the peer review process?
GISTE:
well those things are not focussed on finding and fixing mistakes, so i'd guess no.
GISTE:
if he did learn about and fix that error, it would be despite his MA + peer review process, not because of it.
curi:
https://curi.us/2278-second-handedness-examples#15054
curi:
there was something else he said about other ppl telling him to join or msging him about his participation here but i didn't find it when searching
curi:
https://curi.us/2279-red-pill-comments#15055
curi:
OT C R dared claim familiarity with red pill and PUA while not knowing what a neg is, or AWALT, or a bunch of other standard terms
curi:
similar to how he didn't finish either of DD's books but initially presented himself as a knowledgeable fan
curi:
he has really low standards for knowing about something
curi:
shit test? mystery method? AFC? no? what have you heard of? no answer.
JustinCEO:
think he'll learn about and fix the error from his MA + the peer review process?
Peer review in fields like Philosophy is currently more about signaling a certain sort of conformity in language and method than it is about error correction
JustinCEO:
And also
JustinCEO:
There's political stuff like eg:
Metaphysics, traditionally a highly abstract and impractical area of inquiry, is the area of philosophy that has had perhaps the most high-profile political scuffles in the past few years. This is because there are significant political overtones to questions about the nature of race and ethnicity, or the nature of sex and gender. The Hypatia affair, which I wrote about for this magazine two years ago, crystallized many of the dynamics surrounding these issues. My contention is not that questions about race/ethnicity and sex/gender are improper for philosophical inquiry, but that philosophical inquiry is threatened by the political fervor that surrounds these questions. In the debates between gender-critical feminists and their detractors (who call them “Trans-Exclusionary Radical Feminists”), for instance, it is often taken as a given that the political demands of feminism should determine our views on the metaphysics of sex and gender; at issue is which version of feminism is given pride of place.
JustinCEO:
https://quillette.com/2019/07/26/the-role-of-politics-in-academic-philosophy/
curi:
sex, gender, race and ethnicity are not metaphysical issues
curi:
philosophers so confused
curi:
[1:58 PM] Critical Rationalist: I agree. I haven't responded to that yet, just like you have not responded to the auxiliary hypothesis question. Note again the difference between "not responding" and "refusing to respond".
curi:
[2:05 PM] Critical Rationalist: I await a response to my above messages.
[2:07 PM] curi: https://elliottemple.com/debate-policy
curi:
When it is pointed out that everyone (including him) sometimes fails to address things, he ignores it. For example, this is the fourth time I have prompted you to answer this question: "do you know what an auxiliary hypothesis is?"
curi:
i did answer right there
curi:
not the first time he confused 1) not liking my answer 2) me not answering