Aubrey de Grey Discussion, 15

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong”.
That’s basically what I do. I agree with all you go on to say about closeness of variants etc, but I see exploration of variants (and choice of how much to explore variants) as coming down to a sequence of dice-rolls (or, well, coin-flips, since we’re discussing binary choices).
I don't know what this means. I don't think you mean you judge which variants are true, individually, by coin flip.

Maybe the context is only variants you don't have a criticism of. But if several won their coin flips, but are incompatible, then what? So I'm not clear on what you're saying to do.

Also, are you saying that amount of sureness, or claims criticisms are strong or weak (you quote me explaining how what matters is which set of ideas a criticism does or doesn't refute), play no role in what you do? Only CR + randomness?
The coin flips are not to decide whether a given individual idea is true or false, they are to decide between pairs of ideas. So let’s say (for simplicity) that there are 2^N ideas, of which 90% are in one group of close variants and the other 10% are in a separate group of close variants. “Close”, here, simply means differing only in ways I don’t care about. Then I can do a knockout tournament to end up choosing a winning variant, and 90% of the time it will be in the first group. Since I don’t actually care about the features that distinguish the variants within either group, only the features that distinguish the groups, I’m done. In other words, the solidity of an idea is measured by how many close variants it has - let’s call it the “variant density” in its neighbourhood. In practice, there will typically be numerical quantities involved in the ideas, so there will be an infinite number of close variants in each group - but if I have a sense of the variant densities in the two regions then that’s no problem, because I don’t need to do the actual tournament.
OK, I get the rough idea, though I disagree with a lot of things here.
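To check my understanding, here's a minimal simulation sketch of the procedure as I read it. It's my own illustration in Python, not your code; the 922/102 split is just a stand-in for "roughly 90% and 10% of 2^10 ideas", and the function name is mine.

```python
import random

def coin_flip_knockout(ideas):
    """Single-elimination tournament where each pairing is decided by a fair
    coin flip. With 2**N entrants, every idea must win N flips, each with
    probability 1/2, so each idea wins with probability 1 / 2**N."""
    pool = list(ideas)
    while len(pool) > 1:
        random.shuffle(pool)
        # Pair adjacent ideas and keep a random one from each pair.
        pool = [random.choice(pair) for pair in zip(pool[0::2], pool[1::2])]
    return pool[0]

# 2**10 = 1024 ideas: 922 in group "A" (the ~90% group), 102 in group "B".
ideas = ["A"] * 922 + ["B"] * 102
trials = 2000
wins = sum(coin_flip_knockout(ideas) == "A" for _ in range(trials))
print(wins / trials)  # ~0.90: the winning group simply tracks the group sizes
```

Since each idea wins with equal probability, the winner comes from a group with probability proportional to that group's size – which I take to be your point about not needing to run the actual tournament.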

You are proposing a complex procedure, involving some tricky math. It looks to me like the kind of thing requiring, at minimum, tens of thousands of words to explain how it works. And a lot of exposure to public criticism to fix some problems and refine, even if the main points are correct.

Perhaps, with a fuller explanation, I could see why Aubreyism is correct about this and change my mind. I have some reasons not to think so, but I do try to keep an open mind about explanations I haven't read yet, and I'd be willing to look at a longer version. Does one exist?

Some sample issues where I'd want more detail include (no need to answer these now):

  • Is the score the total number of variants anywhere, ignoring density, regions and neighborhoods? If so, why are those other things mentioned? If not, how is the score calculated?
  • Why are ideas with more variants better, more likely to be true, or something like that? And what does Aubreyism say about that, and how does that concept work in detail?
  • The "regions" discussed are not regions of space. What are they, how are they defined, what are they made out of, how is distance defined in them, how do different regions connect together?
  • The coin flipping procedure wouldn't halt. So what good is it?
  • I can imagine skipping the coin flipping procedure because the probabilities will be equally distributed among the infinitely many ideas. But then the probabilities will all be infinitesimal. Dealing with those infinitesimals requires explanation.
  • I'm guessing the approach involves grouping together infinitesimals by region. This maybe relies on there being a finite number of regions of ideas involved, which is a premise requiring discussion. It's not obvious because we're looking at all ideas in some kind of idea-space, rather than only looking at the finite set of ideas people actually propose (as Elliotism and CR normally do).
  • When an idea has infinite variants, what infinity are we talking about? Is it in one-to-one correspondence with the integers, the reals, or what? Do all ideas with infinite variants have the same sort of infinity variants? Infinity is really tricky, and gets a lot worse when you're doing math or measurement, or trying to be precise in a way that depends on the detailed properties of infinity.
  • There are ways to get infinite variants other than by varying numerical quantities. One of these approaches uses conjunctions – modify an idea by adding "and X". Does it matter if there are non-numerical ways to get infinite variants? Do they make a difference? Perhaps they are important to understanding the number and density of variants in a region?
  • Are there any cases where there are only finitely many variants of an idea? Does that matter?
  • You can't actually have 90% or 10% of 2^N and get a whole number (2^N has no factor of 5, so it's never divisible by 10). This won't harm the main ideas, but I think it's important to fix detail errors in one's epistemology (which I think you agree with: it's why you specified 2^N ideas, instead of saying "an even number" or leaving it unspecified).
  • Do ideas actually have different numbers of variants? Both for total number, and density. How does one know? How does one figure out total variant count, and density, for a particular idea?
  • How is the distance between two ideas determined? Or whatever is used for judging density.
  • What counts as a variant? In common discussion, we can make do with a loose idea of this. If I start with an idea and then think about a way to change it, that's a variant. This is especially fine when nothing much depends on what is a variant of what. But for measuring solidity, using a method which depends on what is a variant of what, we'll need a more precise meaning. One reason is that some variant construction methods will eventually construct ALL ideas, so everything will be regarded as a variant of everything else. (Example method: take ideas in English, vary by adding, removing or modifying one letter.) Addressing issues like this requires discussion.
  • Where does criticism factor into things?
  • What happens with ideas which we don't know about? Do we just proceed as if none of those exist, or is anything done about them?
  • Does one check his work to make sure he calculated his solidity measurements right? If so, for how long?
  • Is this procedure truth-seeking? Why or why not? Does it create knowledge? If so, how? Is it somehow equivalent to evolution, or not?
  • Why do people have disagreements? Is it exclusively because some people don't know how to measure idea solidity like this, because of calculation errors, and because of different ideas about what they care about?
  • One problem with defining closeness in terms of what people care about is circularity, because this method is itself supposed to help people decide things like what to care about.
  • How does this fit with DD's arguments for ideas that are harder to vary? Your approach seems to favor ideas that are easier to vary, resulting in more variants.
  • I suspect there may be lots of variants of "a wizard did it". Is that a good idea? Am I counting its variants wrong? I admit I'm not really counting but just sorta wildly guessing because I don't think you or I know how to count variants.
That is only an offhand sampling of questions and issues. I could add more. And then create new lists questioning some of the answers as they are provided. Regarding what it takes to persuade me, this gives some indication of the level of detail and completeness required. (Actually a lot of precision is lost in communication.)

Does this assessment of the situation make sense to you? That you're proposing a complex answer to a major epistemology problem, and there are dozens of questions about it that I'd want answers to. Note: not necessarily freshly written answers from you personally, if there is anything written by you or others at any time.

Do you think you know answers to every issue I listed? And if so, what do you think is the best way for me to learn those full answers? (Note: If for some answers you know where to look them up as needed, instead of always saving them in memory, that's fine.)

Or perhaps you'll explain to me there's a way to live with a bunch of unanswered questions – and a reason to want to. Or maybe something else I haven't thought of.
To try to get at one of the important issues, when and why would you assign X a higher percent (aka strength, plausibility, justification, etc) than Y or than ruminating more? Why would the percents ever be unequal? I say either you have a criticism of an option (so don't do that option), or you don't (so don't raise or lower any percents from neutral). What specifically is it that you think lets you usefully and correctly raise and lower percents for ideas in your decision making process?

I think your answer is you judge positive arguments (and criticisms) in a non-binary way by how "solid" arguments are. These solidity judgments are made arbitrarily, and combined into an overall score arbitrarily.
I think my clarification above of the role of “variant density” as a measure of solidity answers this, but let me know if it doesn’t.
I agree with linking issues. Measuring solidity (aka support aka justification) is a key issue that other things depend on.

It's also a good example issue for the discussion below about how I might be persuaded. If I was persuaded of a working measure of solidity, I'd have a great deal to reconsider.
Sure - and that’s what I claim I do (and also what I claim you in fact do, even though you don’t think you do).
I do claim to do this [quoted below]. Do you think it's somehow incompatible with CR?
On reflection, and especially given your further points below, I’d prefer to stick with Aubreyism and Elliotism rather than justificationism and CR, because I’m new to this field and inadequately clear as to precisely how the latter terms are defined, and because I think the positions we’re debating between are our own rather than other people’s.
OK, switching terminology.

Do you think
doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
is incompatible with Elliotism? How?
OK - as above, let’s forget unmodified CR and also unmodified justificationism. I think we’ve established that my approach is not unmodified justificationism, but instead it is (something like) CR triaged by justificationism. I’m still getting the impression that your stated approach, whether or not it’s reeeeally close to CR, is unable to make decisions adequately rapidly for real life, and thus is not what you actually do in real life.
I don't know what to do with that impression.

Do you believe you have a reason Elliotism could not be timely in theory no matter what? Or only a reason Elliotism is not timely today because it's not developed enough and the current approach is flawed, but one day there might be a breakthrough insight so that it can be timely?

I think the timeliness thing is a second key issue. If I was persuaded Elliotism isn't or can't be timely, I'd have a lot to reconsider. But I'm pretty unclear on the specifics of your counter-arguments regarding timeliness.
What's the problem for CR with consensus-low fields?
Speed of decision-making. The faster CR leads to consensus in a given field, the less it needs to be triaged.
OK, I have a rough idea of what you mean. I don't think this is important to our main disagreements.
This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.
Again, I can’t discern any difference in practice between that and what I already do.
Can you discern a difference between it and what most people do or say they do?
Oh, sure - I think most people are a good deal more content than me to hold pairs of views that they recognise to be mutually incompatible.
What I was talking about above was an innocent-until-proven-guilty approach to ideas, which is found in both CR and Elliotism (without requiring infallible proof). You indicated agreement, but now bring up the issue of holding contradictory ideas, which I consider a different issue. I am unclear on whether you misunderstood what I was saying, consider these part of the same issue, or what.


Regarding holding contradictory ideas, do you have a clear limit? If I were to adopt Aubreyism, how would I decide which mutually incompatible views to keep or change? If the answer involves degrees of contentness, how do I calculate them?


Part of the Elliotism answer to this issue involves context. Whether ideas relevantly contradict each other is context dependent. Out of context contradictions aren't important. The important thing is to deal with relevant contradictions in one's current context. Put another way: deal with contradictions relevant to choices one makes.

Consider the contradicting ideas of quantum mechanics and general relativity. In a typical dinner-choosing context, neither of those ideas offers a meal suggestion. They both say essentially "no comment" in this context, which doesn't contradict. They aren't taking different sides in the dinner arbitration. I can get pizza for dinner without coming into conflict with either of those ideas.

On the other hand if there was a contradiction in context – basically meaning they are on disagreeing sides in an arbitration – then I'd address that with a win/win solution. Without such a solution, I could only proceed in a win/lose way and the loser would be part of me. And the loser would be chosen arbitrarily or irrationally (because if it weren't, then what was done would be a rational solution and we're back to win/win).

Understanding of context is one of the things which allows Elliotism to be timely. (A refutation of my understanding of context is another thing which would lead to me reconsidering a ton.)

If I were to change my mind and live by Aubreyism, I would require a detailed understanding of how to handle context under Aubreyism (for meals, contradictions, and everything else).
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
OK then: what theoretical sequence of events would conclude with you changing your mind about how you think decisions should be made, in favour of my view?
Starting at the end, I'd have to understand Aubreyism to my satisfaction, think it was right, think Elliotism and (unmodified) CR were both wrong. The exact details are hard to specify in advance because in the sequence of events I would change my mind about what criteria to use when deciding what ideas to favor. So I would not think Aubreyism has no known criticism, rather I'd understand and use Aubreyism's own criteria. And similarly I wouldn't be rejecting Elliotism or CR for having one outstanding criticism (taking into account context), but rather because of some reasons I learned from Aubreyism.

For that matter, I might not have to understand Aubreyism to my satisfaction. Maybe it'd teach me how to adopt ideas without understanding them to my current criteria of satisfaction. It could offer different criteria of satisfaction, but it could also offer a different approach.

So, disclaimer: the below discussion of persuasion contains Elliotist ideas. But if Elliotism is false, then I guess persuasion works some other way, which I don't know and can't speak to.


Starting more at the beginning, my ideas about Elliotism are broadly integrated into my thinking (meaning connected to other ideas). An example area where they are particularly tightly integrated is parenting and education. For ease of reference, my views are called TCS (Taking Children Seriously).

So I'd have to find out things like, if I rejected Elliotism, what views am I to adopt about parenting and education? Is Aubreyism somehow fully compatible with TCS (I don't think so)? Even if it was, I'd have to find out things like how to argue TCS in new ways using Aubreyism instead of Elliotism; there'd be changes.

To give you a sense of the integration, TCS has many essays which explicitly discuss Popper, (unmodified) CR, and Elliotism. A large part of the way TCS was created was applying CR ideas to parenting and education. And also, some TCS concepts played a significant role in creating Elliotism. In addition to TCS learning things from CR, CR can learn from TCS, resulting in a lot of the unmodified-CR/Elliotism differences.

If I'm to change my views on Elliotism and also on TCS, I'll also have to find out why the new views are moral, not immoral (or learn a new approach to morality). I'll have to find out why thousands of written TCS arguments are mistaken, and how far the mistakes go. (Small change in perspective and way of arguing basically saves all the old conclusions? Old conclusions have to be thrown out and recreated with Aubreyism? Somewhere in between?)

And when I try to change my thinking about TCS, I'll run into the fact that it's integrated with many other ideas, so will they have to change too? And they connect to yet more ideas.

So there's this tangled web of ideas. And this is just one area of integration, Elliotism and TCS. Elliotism is also integrated with my politics. And with my opinions of philosophy books. And with my approach to social life. All this could require reevaluation in light of changes to my epistemology.

How can something like this be approached?

It takes a lot of work (which I have willingness to do). One of the general facts of persuasion is, the person being persuaded has to do the large majority of the work. I'd have to persuade myself, with hints and help from you. That is the only way. You cannot make me change my mind, or do most of the work for me.

Though, again, this is an Elliotist view which might not be applicable if you refuted Elliotism. Maybe you can tell me a different way.

(Tangentially, you may note here some incompatibilities with this perspective and how school teachers approach education.)

Another consequence of this integration is that if you persuaded me I was wrong about politics, that could pose a problem for Elliotism. I'd have to figure out where the mistakes were and their full consequences, and that process might involve rejecting Elliotism. If I decide a political idea is false, and there's a chain of ideas from it to an Elliotism idea (which there is), then I'll have to find a mistake in that chain or else rethink part of Elliotism (which is itself linked with the rest of Elliotism and more, posing similar problems). So it could be possible to change my mind about Elliotism without ever discussing it.

Integration of ideas is stabilizing in some ways. If you say I'm wrong about X, I may know a dozen implications of X which I want to figure out how to deal with. This can make it more challenging to provide a satisfactory new view. But integration is also destabilizing because if I do change my mind about X, the implications spread more easily. Persuasion about one point can cause a chain reaction. Especially if I don't block off that chain reaction with a bunch of rationalizations, irrational evasions, refusals to think about implications of ideas, willful disconnections of ideas into more isolated pieces to prevent chain reaction, and so on.

The consequences of a refutation aren't predictable in advance. Maybe it turns out that idea was more isolated than you thought – or less. Maybe you can find mistaken connections near it, maybe not. Until you work out new non-refuted positions, you don't know if it will be a tiny fix or require a whole new philosophy.

Getting back to your question: The sequence of events to change my mind would be large, and largely outside of your control. The majority of it would be outside your view, even if I tried hard to share the process. My integrity would be required.

Ayn Rand says you can't "force a mind". Persuasion has to be voluntary. It's why the person to be persuaded must actively want to learn, and take initiative in the process, not be passive.

However, you could play a critically important role. If you told me one idea (e.g. how to measure solidity), and I worked out the rest from there, you would have had a major role.

More normally, I'd work out a bit from that idea, then ask you a question or argue a point, get your answer, work out a bit more, and so on. And some of your answers would refer me to books and webpages, rather than be written fresh.

It hasn't gone like this so far because I'm experiencing the epistemology discussion as you saying things I've already considered. And frequently already had several debates about. Not exactly identical ideas, but similar in the relevant ways so my previous analysis still applies. Rather than needing to rethink something, I've been using ideas I already know and making minor adjustments to fit the details of our conversation.

I'm also using the discussion to work on ongoing projects like trying to understand Elliotism more clearly, invent better ways to explain it, and better understand where and why people misunderstand it or disagree. I also have more tangential projects like trying to write better.

It's also being used by others who want to understand Elliotism better. People write comments and use things you or I said as a jumping off point for discussions. If you wanted, you could read those discussions and comments.

Those people are also relevant to the issue of a sequence of events in which I'd be persuaded of Aubreyism. If you managed to inspire any doubts about Elliotism, or raise any problems I didn't think I had an answer to, I would raise those issues with others and see what they said. So, via me (both writing and forwarding things), you'd have to end up persuading those people of Aubreyism too. And on the other hand, they could play a big role in persuading me of Aubreyism if they understood one of your correct points before me, and then translated it to my current way of thinking well. (The Aubreyism issue could also create a split and failure to agree, but I wouldn't expect it and I see no signs of that so far.)


I also want to differentiate between full persuasion and superficial persuasion. Sometimes people are persuaded about X pretty easily. But they haven't changed their mind about anything else, so now X contradicts a bunch of their other ideas. A common result is the persuasion doesn't last. Whereas if one is persuaded about X and then makes changes to other ideas until X is compatible with all their thinking, and there's various connections, that'd be a more full kind of persuasion that does a better job of lasting.

One reason superficial persuasion seems to work and last, sometimes, is because of selective attention. People will use idea X if and only if dealing with one particular topic, and not think about other stuff. Then for other topics, they only think about other stuff and not X. So the contradictions between their other ideas and X don't get noticed, because they only think about one or the other at a time.

This further speaks to the complexity and difficulty of rational persuasion.


Getting back to a sequence of events, I don't know a specific one in detail or I'd be persuaded now. What I know is more like the categories of events that would matter and what sorts of things have to happen. (The sequencing, to a substantial extent, is flexible. Like I could learn an epistemology idea and adjust my politics, or vice versa, the sequence can go either way. At least that's the Elliotism view.)

Trying to be more specific, here's an example. You say something I don't have an answer to. It could be about measuring solidity, but it could be about pretty much any of my views I've been explaining because I take them all seriously and they're all integrated. I investigate. I find problems with several of my related ideas. I also consider some related ideas which I don't see any problem with, so I ask you about the issue. My first question is whether you think those ideas are false and I'm missing it, or you think I'm mistaken that they are related.

Trying to fix some of these problems, I run into more problems. Some of them I don't see, but you tell them to me. I start arguing some Aubreyism ideas to others who agree with Elliotism, and learn Aubreyism well enough to win those arguments (although I have to relay back to you a few of their anti-Aubreyism arguments which I'm unable to answer myself. But the more I do that, the more I pick up on how things work myself, eventually reaching full autonomy regarding Aubreyism). Others then help me with the task of reconciling various things with Aubreyism, such as the material in Popper's books. We do things like decide some parts can be rescued and figure out how. Other parts have to be rejected, and we work through the implications of that and figure out where and why those implications stop. To do this well involves things like rereading books while keeping in mind some Aubreyism arguments and watching out for contradictions, and thus seeing the book material in a new way compared to prior readings with a different perspective. And it involves going back through thousands of things I and others wrote and using new Aubreyism knowledge to find errors, retract things, write new things about new positions, etc. The more Aubreyism has general principles, the better this will work – so I can find patterns in what has to change instead of dealing with individual cases.

OK, there's a story. Want to tell me a story where you change your mind?
I don’t think anyone does CR, and I also don’t think anyone does the slightly modified CR that you think you do. I think people do a triaged version of CR, and some people do the triaging better than others.
I acknowledge that's your position.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Tim Cook vs Freedom

Tim Cook is gay. He decided to tell the world and use it as an opportunity to campaign against freedom – while invoking the names of Dr. Martin Luther King and Robert F. Kennedy. Cook writes:
The world has changed so much since I was a kid. America is moving toward marriage equality, and the public figures who have bravely come out have helped change perceptions and made our culture more tolerant. Still, there are laws on the books in a majority of states that allow employers to fire people based solely on their sexual orientation. There are many places where landlords can evict tenants for being gay, or where we can be barred from visiting sick partners and sharing in their legacies. Countless people, particularly kids, face fear and abuse every day because of their sexual orientation.
In context, it's clear he's saying it's bad that people can be fired or evicted for being gay.

Cook opposes free trade. He opposes freedom of association. If I don't want to hire someone, with my money, isn't that an issue of freedom of association? Isn't it an issue of freedom not to spend my money on things I don't want? (And isn't it the same issue if I have hiring authority as a proxy for someone else?)

An employer should be able to fire people for no reason at all. Cook wants to make a list of government-approved and government-disapproved reasons for firing, so that we can live in a totalitarian country.

Cook doesn't want a free market where landlords use whatever criteria they deem best for deciding who to rent to. He wants the government to step in and control privately owned buildings. I advocate people interacting only for freely chosen mutual benefit, when they voluntarily want to. Cook advocates that I not be allowed to think for myself about homosexuality issues (is homosexuality so simple there's no room for diversity of opinion?). Cook wants his intolerance of some opinions to be enforced by the government, using guns if necessary.

Cook doesn't want free choice and free thought. He doesn't want freedom. He wants the government to decide how people should act, and make them. He's an authoritarian who wants to force his vision of utopia on everyone else, even though we don't want it.

And Cook is so blind to issues like freedom that it doesn't occur to him to comment on them. He doesn't bother trying to tell us how he isn't destroying freedom. He's so immersed in authoritarian thinking that he doesn't see any legitimate concerns about freedom. He hasn't noticed the issue of freedom and figured out a way to get what he wants while preserving freedom. Freedom isn't on his mind. Diversity of thought isn't on his mind. He's busy demanding "tolerance" of what's already tolerated (tolerance doesn't require liking something or trading with someone), but doesn't consider his own intolerance.

And all this is being said in a tone of moral righteousness. By attacking the American value of freedom, he thinks he's a moral crusader, standing up for justice. Cook values his privacy, but he thought trying to destroy the future of civilization was just so important he had to sacrifice his privacy for the cause.

And Cook is an altruist.
At the same time, I believe deeply in the words of Dr. Martin Luther King, who said: “Life’s most persistent and urgent question is, ‘What are you doing for others?’ ” I often challenge myself with that question, and I’ve come to realize that my desire for personal privacy has been holding me back from doing something more important. That’s what has led me to today.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 14

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
If all you do is partial CR and have two non-refuted options, then they have equal status and should be given equal probability.

When you talk about amounts of sureness, you are introducing something that is neither CR nor dice rolling.
I think you answer this with this:
A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong”.
That’s basically what I do. I agree with all you go on to say about closeness of variants etc, but I see exploration of variants (and choice of how much to explore variants) as coming down to a sequence of dice-rolls (or, well, coin-flips, since we’re discussing binary choices).
I don't know what this means. I don't think you mean you judge which variants are true, individually, by coin flip.

Maybe the context is only variants you don't have a criticism of. But if several won their coin flips, but are incompatible, then what? So I'm not clear on what you're saying to do.


Also, are you saying that amount of sureness, or claims criticisms are strong or weak (you quote me explaining how what matters is which set of ideas a criticism does or doesn't refute), play no role in what you do? Only CR + randomness?
Also, if you felt 95% sure that X was a better approach than Y – perhaps a lot better – would you really want to roll dice and risk having to do Y, against your better judgment? That doesn't make sense to me.
It makes sense if we remember that the choice I’m actually talking about is not between X and Y, but between X, Y and continuing to ruminate. If I’ve decided to stop ruminating because X feels sufficiently far ahead of Y in the wiseness stakes, then I could just have a policy of always going with X, but I could equally step back and acknowledge that curtailing the rumination constitutes dice-rolling by proxy and just go ahead and do the actual dice-roll so as to feel more honest about my process. I think that makes fine sense.
I think you're talking about rolling dice meaning taking risks in life - which I have no objection to. Whereas I was talking about rolling dice specifically as a decision making procedure for making choices. And that was in the context of making an argument which may not be worth looking up at this point, but there you have a clarification if you want.

To try to get at one of the important issues, when and why would you assign X a higher percent (aka strength, plausibility, justification, etc) than Y or than ruminating more? Why would the percents ever be unequal? I say either you have a criticism of an option (so don't do that option), or you don't (so don't raise or lower any percents from neutral). What specifically is it that you think lets you usefully and correctly raise and lower percents for ideas in your decision making process?

I think your answer is you judge positive arguments (and criticisms) in a non-binary way by how "solid" arguments are. These solidity judgments are made arbitrarily, and combined into an overall score arbitrarily. Your defense of arbitrariness, rather than clearly explained methods, is that better isn't possible. If that's right, can you indicate specifically what aspects of CR you consider sometimes impossible, in what kinds of situations, and why it's impossible?

(Most of the time you used the word "subjective" rather than "arbitrary". If you think there's some big difference, please explain. What I see is a clear departure from objectivity, rationality and CR.)
The ways to deal with fallibilism
Do you mean something different here than “fallibility”?
I meant fallibilism, but now that you point it out I agree "fallibility" is a clearer word choice.
are doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
Sure - and that’s what I claim I do (and also what I claim you in fact do, even though you don’t think you do).
I do claim to do this. Do you think it's somehow incompatible with CR?

I do have some different ideas than you about what it entails. E.g. I think that it never entails acting on a refuted idea (refuted in the actor's current understanding). And never entails acting on one idea over another merely because of an arbitrary feeling that that idea is better.
You've acknowledged your approach having some flaws, but think it's good enough anyway. That seems contrary to the spirit of mistake correction, which works best when every mistake found is taken very seriously.
Oh no, not at all - my engagement in this discussion is precisely to test my belief that my approach is good enough.
Yes, but you're arguing for the acceptance of those flaws as good enough.
I realize you also think something like one can't do better (so they aren't really flaws since better isn't achievable). That's a dangerous kind of claim though, and also important enough that if it was true and well understood, then there ought to be books and papers explaining it to everyone's satisfaction and addressing all the counter-arguments. (But those books and papers do not exist.)
Not really, because hardly anyone thinks what you think. If CR were a widely-held position, there would indeed be such books and papers, but as far as I understand it CR is held only by you, Deutsch and Popper (I restrict myself, of course, to people who have written anything on the topic for public consumption), and Popper’s adherence to it is not widely recognised. Am I wrong about that?
I think wrong. Popper is widely recognized as advocating CR, a term he coined. And there are other Critical Rationalists, for example:

http://www.amazon.com/Critical-Rationalism-Metaphysics-Science-Philosophy/dp/0792329600

This two-volume CR book has essays by maybe 40 people.

CR is fairly well known among scientists. Examples of people friendly to and familiar with it include Feynman, Wheeler, Einstein, and Medawar.

And there's other people like Alan Forrester ( http://conjecturesandrefutations.com ).

I in no way think that ideas should get hearings according to how many famous or academic people think they deserve hearings. But CR would pass that test.


I wonder if you're being thrown off because what I'm discussing includes some refinements to CR? If the replies to CR addressed it as Popper originally wrote it, that would be understandable.

But there are no quality criticisms of unmodified-CR (except by its advocates who wish to refine it). There's a total lack of any reasonable literature addressing Popper's epistemology by his opponents, and meanwhile people carry on with ideas contradicting what Popper explained.

I wonder also if you're overestimating the differences between unmodified CR and what I've been explaining. They're tiny if you use the differences between CR and Justificationism as a baseline. Like how the difference between Mac and Windows is tiny compared to the difference between a computer and a lightbulb.


Even if Popper didn't exist, any known flaws to be accepted with Justificationism ought to be carefully documented by people in the field. They should write clear explanations about why they think better is impossible in those cases, and why not to do research trying for better since it's bound to fail in ways they already understand, and the precise limits for what we're stuck with, and how to mitigate the problems. I don't think anything good along these lines exists either.
Since we agreed some time ago that mathematical proofs are a field in which pure CR has a particularly good chance of being useful,
I consider CR equally useful in all fields. Substitute "CR" for "reason" in these sentences – which is my perspective – and you may see why.
Sorry, misunderstanding - what I meant was “Since mathematical proofs are a field in which I have less of a problem with a pure CR approach than with most fields, because expert consensus nearly always turns out to be rather rapidly achieved”
I don't think lack of expert consensus in a field is problematic for CR or somehow reduces the CR purity available to an individual.

There are lots of reasons expert consensus isn't reached. Because they don't use CR. Because they are more interested in promotions and reputation than truth. Because they're irrational. Because they are judging the situation with different evidence and ideas, and it's not worth the transaction costs to share everything so they can agree, since there's no pressing need for them to agree.

What's the problem for CR with consensus-low fields?
This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.
Again, I can’t discern any difference in practice between that and what I already do.
Can you discern a difference between it and what most people do or say they do?
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
And here is a different answer: You cannot mitigate all the infinite risks that are logically possible. You can't do anything about the "anything is possible" risk, or the general risks inherent in fallibility. What you can do is think of specific categories of risks, and methods to mitigate those categories. Then because you're dealing with a known risk category, and known mitigation methods – not the infinite unknown – you can have some understanding of how big the downsides involved are and the effectiveness of time spent on mitigation. Then, considering other things you could work on, you can make resource allocation decisions.
Same answer - I maintain that that’s what I already do.
Do you maintain that what I've described is somehow not pure CR? The context I was addressing included e.g.:
It seems to me that the existence of cases where people can be wrong for a long time constitutes a very powerful refutation of the practicality of pure CR, since it means one cannot refute the argument that there is a refutation one hasn’t yet thought of.
You were presenting a criticism of CR, and when I talked about how to handle the issues, you've now said stuff along the lines of that's what you already do, indicating some agreement. Are you then withdrawing that criticism of CR? If so, do you think it's just you specifically who does CR (for this particular issue), or most people?

Or more precisely, the issue isn't really whether people do CR - everyone does. It's whether they *say* they do CR, whether they understand what they are doing, and whether they do it badly due to epistemological confusion.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 13

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
So here’s an interesting example of what I mean. I woke up this morning and realised that there is indeed a rather strong refutation of my binary chop argument below, namely “don’t bother, just use X+Y - one doesn’t need to take exactly the minimum amount of time needed, only enough".
I object to the concept of a "strong refutation". I don't think there are degrees or quantities of refutation.

A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong".

People have some ability to guess whether it will be easy or hard to proceed by finding a workable close variant of the criticized idea. And they may not understand in detail what's going on, so it can seem like a hunch, and be referred to in terms of strong or weak criticism.

But:

  • Refuting more or fewer variant ideas is different than degrees of strength. Sometimes the differences matter.
  • Hunches only have value when actually there's some reasonable underlying process being done that someone doesn't know how to put into words. Like this. And it's better to know what's going on so one can know when it will fail, and try to improve one's approach.
  • People can only kinda estimate the prospects for CLOSE variants handling the criticism and continuing on similar to before. This gives NO indication of what may happen with less close variants.
  • This stuff is pretty misleading because either you're aware of a variant idea that isn't refuted, or you aren't. And you can't actually know in advance how well variants you aren't aware of will work.
But consider: yesterday I came up with the binary chop argument and it intuitively felt solid enough that I thought I’d spent enough time looking for refutations of it by the time I sent the email. I was wrong - and for sure I’ve been wrong in the same way many times in the past. But was I wrong to be sure enough of my argument to send the email? I’d say no. That’s because, as I understand your definition of a refutation, I can’t actually fix on a finite Y, because however large I choose Y to be I can always refute it by a pretty meaningful argument, namely by reference to past times when I (or indeed whole communities) have been wrong for a long time.
There are never any guarantees of being correct. Feeling sure is worthless, and no amount of that can make you less fallible.

We should actually basically expect all our ideas to be incorrect and one day be superseded. We're only at the BEGINNING of infinity.

The ways to deal with fallibilism are doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.

You've acknowledged your approach having some flaws, but think it's good enough anyway. That seems contrary to the spirit of mistake correction, which works best when every mistake found is taken very seriously.

I realize you also think something like one can't do better (so they aren't really flaws since better isn't achievable). That's a dangerous kind of claim though, and also important enough that if it was true and well understood, then there ought to be books and papers explaining it to everyone's satisfaction and addressing all the counter-arguments. (But those books and papers do not exist.)
Since we agreed some time ago that mathematical proofs are a field in which pure CR has a particularly good chance of being useful,
I consider CR equally useful in all fields. Substitute "CR" for "reason" in these sentences – which is my perspective – and you may see why.
I direct you to the example of the “Lion and Man” problem, which was incorrectly “solved” for 25 years. It seems to me that the existence of cases where people can be wrong for a long time constitutes a very powerful refutation of the practicality of pure CR, since it means one cannot refute the argument that there is a refutation one hasn’t yet thought of. Thus, we can only answer “yes stop now” in finite time to "Have I done enough effort? Should I do more effort or stop now?” if we’ve already made a quantitative (non-boolean), and indeed subjective and arbitrary, decision as to how much risk we’re willing to take that there is such a refutation.
The possibility of being mistaken is not an argument to consider thinking about an issue indefinitely and never act. And the risk of being mistaken, and consequences, are basically always unknown.

What one needs to do is come up with a method of allocating time, with an explanation of how it works and WHY it's good, and some understanding of what it should accomplish. Then one can watch out for problems, keep an ear open for better approaches known to others, and in either case consider changes to one's method.

This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.


And here is a different answer: You cannot mitigate all the infinite risks that are logically possible. You can't do anything about the "anything is possible" risk, or the general risks inherent in fallibility. What you can do is think of specific categories of risks, and methods to mitigate those categories. Then because you're dealing with a known risk category, and known mitigation methods – not the infinite unknown – you can have some understanding of how big the downsides involved are and the effectiveness of time spent on mitigation. Then, considering other things you could work on, you can make resource allocation decisions.

It's only partially understood risks that can be mitigated against, and it's that partial understanding that allows judging what mitigation is worthwhile.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 12

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Just mentioning a quantity in some way doesn't contradict CR.
Fully agreed - but:
The question is, "Have I done enough effort? Should I do more effort or stop now?" That is a boolean question.
Not really, because the answer is a continuum. If X effort is not enough and X+Y effort is enough, then maybe X+Y/2 effort is enough and maybe it isn’t. And, oh dear, one can continue that binary chop forever, which takes infinite time because each step takes finite time. I claim there’s no way to short-circuit that that uses only yes/no questions.
"Is infinite precision useful here? yes/no."

"Is one decimal enough precision for solving the problem we're trying to solve? yes/no"

You don't have to use only yes/no questions, but they play a key role. After these two above, you might use some method to figure out the answer to adequate precision. Then there'd be some more yes/no questions:

"Was that method we used a correct method to use here?"

"Is this answer we got actually the answer that method should arrive at, or did we follow the method wrong?"

"Have we now gotten one answer we're happy with and have no criticism of? Can we, therefore, proceed with it?"
Plus, in the real world, at some point in that process one will in fact decide either that both the insufficiency of X and the sufficiency of X+Y are rebutted, or than neither of them is (which of the two depending on one’s standard for what constitutes a rebuttal) - which indeed terminates the binary chop, but not usefully for a pure-CR approach.
Rebuttals are useful because they have information about the topic of interest. What to do next would depend on what the rebuttals are. Typically they provide new leads. When they don't, that is itself notable and can even be thought of as a lead, e.g. one might learn, "This is much more mysterious than I previously thought, I'll have to look for a new way to approach it and use more precision" – which is a kind of lead.


The standard of a rebuttal, locally, is: does this flaw pointed out by criticism prevent the idea from solving the problem we're trying to solve? yes/no. If no, it's not a criticism IN CONTEXT of the problem being addressed.

But the full standard is much more complicated, because you may say, "Yes that idea will solve that problem. However it will cause these other problems, so don't do it." In other words, the context being considered may be expanded.
Why not roll dice to decide between those remaining ideas? That would be some CR, and timely. Do you think that's an equally good approach? Perhaps better because it eliminates bias.
Actually I’m fine with that (i.e., I recognise that the triage is functionally equivalent to that). In practice I only roll the dice when I think I’m sure enough that I know what the best answer is - so, roughly, I guess I would want to be rolling three dice and going one way if all of them come up six and the other way otherwise - but that’s still dice-rolling.
There's a big perspective gap here.

I had in mind rolling dice with equal probability for each result.
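(For reference, three fair dice all coming up six has probability (1/6)^3 = 1/216, under half a percent – so that procedure is effectively a 99.5/0.5 split, not an equal-odds roll.)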

If all you do is partial CR and have two non-refuted options, then they have equal status and should be given equal probability.

When you talk about amounts of sureness, you are introducing something that is neither CR nor dice rolling.

Also, if you felt 95% sure that X was a better approach than Y – perhaps a lot better – would you really want to roll dice and risk having to do Y, against your better judgment? That doesn't make sense to me.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 11

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I wouldn't draw a distinction there. If you don't know more criticisms, and resolved all the conflicts of ideas you know about, you're done, you resolved things. Whether you could potentially create more criticisms doesn't change that.
OK, of everything you’ve said so far that is the one that I find least able to accept. Thinking of things takes time - you aren’t disputing that. So, if at a given instant I have resolved all the conflicts I know about, but some of what I now think is really really new and I know I haven’t tried to refute it, how on earth can I be “done"?
As you say, you already know that you should make some effort to think critically about new ideas. So, you already have an idea that conflicts with the idea to declare yourself done immediately.

If you know a reason not to do something, that's an idea that conflicts with it.
Ah, but hang on: what do I actually know, there? You’re trying to make it sound boolean by referring to “some” effort, but actually the question is how much effort.
The question is, "Have I done enough effort? Should I do more effort or stop now?" That is a boolean question.

Just mentioning a quantity in some way doesn't contradict CR.
What I know is my past experience of how long it typically took to come up with a refutation of an idea that (before I tried refuting it) felt about as solid as the one I'm currently considering feels. That’s correlation, plain and simple. I’m solely going on my hunch of how solid what I already know feels, or conversely how likely it is that if I put in a certain amount of time trying to refute what I think I will succeed. So it’s quantitative. I can never claim I’m “done” until I’ve put in what I feel is enough effort that putting in a lot more would still not bring forth a rebuttal. And that estimated amount of effort again comes from extrapolation from my past experience of how fast I come up with rebuttals.

To me, the above is so obvious a rebuttal
I think your rebuttal relies on CR being incompatible with dealing with any sort of quantity – a misconception I wasn't able to predict. Otherwise why would a statement of your approach be a rebuttal to CR?

It's specifically quantities of justification – of goodness of ideas – that CR is incompatible with.
of what you said that it makes no sense that you would not have come up with it yourself in the time it took you to write the email. That’s what I meant about your answers getting increasingly weak.
We have different worldviews, and this makes it hard to predict what you'll say. It's especially hard to predict replies I consider false. I could try to preemptively answer more things, but some won't be what you would have said, and longer emails have disadvantages.
I mean that it’s becoming easier and easier to come up with refutations of what you’re saying, and it seems to me that it’s becoming harder and harder for you to refute what I say - not that you’re finding it harder, but that the refutations you're giving are increasingly fragile. To my ear, they’re rapidly approaching the “that’s dumb, I disagree” level. And I don’t know what situation there would be that would make them sound like that to you too. You said earlier on that "It's hard to keep up meaningful criticism for long” and I said "That’s absolutely not my experience” - this is what I meant.
Justificationists always sneak in some ad hoc, poorly specified, unstated-and-hidden-from-criticism version of CR into their thinking, which is why they are able to think at all.
This is what you were doing when you clarified Aubreyism step 1 to include creative and critical thinking.
Yes, absolutely. I don’t think I know what pure justificationism is, but for sure I agree (as I have since the start of our exchange) that CR is a better way to proceed than just by hunches and correlations.

Proceed by which correlations? Why those instead of other ones? How do you get from "X correlates with Y [in Z context]" to "I will decide A over B or C [in context D]"? Are any explanations involved? I don't know the specifics of your approach to correlations.

We've discussed correlations some, but our perspectives on the matter are so different that it wasn't easy to create full mutual understanding. It'll take some more discussion. More on this below.
Thus, indeed Aubreyism is a hybrid between the two - it uses CR as a way to make decisions, but with a triage mechanism so that those decisions can be made in acceptable time. I’m fine with the idea that the triage part contributes no value in and of itself, because what it does do, instead, is allow the value from the CR part to manifest itself in real-world actions in a timely fashion.
Situation: you have 10 ideas, you eliminate 5-8 of them with some CR tools, and you run out of time to ponder further.

You propose deciding between the remaining ideas with hunches. You say this is good because it's timely. You say the resulting value comes from CR + timeliness.

Why not roll dice to decide between those remaining ideas? That would be some CR, and timely. Do you think that's an equally good approach? Perhaps better because it eliminates bias.

I suspect you'll be unwilling to switch to dice. Meaning you believe the hunches have value other than timeliness. Contrary to your comments above.

What do you think?
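To make the dice version concrete, here's a minimal sketch in Python. The idea names, and which of them are refuted, are invented for illustration: eliminate whatever your known criticisms refute, then let a random roll pick among the survivors.

import random

# Hypothetical example: ten candidate ideas; the criticisms known so far
# refute six of them. The idea names and the refutations are invented.
ideas = ["idea_%d" % i for i in range(1, 11)]
refuted = {"idea_2", "idea_3", "idea_5", "idea_7", "idea_8", "idea_9"}

survivors = [idea for idea in ideas if idea not in refuted]

# Time is up: decide among the non-refuted ideas by dice roll, not by hunch.
choice = random.choice(survivors)
print("Surviving ideas:", survivors)
print("Acting on:", choice)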
More generally, going back to my assertion that you do in fact make decisions in just the same way I do, I claim that this subjective, quantitative, non-value-adding evaluation of how different two conflicting positions feel in their solidity, and thus of how much effort one should put into further rebutting each of them, is an absolutely unavoidable aspect of applying CR in a timely fashion.
In my view, I explained how CR can finish in time. At this point, I don't know clearly and specifically why you think that method doesn't work, and I'm not convinced you understand the method well enough to evaluate it. In my last email, I pointed out that some of your comments were too vague to be answerable. You didn't elaborate on those points.

Bigger picture, let's try to get some perspective.

Epistemology is COMPLEX. Communication between different perspectives is VERY HARD.

When people have very different ideas, misunderstandings happen constantly, and patient back-and-forth is needed to correct them. Things that are obvious in one perspective will need a lot of clarification to communicate to another perspective. An especially open minded and tolerant approach is needed.

We are doing well at this. We should be pleased. We've gotten somewhere. Most people attempting similar things fail spectacularly.

You understand where I'm coming from better now, and vice versa. We know outlines of each other's positions. And we have a much more specific idea of what we do and don't agree about. We've discovered timely CR is a key issue.

People get used to talking to similar people and expect conversations to proceed rapidly. Less has to be communicated, because only differences require much communication. People often omit some details, but the other guy with many shared premises fills in the blanks similarly. People also commonly gloss over disagreements to be polite.

So people often experience communication as easy. Then when it isn't, they can get frustrated and give up in the face of misunderstandings and disagreements.

And justificationism is super popular, so epistemology conversations often seem to go smoothly. Similar to how most regular people would smoothly agree with each other that death from aging is good. Then when confronted with SENS, problems start coming up in the discussion and they don't have the skills to deal with those problems.

Talking to people who think differently is valuable. Everyone has some blind spots and other mistakes, and similar people will share some of the same weaknesses. A different person, even if worse than you, could lack some of your weaknesses. Trading ideas between people with different perspectives is valuable. It's a little like comparative advantage from economics.

But the more different someone is, the more difficult communication is. Attitudes to discussion have to be adjusted.

We should be pleased to have a significant amount of successful communication already. But the initial differences were large. There's still a lot of room to understand each other better.

I think you haven't discussed some details so far (in some cases literally not replying to points) – and yet you're reaching tentative conclusions about them without full communication. That's fine for initial communication to get your viewpoint across; it works as a kind of feeling-out stage. But you shouldn't expect too much from that method.

If you want to reach agreement, or understand CR more, we'll have to get into some of those details. We now have a better framework to do that.

So if you're interested, I think we may be able to focus the discussion much more, now that we have more of an outline established. To start with:

Do you think you have an argument that makes timely CR LITERALLY IMPOSSIBLE, in general, for some category of situations? Just a yes or no is fine.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Leftist Lying: The Issue Is Never the Issue

From Take No Prisoners: The Battle Plan for Defeating the Left by David Horowitz, on leftist lying:
Dishonesty is endemic to the progressive cause because its radical goals cannot be admitted; the dishonesty is a cultural inheritance, instinctive and indispensable. It is no coincidence that Barack Obama, a born-and-bred leftist, is the most compulsive and brazen liar ever to occupy the White House. His true agenda is radical and unpalatable, and therefore he needs to lie about it. What other presidential candidate could have successfully explained away his close association for twenty years with an anti-American racist, Jeremiah Wright, and an anti-American terrorist, William Ayers? Who but the ignorant and the progressively blind could have believed him?

The radical sixties were something of an aberration in that its activists were uncharacteristically candid about their goals. A generation of “new leftists” was rebelling against its Stalinist parents, who had pretended to be liberals to hide their real beliefs and save their political skins. New leftists despised what they thought was the cowardice behind this camouflage. As a “New Left,” they were determined to say what they thought and blurt out their desires: “We want a revolution, and we want it now.” They were actually rather decent to warn others about what they intended. But when they revealed their goals, they set off alarms and therefore didn’t get very far.

Those who remained committed to leftist goals after the sixties learned from their experience. They learned to lie. The strategy of the lie became the new progressive gospel. It is what Alinsky’s Rules for Radicals is really about. Alinsky understood the mistake sixties radicals had made. His message at the time, and to the generations who came after, is easily summarized: Don’t telegraph your goals; infiltrate the Democratic Party and other liberal institutions and subvert them; treat moral principles as dispensable fictions; and never forget that your political agenda is not the achievement of this or that reform but political power to achieve the socialist goal. The issue is never the issue. The issue is always power—how to wring power out of the democratic process, how to turn the political process into an instrument of control, how to use that control to fundamentally transform the United States of America, which is exactly what Barack Obama, on the eve of his election, warned he would do.
I recommend the book.

Though beware, some of the scholarship is flawed. Justin brought this passage to my attention:
In the fifth year of Obama’s rule, forty-seven million Americans were on food stamps and a hundred million were receiving government handouts, while ninety-three million Americans of working age had given up on finding a job and left the work force.
The ninety-three million statistic is given without a source. I investigated a bit and I don't think it's accurate.

But I still think it's a great book.

Elliot Temple | Permalink | Message (1)

Bad Correlation Study

Here is a typical example of a bad correlation study. I've pointed out a couple of its flaws, which are typical of such studies.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3039704/
Chocolate Consumption is Inversely Associated with Prevalent Coronary Heart Disease: The National Heart, Lung, and Blood Institute Family Heart Study
These data suggest that consumption of chocolate is inversely related with prevalent CHD in a general population.
Of 4,679 individuals contacted, responses were obtained from 3,150 (67%)
So they ended up studying a non-random sample: the two-thirds of people who responded were not a random subset of those contacted.

The non-random sample they studied may have some attribute, X, much more often than the general population does. It may be chocolate+X interactions, rather than chocolate alone, that offer the health benefits. This is one way the study's conclusions could be false.

They used a "food frequency questionnaire". So you get possibilities like: half the people reporting they didn't eat chocolate were lying (but very few of the people admitting to eating chocolate were lying). And liars overeat fat much more than non-liars, and this fat eating differential (not chocolate eating) is the cause of the study results. This is another way the study conclusions could be false.

They say they "used generalized estimating equations", but do not provide the details. There could be an error there such that their conclusions are false.

They talk about controls:
adjusting for age, sex, family CHD risk group, energy intake, education, non-chocolate candy intake, linolenic acid intake, smoking, alcohol intake, exercise, and fruit and vegetables
As you can see, this is nothing like a complete list of every possible relevant factor. There are many things they did not control for. Some of those may have been important, so this could ruin their results.

And they don't provide details of how they controlled for these things. For example, take "education". Did they lump together high school graduates (with no college) as all having the same amount of education, without factoring in which high school they went to and how good it was? Whatever they did, there will be a level of imprecision in how they controlled for education, which may be problematic (and we don't know, because they don't tell us what they did).
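Here's a minimal sketch of that problem too (again, all numbers invented): in the model, education drives both chocolate eating and CHD risk, chocolate itself does nothing, and the only adjustment available is the kind of coarse college/no-college lumping described above.

import random

random.seed(1)

# Invented model: more education means more chocolate eating and less CHD
# risk; chocolate itself has no effect.
P_EAT = {8: 0.2, 12: 0.6, 14: 0.7, 20: 0.9}      # P(eats chocolate | years of education)
P_CHD = {8: 0.30, 12: 0.10, 14: 0.08, 20: 0.04}  # P(CHD | years of education)

def simulate_person():
    years = random.choice([8, 12, 14, 20])
    eats = random.random() < P_EAT[years]
    chd = random.random() < P_CHD[years]
    college = years >= 14        # the coarse category the adjustment sees
    return eats, chd, college

people = [simulate_person() for _ in range(200000)]

def chd_rate(eats, college):
    group = [c for e, c, col in people if e == eats and col == college]
    return sum(group) / len(group)

for college in (False, True):
    print("college=%s  eaters: %.3f  non-eaters: %.3f"
          % (college, chd_rate(True, college), chd_rate(False, college)))
# Within each coarse education category, chocolate eaters still show less CHD,
# even though chocolate does nothing - the finer education differences remain
# uncontrolled.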


This is just a small sample of the problems with studies like these.


People often reply something like, "Nothing's perfect, but aren't the studies pretty good indications anyway?" The answer is, if it's pretty good anyway, they ought to understand these weaknesses, write them down, and then write down why their results are pretty good indications anyway. Then that reasoning would be exposed to criticism. One shouldn't assume the many weaknesses of the research can be glossed over without actually writing them down, thoroughly, and writing down why it's OK, in full, and then seeing if there are criticisms of that analysis.

Elliot Temple | Permalink | Messages (0)