Andy B Harassment Continues

Andy B has been harassing my FI community using many false identities. He left after I caught and exposed him, but he returned in Aug 2020. He’s written over 100 new curi.us messages under the names Periergo and Anonymous, and his Periergo Less Wrong account has been banned by Less Wrong for targeted harassment against me.

Unfortunately, he succeeded at his goal of destroying my discussions at Less Wrong.

Andy’s actions – including threats, doxxing, spamming, infiltrating the FI Discord with multiple sock puppets for months, and posting hundreds of harassing curi.us messages – violate multiple laws. He’s attacked several other FI members, not just me. His real name is unknown.

If anyone is actually willing to discuss this matter, I will provide additional evidence as appropriate. I have extensive documentation. I already posted evidence, and none of the facts are disputed.

Andy’s Friends

Andy is a David Deutsch (DD) fan who is friends with the “CritRat” DD fan community, including the “Four Strands” subgroup. They have turned a blind eye to Andy’s actions. They’ve refused to ask him to stop or to say that they think harassment is bad. The CritRat community is toxic and has also been an ongoing source of (milder) trouble from people besides Andy.

Andy’s friends include many of DD’s associates and CritRat community leaders. They know what he’s done but apparently don’t care. They’re providing him with encouragement and legitimacy in a social group, and some of them have egged him on. The public communications with Andy that I link below are all from months after Andy’s harassment was exposed.

  • Lulie Tanett has friendly tweets with Andy (relatedly, she tweets saying we need to use force and threats, which she considers a useful “technology”). She’s DD’s current closest associate and long-time IRL friend, whom he often promotes on Twitter and with whom he does joint projects such as videos. She’s promoted on DD’s website. She has a history of knowingly associating with people like online harassers, doxxers and spam botters.
  • Sarah Fitz-Claridge follows Andy on Twitter. She co-founded Taking Children Seriously with DD and is his long time IRL friend. She has a hateful attitude towards ET.
  • Sarah’s husband has friendly communications with Andy on Twitter. He’s had discussions with DD for many years. He’s said hateful things about ET.
  • Brett Hall tweets with Andy (examples 2 and 3). He’s promoted on DD’s website and by DD’s tweets, and he’s said hateful things about ET.
  • Samuel Kuypers tweets with Andy. He’s promoted on DD’s website and recently co-authored a physics paper with DD.
  • Bruce Nielson tweets with Andy (more). He’s a Four Strands leader/moderator.
  • Aaron Stupple tweets with Andy. He’s a Four Strands leader/moderator.
  • Dennis Hackethal talks with Andy publicly and was co-moderator of a DD related subreddit with Andy. He’s a Four Strands leader/moderator who has libeled and plagiarized ET. DD has promoted him on Twitter.

All of these people, as well as DD, have so far refused to communicate about this problem. They apparently have no interest in a truce or deescalation. They’re making the problem worse.

They’ve stated no grievances against FI, no terms they want, no willingness to negotiate, and no approaches to problem solving that they’d try. They’ve given no explanation of how they view the Andy problem, and they haven’t said anything to discourage the harassment coming from their community. They haven’t made no-contact requests either; they just ghost me and others without explanation. (Except Dennis asked me not to email him again about Andy, which I haven’t.) I’m willing to communicate using proxies, involve a neutral mediator, or take other reasonable steps.

The situation is asymmetric. The FI community is peaceful. Harassment doesn’t come from FI towards CritRats or anyone else. If any FI member did harass someone, I’d ask them to stop or ban them, rather than encouraging them. (Or I’d discuss my doubts about the accusation, if I had any. What I wouldn’t do is ignore the matter with no comment, and ghost the victim, while continuing a friendly relationship with the person accused of extensive harassment, illegal actions and aggressive force.)

Warning

Andy hasn’t harassed FI since his Less Wrong account was banned recently. Maybe he’s decided to leave me alone because he got caught again? I hope so. Or maybe he’ll resume any day.

Despite Andy’s repeated aggression against FI, as well as the misdeeds of other CritRats, I would still prefer to deescalate the situation.

But this is a chronic problem which is doing major harm, and Andy has a pattern of returning to harass again. I’ve been extraordinarily patient and forgiving, but this can’t go on forever. Andy started harassing us two years ago. If any CritRats are willing to speak to me about deescalating or improving this situation, please contact me (comment below, email [email protected] or use Discord). So far the communications of myself and others just get ignored by CritRats. They’ve repeatedly ghosted the victims instead of the harassers.

So I’m issuing a warning: If Andy comes back to harass me again, I will hold his supporters accountable. If you’re encouraging Andy while not even giving lip service to peace, and you’re refusing to communicate about any conflict resolution, then I will blame you and take defensive actions like writing about how you’re violating my rights and sharing evidence. I’ll particularly criticize the community leaders, especially the top leader, DD. If (like me) you don’t want this outcome, clean up your community and stop harassing FI.


Elliot Temple | Permalink | Messages (57)

Less Wrong Banned Me

habryka wrote about why LW banned me. This is habryka’s full text plus my comments:

Today we have banned two users, curi and Periergo from LessWrong for two years each. The reasoning for both is bit entangled but are overall almost completely separate, so let me go individually:

The ban isn’t for two years. It’s from Sept 16 2020 through Dec 31 2022.

They didn’t bother to notify me. I found out in the following way:

First, I saw I was logged out. Then I tried to log back in and it said my password was wrong. Then I tried to reset my password. When I submitted a new password, it gave an error message saying I was banned and until what date. Then I messaged them on intercom and 6 hours later they gave me a link to the public announcement about my ban.

That’s a poor user experience.

Periergo is an account that is pretty easily traceable to a person that Curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong about signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam that he didn't sign up for, and lots of sockpupetting on forums that curi frequents), and that seem better classified as harassment, and overall it seemed to me that this isn't the right place for Periergo.

Periergo is a sock puppet of Andy B. Andy harassed FI long term with many false identities, but left for months when I caught him, connected the identities, and blogged about it. But he came back in August 2020 and has written over 100 comments since returning, and he made a fresh account on Less Wrong for the purpose of harassing me and disrupting my discussions there. He essentially got away with it. He stirred up trouble and now I’m banned. What does he care that his fresh sock puppet, with a name he’ll likely never use again anywhere, is banned? And he’ll be unbanned at the same time as me in case he wants to further torment me using the same account.

Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -675 karma.

I started at around -775 karma when I returned to Less Wrong recently and went up. I originally debated Popper, induction and cognitive biases at LW around 9 years ago and got lots of downvotes. I returned around 3 years ago when an LW moderator invited me back because he liked my Paths Forward article. That didn’t work out and I left again. I returned recently for my own reasons, rather than because someone incorrectly suggested I was wanted, and it was going better. I knew some things to expect, and some things that wouldn’t work, and I'd just read LW's favorite literature, RAZ.

BTW, I don’t know how my karma is being calculated. My previous LW discussions were at the 1.0 version of the site where votes on posts counted for 10 karma, and votes on comments counted for 1 karma. When I went back the second time, a moderator boosted my karma enough to be positive so that I could write posts instead of just comments. LW 2.0 allows you to write posts while having negative karma and votes on posts and comments are worth the same amount, but your votes count for multiple karma if you have high karma and/or use the strong vote feature. I don’t know how old stuff got recalculated when they did the version 2.0 website.

Overall I have around negative 1 karma per comment, so that’s … not all that bad? Or apparently the lowest ever. If downvotes on the old posts still count 10x then hundreds of my negative karma is from just a few posts.

In general, I think outliers should be viewed as notable and potentially valuable, especially outliers that you can already see might actually be good (as habryka says about me below). Positive outliers are extremely valuable.

The biggest problem with his participation is that he has a history of dragging people into discussions that drag on for an incredibly long time, without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack. It's first sentence is "This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.", and in-particular the framing of "quit/evaded/lied" sure sets the framing for the rest of the post as a kind of "wall of shame".

I consider it strange to ban me for stuff I did in the distant past but was not banned for at the time.

I find it especially strange to ban me for 2 years over stuff that’s already 3 or 9 years old (the evaders guest post by Alan is a year old, and btw "evade" is standard Objectivist philosophy terminology). I already left the site for longer than the ban period. Why is a 5 year break the right amount instead of 3? habryka says below that he thinks I was doing better (from his point of view and regarding what the LW site wants) this time.

They could have asked me about that particular post before banning me, but didn’t. They also could have noted that it’s an old post that only came up because Andy linked it twice on LW with the goal of alienating people from me. They’re letting him get what he wanted even though they know he was posting in bad faith and breaking their written rules.

I, by contrast, am not accused of breaking any specific written rule that LW has, but I’ve been banned anyway with no warning.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

I didn’t threaten anyone. I’m guessing it was careless wording. I think habryka should retract or clarify it. Above, habryka used “attack[]” as a synonym for criticize. I don’t like that but it’s pretty standard language. But I don’t think using “threat[en]” as a synonym for criticize is reasonable.

“threaten” has meanings like “state one's intention to take hostile action against someone in retribution for something done or not done” and “express one's intention to harm or kill“ (New Oxford Dictionary). This is the thing in the post that I most strongly object to.

I do really want to make clear that this is not a personal judgement of curi. While I do find the "List of Fallible Ideas Evaders" post pretty tasteless, and don't like discussing things with him particularly much, he seems well-intentioned, and it's quite plausible that he could me an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high of a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to be around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

So I came back after 3 years, posted in a way they liked significantly better … I’m building cool things and plausibly amazing while also making major progress at compatibility with LW … but they’re banning me anyway, even though my old posts didn’t get me banned.

More broadly, LessWrong has seen a pretty significant growth of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT3. I continue to think that "Well-Kept Gardens Die By Pacifism", and that it is essential for us to be very careful with handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decision long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad-enough, that on-net I think this is the right choice.

I don’t see why they couldn’t wait for me to do something wrong to ban me, or give me any warning or guidance about what they wanted me to do differently. I doubt this would have happened this way if Andy hadn’t done targeted harassment.

At least they wrote about their reasons. I appreciate that they’re more transparent than most forums.

In another message, habryka clarified his comment about others not updating their views of me based on this ban:

The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results.

I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.

I’m unclear on what aspect of LW culture I’m a mismatch for. Or put another way: I may interface better with other cultures which have or lack what particular characteristics compared to LW?


Also, LW didn't explain how they decided on ban lengths. 2.3-year bans don't correspond to solving the problems raised. Andy or I could easily wait and then do the stuff LW doesn't want. They aren't asking us to do anything to improve or to provide any evidence that we've reformed in some way. Nor are they asking us to figure out how we can address their concerns and prevent bad outcomes. They're just asking us to wait and, I guess, counting on us not to hold grudges. Problems don't automatically go away due to time passing.

Overall, I think LW’s decision and reasoning are pretty bad but not super unreasonable compared to the general state of our culture. I wouldn’t expect better at most forums and I’ve seen much worse. Also, I’m not confident that the reasoning given fully and accurately represents the actual reasons. And I'm not convinced that they will ban other people using the same reasoning (that someone broke no particular rules but might be a net-negative for the site), especially considering that "the moderators of LW are the opposite of trigger-happy. Not counting spam, there is on average less than one account per year banned." (source from 2016; maybe they're more trigger-happy in 2020, I don't know).


Elliot Temple | Permalink | Messages (14)


curi's Microblogging

This is a thread for me to post stuff that's smaller than a blog post. You can reply and discuss here but don't start your own topics here. You can do that in Open Discussion or at any relevant post.


Elliot Temple | Permalink | Messages (212)

Eliezer Yudkowsky Is a Fraud

Eliezer Yudkowsky tweeted:

EY:

What on Earth is up with the people replying "billionaires don't have real money, just stocks they can't easily sell" to the anti-billionaire stuff? It's an insanely straw reply and there are much much better replies.

DI:

What would be a much better reply to give to someone who thinks for example that Elon Musk is hoarding $100bn in his bank account?

EY:

A better reply should address the core issue whether there is net social good from saying billionaires can't have or keep wealth: eg demotivating next Steves from creating Apple, no Gates vaccine funding, Musk not doing Tesla after selling Paypal.

Eliezer Yudkowsky (EY) frequently brings up names (e.g. Feynman or Jaynes) of smart people involved with science, rationality or sci-fi. He does this throughout RAZ. He communicates that he's read them, he's well-read, he's learned from them, he has intelligent commentary related to stuff they wrote, etc. He presents himself as someone who can report to you, his reader, about what those books and people are like. (He mostly brings up people he likes, but he also sometimes presents himself as knowledgeable about people he's unfriendly to, like Karl Popper and Ayn Rand, whom he knows little about and misrepresents.)

EY is a liar who can't be trusted. In his tweets, he reveals that he brings up names while knowing basically nothing about them.

Steve Jobs and Steve Wozniak were not motivated by getting super rich. Their personalities are pretty well known. I guess EY never read any of the biographies and hasn't had conversations about them with knowledgeable people. Or maybe he doesn't connect what he reads to what he says. (I provide some brief, example evidence at the end of this post in which Jobs tells Ellison "You don’t need any more money." EY is really blatantly wrong.)

EY brings up Jobs and Wozniak ("Steves") to make his assertions sound concrete, empirical and connected to reality. Actually he's doing egregious armchair philosophizing and using counter examples as examples.

Someone who does this can't be trusted whenever they bring up other names either. It shows a policy of dishonesty: either carelessness and incompetence (while dishonestly presenting himself as a careful, effective thinker) or outright lying about his knowledge.

There are other problems with the tweets, too. For example, EY is calling people insane instead of arguing his case. And EY is straw manning the argument about billionaires having stocks not cash – while complaining about others straw manning. Billionaires have most of their wealth in capital goods, not consumption goods (that's the short, better version of the argument he mangled), and that's a more important issue than the incentives that EY brings up. EY also routinely presents himself as well-versed in economics but seems unable to connect concepts like accumulation of capital increasing the productivity of labor, or eating the seed corn, to this topic.

Some people think billionaires consume huge amounts of wealth – e.g. billions of dollars per year – in the form of luxuries or other consumption goods. Responding to a range of anti-billionaire viewpoints, including that one, by saying basically “they need all that money so they're incentivized to build companies” is horribly wrong. They don't consume anywhere near that much wealth per year. EY comes off as justifying them doing something they don't do that would actually merit concern if they somehow did it.

If Jeff Bezos were building a million statues of himself, that'd be spending billions of dollars on luxuries/consumption instead of production. That'd actually somewhat harm our society's capital accumulation and would merit some concern and consideration. But – crucial fact – the real world looks nothing like that. EY sounds like he's conceding that that's actually happening instead of correcting people about reality, and he's also claiming it's obviously fine because rich people love their statues, yachts and sushi so much that it's what inspires them to make companies. (It's debatable, and there are upsides, but it's not obviously fine.)


Steve Jobs is the authorized biography by Walter Isaacson. It says (context: Steve didn't want to do a hostile takeover of Apple) (my italics):

“You know, Larry [Ellison], I think I’ve found a way for me to get back into Apple and get control of it without you having to buy it,” Jobs said as they walked along the shore. Ellison recalled, “He explained his strategy, which was getting Apple to buy NeXT, then he would go on the board and be one step away from being CEO.” Ellison thought that Jobs was missing a key point. “But Steve, there’s one thing I don’t understand,” he said. “If we don’t buy the company, how can we make any money?” It was a reminder of how different their desires were. Jobs put his hand on Ellison’s left shoulder, pulled him so close that their noses almost touched, and said, “Larry, this is why it’s really important that I’m your friend. You don’t need any more money.”

Ellison recalled that his own answer was almost a whine: “Well, I may not need the money, but why should some fund manager at Fidelity get the money? Why should someone else get it? Why shouldn’t it be us?”

“I think if I went back to Apple, and I didn’t own any of Apple, and you didn’t own any of Apple, I’d have the moral high ground,” Jobs replied.

“Steve, that’s really expensive real estate, this moral high ground,” said Ellison. “Look, Steve, you’re my best friend, and Apple is your company. I’ll do whatever you want.”

(Note that Ellison, too, despite having a more money-desiring attitude, didn't actually prioritize money. He might be the richest man in the world today if he'd invested heavily in Steve Jobs' Apple, but he put friendship first.)


Elliot Temple | Permalink | Messages (3)

Learning Updates Thread

If you want to learn philosophy or rational thinking, you need to do some stuff on a regular basis. Read books, write notes, write outlines, write articles, keep a journal, study stuff, have discussions, etc.

I suggest you write a short, weekly update. How did your week go? What did you do? Did you make progress on your goals? (Figure out some goals and write them down. If in doubt, talk about it or read and watch a wide variety of things.) Do you want to make any changes going forward? Sharing this update is optional. You could do it like journaling.

Write a longer, monthly update. Reflect more on how learning is going, what's working or not working, whether you should adjust any goals or stop or start any projects, what got done or not, etc.

Sharing monthly updates is recommended. If you don't share monthly updates or explain why not, I will not regard you as actually trying to learn philosophy.

I think it'd be best if a bunch of people shared monthly updates at the same time. So let's use the first of the month. Post them below. Put the month and your name in the title field when posting a monthly update, and leave the title blank for anything else, so the monthly updates stand out more.

Posting on your own website and sharing a link here is fine too. With the link, include at least one paragraph of text with some summary and some info to interest people in clicking the link.


Elliot Temple | Permalink | Messages (38)

Mathematical Inconsistency in Solomonoff Induction?

I posted this on Less Wrong 10 days ago. At the end, I summarize the answer they gave.


What counts as a hypothesis for Solomonoff induction? The general impression I’ve gotten in various places is “a hypothesis can be anything (that you could write down)”. But I don’t think that’s quite it. E.g. evidence can be written down but is treated separately. I think a hypothesis is more like a computer program that outputs predictions about what evidence will or will not be observed.

If X and Y are hypotheses, then is “X and Y” a hypothesis? “not X”? “X or Y”? If not, why not, and where can I read a clear explanation of the rules and exclusions for Solomonoff hypotheses?

If using logic operators with hypotheses does yield other hypotheses, then I’m curious about a potential problem. When hypotheses are related, we can consider what their probabilities should be in more than one way. The results should always be consistent.

For example, suppose you have no evidence yet. And suppose X and Y are independent. Then you can calculate P(X or Y) in terms of P(X) and P(Y). You can also calculate the probability of all three based on their length (that’s the Solomonoff prior). These should always match but I don’t think they do.

The non-normalized probability of X is 1/2^len(X).

So you get:

P(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

and we also know:

P(X or Y) = 1/2^len(X or Y)

since the left hand sides are the same, that means the right hand sides should be equal, by simple substitution:

1/2^len(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

Which has to hold for any X and Y.

We can select X and Y to be the same length and to minimize compression gains when they’re both present, so len(X or Y) should be approximately 2len(X). I’m assuming a basis, or choice of X and Y, such that “or” is very cheap relative to X and Y, hence I approximated it to zero. Then we have:

1/2^2len(X) = 1/2^len(X) + 1/2^len(X) - 1/2^2len(X)

which simplifies to:

1/2^2len(X) = 1/2^len(X)

Which is false (since len(X) isn’t 0). And using a different approximation of len(X or Y) like 1.5len(X), 2.5len(X) or even len(X) wouldn’t make the math work.

So Solomonoff induction is inconsistent. So I assume there’s something I don’t know. What? (My best guess so far, mentioned above, is limits on what is a hypothesis.)

Also here’s a quick intuitive explanation to help explain what’s going on with the math: P(X) is both shorter and less probable than P(X or Y). Think about what you’re doing when you craft a hypothesis. You can add bits (length) to a hypothesis to exclude stuff. In that case, more bits (more length) means lower prior probability, and that makes sense, because the hypothesis is compatible with fewer things from the set of all logically possible things. But you can also add bits (length) to a hypothesis to add alternatives. It could be this or that or a third thing. That makes hypotheses longer but more likely rather than less likely. Also, speaking more generally, the Solomonoff prior probabilities are assigned according to length with no regard for consistency amongst themselves. So it’s unsurprising that they’re inconsistent unless the hypotheses are limited in such a way that they have no significant relationships with each other that would have to be consistent, which sounds hard to achieve, and I haven’t seen any rules specified for achieving that. (Note that there are other ways to find relationships between hypotheses besides the one I used above, e.g. looking for subsets.)
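The arithmetic above is easy to check numerically. Here’s a small sketch (the lengths are arbitrary picks; “or” is treated as free, per the approximation in the argument):

```python
# Check the claimed inconsistency numerically. prior(L) = 1/2^L is the
# non-normalized length-based prior. For independent X and Y, compute
# P(X or Y) two ways: via inclusion-exclusion from P(X) and P(Y), and
# directly from the length of "X or Y", approximated as len(X)+len(Y).
def prior(length):
    return 1 / 2 ** length

len_x = len_y = 10  # arbitrary equal lengths for X and Y

# Inclusion-exclusion (the joint term 1/2^(len(X)+len(Y)) equals
# prior(len_x) * prior(len_y) for independent X and Y):
p_via_parts = prior(len_x) + prior(len_y) - prior(len_x + len_y)

# Direct length-based prior for "X or Y":
p_via_length = prior(len_x + len_y)

print(p_via_parts)   # ~0.00195
print(p_via_length)  # ~0.00000095
```

The two values differ by a factor of roughly 2000, so the two ways of computing the same probability don’t match.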


Less Wrong's answer, in my understanding, is that in Solomonoff Induction a "hypothesis" must make positive predictions like "X will happen". Probabilistic positive predictions – assigning probabilities to different specific outcomes – can also work. Saying X or Y will happen is not a valid hypothesis, nor is saying X won't happen.

This is a very standard trick by so-called scholars. They take a regular English word (here "hypothesis") and define it as a technical term with a drastically different meaning. This isn't clearly explained anywhere and lots of people are misled. It's also done with e.g. "heritability".

Solomonoff Induction is just sequence prediction. Take a data sequence as input, then predict the next thing in the sequence via some algorithm. (And do it with all the algorithms and see which do better and are shorter.) It's aspiring to be the oracle in The Fabric of Reality but worse.
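To illustrate the sequence-prediction framing, here’s a toy sketch. It is not real Solomonoff induction: the candidate “programs” and their bit lengths are made up for illustration. But it shows the basic move of preferring the shortest rule consistent with the data and using it to predict the next element:

```python
# Each candidate "program" is (name, length_in_bits, predictor).
# The bit lengths are invented for this toy example.
CANDIDATES = [
    ("constant",   3, lambda s: s[-1]),
    ("arithmetic", 5, lambda s: 2 * s[-1] - s[-2]),
    ("geometric",  7, lambda s: s[-1] ** 2 // s[-2] if s[-2] else 0),
]

def consistent(predict, seq):
    # A rule fits the data if it reproduces each element from its prefix.
    return all(predict(seq[:i]) == seq[i] for i in range(2, len(seq)))

def predict_next(seq):
    fits = [(bits, name, f) for name, bits, f in CANDIDATES
            if consistent(f, seq)]
    # Prefer the shortest consistent rule (the Solomonoff-flavored choice).
    bits, name, f = min(fits, key=lambda t: t[0])
    return f(seq)

print(predict_next([3, 5, 7, 9]))  # arithmetic rule -> 11
```

Real Solomonoff induction runs over all computable programs and weights them by length rather than just picking the shortest, which is what makes it uncomputable rather than a three-line loop.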


Elliot Temple | Permalink | Messages (5)

Use RSS to Subscribe to Blogs

RSS feeds let you get updates when a website has new stuff. You can subscribe to sites you're interested in and then get notifications about new material instead of checking for it yourself. This works especially well with sites that don't update often.

You should subscribe to my feeds:

You need an RSS reader app. I like Vienna, a free, open source Mac app. There are many others; e.g., BazQux is a web app that my friend likes.


Many apps will let you import RSS feeds instead of adding them all yourself. Download some of my subscriptions to get started. After importing, you can delete whatever you don't want.

You should also sign up for my free Critical Fallibilism emails.

Most blogs and similar sites have RSS feeds. Usually you can use their home page and the RSS app will find the correct feed URL for you. You can also subscribe to a YouTube channel or podcast.
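For the curious, here’s a minimal sketch of what an RSS reader does under the hood, using Python’s standard library. The feed XML is a hardcoded sample; a real reader would fetch the feed URL over HTTP and poll it periodically:

```python
# Parse an RSS 2.0 feed and pull out each item's title and link.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First Post</title><link>https://example.com/1</link></item>
  <item><title>Second Post</title><link>https://example.com/2</link></item>
</channel></rss>"""

def feed_items(xml_text):
    root = ET.fromstring(xml_text)
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them all.
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(f"{title}: {link}")
```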

Don't rely on getting all your info from social media sites. Don't just read whatever's in your Facebook, Twitter or Reddit feed. Choose and subscribe to some sites yourself.


Elliot Temple | Permalink | Messages (2)