Capitalism

I have posted some comments about capitalism here and in some of the other recent posts at that blog.

Elliot Temple | Permalink | Messages (0)

Information Flow

Suppose we are programming a game and want a hero to have a multishot spell that shoots 20 arrows at once. We decide a monster can only be hit by one of the arrows per casting of the spell.

You might expect we could just have the spell itself keep a list of monsters it has hit so far, and check the list when an arrow hits a monster to see if that monster has already been hit. That is indeed possible. However, a bunch of code that controls the spell through a central data structure, which every part of the spell reports back to, is not a very good model. It leads to confusing, hard-to-change code.

A different approach is for the multishot spell to create 20 arrows, give them initial velocity, and then disappear; the arrows are now all separate. There are two ways to avoid redundant hits. Option one: each arrow keeps a list of the other arrows and sends them a message when it hits something, so the others know not to damage that target. (Alternatively, arrows could have access to everything and search through everything for the other arrows from the spell. As long as they are all uniquely marked as coming from that casting, they could be found that way.) Option two: the monster keeps track of what it has been hit by. Then when an arrow hits a monster, it can check that monster's list of multishot castings that have already hit it to see whether it should do damage.
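The second option (the monster tracks which castings have already hit it) can be sketched in a few lines of Python. All the names here are hypothetical, just to make the idea concrete; a real engine would look different:

```python
import itertools

# Each casting of the multishot spell gets a unique id; arrows carry it.
_cast_ids = itertools.count()

class Arrow:
    def __init__(self, cast_id, damage):
        self.cast_id = cast_id   # which casting fired this arrow
        self.damage = damage

class Monster:
    def __init__(self, hp):
        self.hp = hp
        self.hit_by = set()      # cast ids that have already damaged us

    def on_arrow_hit(self, arrow):
        # Only the first arrow from a given casting does damage.
        if arrow.cast_id in self.hit_by:
            return
        self.hit_by.add(arrow.cast_id)
        self.hp -= arrow.damage

def cast_multishot(n_arrows=20, damage=5):
    # The spell just creates the arrows and is done; no central
    # controller survives to be reported back to.
    cast_id = next(_cast_ids)
    return [Arrow(cast_id, damage) for _ in range(n_arrows)]
```

Notice there is no spell object left after `cast_multishot` returns: the knowledge needed to prevent double hits lives with the monster, the agent that actually needs it.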

This is interesting due to parallels with capitalism and with physics.

The capitalism parallel is that autonomous, smart agents are a better model than central control. Programmers have known this for decades (at least Lisp programmers!), and have written lots of papers about it. Here is a paper on dividing problems up into smaller discrete tasks, with detailed examples, which shows how this makes programs easier to modify. It explicitly criticises trying to write single, large functions that keep track of everything, and criticises programming languages that encourage or require that. Similarly, capitalists know (and have since before Lisp existed) that central authorities don't work as well as distributed decision making. Another point in the Lisp paper I linked is that lazy evaluation is very valuable. That means only calculating things when they are about to be used, to avoid doing unnecessary calculations whose results might never be needed. Similarly, when people make their own choices, they can frequently do so at the last moment, and they can avoid deciding things that become irrelevant. When central planners plan, they have to do it far in advance in order to have time to tell everyone what to do, so they end up calculating lots of things that, it turns out, don't matter.
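The lazy-evaluation point can be sketched in Python with generators, a rough analogue of the lazy streams in Lisp (this example is mine, not from the paper):

```python
import itertools

def naturals():
    # An infinite "plan": values exist only as a recipe until demanded.
    n = 0
    while True:
        yield n
        n += 1

def lazy_squares():
    for n in naturals():
        yield n * n   # each square is computed only when someone asks

# Only five squares are ever computed, each at the moment of use --
# nothing is worked out "way in advance".
first_five = list(itertools.islice(lazy_squares(), 5))
```

An eager central planner would have to decide up front how many squares everyone might need and compute them all; the lazy version lets the consumer decide at the last moment.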

The parallel with physics is that a huge amount of mysticism can be detected and refuted by a detailed analysis of information flow. For example, suppose someone claimed a certain arrow would only hurt you if you hadn't already been hit by another arrow from the same group. We would know there is no overall central control mechanism (located at the bow?) that arrows report back to (what do they report back with? Light? We'd notice, and light has limited speed, so it wouldn't work if the arrows went too fast). We also know arrows don't pass messages to each other about what targets they've hit (not only are arrows unable to identify what people they've hit, they don't have anything to send or receive messages with). And we know the model of the monster keeping a list of which arrows hit it wouldn't work either (that involved the arrow, on hitting something, checking the list, but an arrow cannot do that: how would it read the list, or compute whether it was on the list? And arrows aren't marked by what volley they were fired in). So we can call anyone who believes in a real multishot spell of this sort a mystic (after we ask for his explanation, and it turns out he has no explanation of how his idea is possible within the laws of physics).

This arguing technique applies to a lot more than magic spells. Suppose someone said he spoke to God. We might well ask how the information got from God to him. If God communicated with light or sound it could be recorded with a video camera, and he'd need a convincing reason to think it wasn't just a natural process (there is a lot of light bouncing around. how do you know this light bounced off God?). People who believe in telepathy never explain how thoughts travel between brains, nor what they sense thoughts with (eyes? neurons?). Do psychics who do phone readings claim that reading thoughts is possible from many miles away? If it is, why can't they read the thoughts of people they aren't on the phone with? Do thoughts travel through telephone wires? Of course, phone psychics in fact just don't bother to address the issue at all. It'd be very amusing if they were asked questions like this more often. Some would be foolish enough to attempt to answer some of the questions. It's pretty hard to refuse to say the range of one's psychic powers. But it'd also be pretty embarrassing to claim telephones are a psychic amplifier. And if it's someone's voice that matters, why won't a recording do? And after the psychic says it recordings don't work, trick one and do an entire phone reading by playing pre-recorded sound bites over the phone and then ask why the psychic didn't notice he wasn't talking to a person (shouldn't his powers have not worked?).

It works on a lot of bad philosophy too. Imagine someone says that meaning is assigned to objects by humans, and can exist no other way. That is nonsense. The first thing to do is ask whether it's possible to think about something before assigning it a meaning. If that's possible, begin asking what difference it makes to human thinking whether a meaning has been assigned or not. What specifically, if anything, is impossible before a meaning is assigned? Why does assigning a meaning change that?

The more interesting case is when the person says that thinking about something your brain hasn't yet assigned a meaning to is impossible. Next he will say that meaning is assigned immediately when a person first encounters something, and not before, and not after. One issue is that this needs to be done instantly, so that stray thoughts don't try to think about the object before the meaning is assigned. Thinking instantly isn't possible, because electrical impulses require time to move around. Further, when the assign-meaning function is called, it needs to know: A) what the object is, and B) what meaning to assign. All the information necessary to assign the meaning must be there before the meaning is assigned, and thus before the object has been thought about at all. That means all meanings are assigned without thinking about what they should be! But it gets worse. If Jack will respond to seeing a rock for the first time by assigning it "hard" (that's over-simplified), then we might say he already, right now, before interacting with his first rock, has a worldview such that rocks will be assigned the meaning "hard". So what difference does it make if Jack assigns that meaning now or later? What's so special about the act of assigning, when the result could have been worked out in advance? People may say Jack could change his worldview before seeing a rock. But that doesn't really change anything: as he changes his worldview, the implied meaning of rocks according to Jack changes as well. And what about assigning meaning to objects that don't exist anymore but that we've heard of? And objects that don't exist yet but will? Were cars meaningless until they were completely invented?
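Since the argument is already framed in terms of an assign-meaning function, it can be put in code (a toy sketch of the argument, nothing more): if the assigned meaning is a pure function of the worldview and the object, it is fully determined before any act of assignment ever runs, which is the point above.

```python
def meaning_of(worldview, obj):
    # A pure function: the result is fixed by its inputs alone,
    # so it is "worked out in advance" by the worldview itself.
    return worldview.get(obj, "unknown")

jacks_worldview = {"rock": "hard"}

# "Assigning on first encounter" adds nothing: the answer was already
# implied by the worldview before Jack ever saw a rock.
assert meaning_of(jacks_worldview, "rock") == "hard"

# If the worldview changes, the implied meaning changes with it --
# still no special moment of assignment.
jacks_worldview["rock"] = "throwable"
assert meaning_of(jacks_worldview, "rock") == "throwable"
```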

Elliot Temple | Permalink | Messages (2)


Lectures

I have downloaded some lectures about computer science to watch. Some are from a university, and some aren't.

I noticed the following:

I pause fairly often: sometimes to watch other videos and then go back, or because I want some other sound on (music, a person talking).

I also pause sometimes to read the blackboard or slide, or to consider something the lecturer said. If I don't understand a part, or have a question, I might go find out the answer before continuing. I might also try doing one of his examples.

I also need the option to pause if I think of something cool or important that I want to write about before I forget.

I skip forward or back in the lectures sometimes.

So far: parts of the lecture are too fast, parts are too slow, parts are boring, and I rarely want to hear it all in one sitting.

I multi-task a lot. I am writing this with a lecture on. I also burned DVDs, chatted with people on AIM, organised files better, and read news articles.

For especially interesting parts, I watch with my full attention, but for most parts I only pay half attention. Sometimes I stop listening and miss parts. Later, I might or might not go back to hear it.

Missing stuff is OK. It's not important to understand everything the lecturer says. Not all parts of a subject are best learned through a lecture. Some gaps in my knowledge will be much easier to fill in when writing code, or watching a different lecture, or reading a book, or talking to someone.

Missing stuff does not make it impossible to learn about the later things. There are a lot of ways to understand later concepts without the previous concepts. Often I can just assume some feature works the way he says it does, and then the later features make perfect sense. Often later concepts are separate from earlier ones (perhaps they are both building blocks relevant to the conclusion).

So overall: I like to have, and extensively use, control over when I hear what parts of the lecture. Sitting through an entire lecture at once, not doing other things, is never ideal. It's not important whether I get the main point of the lecture or not.

In conclusion: the format of school lectures may be hard to change due to the practical problems presented by having in-person lectures with many students at once. But they are far from ideal for learning.

Lecture Links (Lisp stuff):

Univ: http://swiss.csail.mit.edu/classes/6.001/abelson-sussman-lectures/

Not-Univ: http://www.iro.umontreal.ca/%7Eboucherd/mslug/meetings/20041020/minutes-en.html

Elliot Temple | Permalink | Messages (0)

Optimism

Paul Graham wrote:

Imagine if people in 1700 saw their lives the way we'd see them. It would have been unbearable. This denial is such a powerful force that, even when presented with possible solutions, people often prefer to believe they wouldn't work.

This is a very nice way to explain the issue, so I shall elaborate. People have, since the dawn of humanity, opposed new ideas that would reveal their lives as flawed, lacking, and even miserable. This leaves two viewpoints we can take about the present: either we are at the end of human progress and our lives have no serious flaws, or it is like 1700 and we are in denial about many problems.

Believing we are the best the Earth will ever offer goes against the facts. Everyone has problems; that's why it's possible to get a job as a psychotherapist or counselor. Saying there will be no more progress is really saying that whatever problems we have now cannot be solved. Why say that? Because then our suffering isn't our fault. It might be possible to argue that some of our problems are insoluble, but certainly not all or most of them.

That leaves the other option: just like in 1700, we are in denial. I think this is broadly the case. When we say that temper tantrums are an inevitable part of parenting, that is not because there is no possible way to avoid fighting with our children; it's because we don't want to see ourselves as failures. When we say children do bad things because they are children, that is avoiding facing the fact that we could have given better advice. (Some problems like that aren't foreseeable, but certainly some are.) When we say that "love hurts", we are denying that our own approach to relationships hurts us. When we divorce and insist vehemently that our partner is an evil bastard, we have to: if he wasn't a lying manipulator, then it would have been possible to see the flaws in the relationship in advance. We didn't choose the wrong person; he tricked us! In all these examples we might be blameless, but sometimes there is something we could have done better, and assuming it might be partially our fault will help us find that out.

I want to move past this to a kinder view that expects mistakes and problems, and sees finding them as a positive step. We should feel good about discovering we were wrong: now we have a better shot at being right next time. Or if there won't be a next time for us, at least we could tell our children. And if it's important enough, we could write a book and tell the world.

We all have a lot of bad ideas. That's understandable. And it's excusable -- no, better than that: we don't need any excuse at all. But let's at least get one thing right: we aren't perfect. Most of the problems we face are caused by human mistakes. That's the most optimistic belief we can have because humans are capable of correcting their own mistakes.

Elliot Temple | Permalink | Messages (3)

Anti-Human Views

This is an interesting example of a site that is against people having power and control over what they do.

Mixed in with a couple of reasonable arguments, it mostly opposes the "nofollow" feature on links (which makes search engines not count them) because it lets people control who they give link credit to: now they can sell it, or withhold credit from sites they don't like.

The anti-nofollow people act like nofollow is dangerous. But no one has to use it. What they really oppose is anyone who wants to use nofollow having the option to. In other words, they want to control people they disagree with. They advocate a world arranged so that no one can do anything but enact their theory of how to live.

Usually anti-human-power views are associated with, say, gun control advocates or people who hate technology. But I think it's very widespread. Any sort of authoritarian view that does not want to allow people to make their own choices is anti-human.

Elliot Temple | Permalink | Messages (0)

Google and the Anti-Capitalism of the Right

Blogs are slamming Google for cooperating with China.

I am saddened and dismayed to see anti-capitalist and anti-corporate rhetoric, especially from right-of-center blogs. Making a mistake is one thing; maybe Google did. But assuming the cause must be the profit motive is anti-capitalist. There are many ways to make a mistake that are not about greed.

I have not seen, in a single post, any actual evidence that Google is doing this out of greed. No arguments explaining why the profit motive causes mistakes. No quotes from Google executives advocating greed. No calculations about how much money Google will make by this decision, and whether that is enough to cause corruption. No discussion of whether this is profitable at all (generating negative publicity is bad for business). No explanations of why people with good ideals would turn to evil beyond assertions that money is a force for evil. The big fuss is, I have to say, nothing but unreflective calumny that one would normally expect only from very silly lefties.

I don't know if Google's cooperation is a mistake or not. I (and other bloggers) do not have the necessary inside information to accurately judge just what options Google had and exactly why it chose this. Guessing that Google is a sinful capitalist company may be fun, but it doesn't tell us why this happened or whether it was the right decision.

There are dual sins at work here. First we have the debate tactic of saying the people we disagree with have immoral motives (while failing to acknowledge their actual position). Second, we have the profit motive as the evil motive of choice.

Here is an example of an unfair headline:

Don't Be Evil - Unless It's Profitable
The Conservative Voice

The right-wing anti-capitalist pieces don't seriously argue their position. What could they say? That US corporations are greedy and corrupt, and if only we weren't capitalist we could live in freedom, with no censorship?

This piece calls Google evil, and suggests that caring about business may entice Google more deeply into evil. It suggests Google plans to notify users when search results are blocked, but it asserts that is only worth brownie points and makes Google a little less evil. It goes on to say:

They say that they will have a link somewhere on the Google.cn page enabling users to access the U.S.-hosted version at: http://www.google.com/ig?hl=zh-CN. So that Chinese users who prefer can opt for the pre-Google.cn experience.

but it doesn't believe this absolves Google of evil, because the link might not be displayed prominently enough. Evil is the premise, not the conclusion.

Here is Google's explanation of its decision.

Edit: "Very silly lefties" links to the Democratic Underground. If you don't believe in conspiracy theories, that doesn't apply to you.

Elliot Temple | Permalink | Messages (11)

There Are No Shortcuts To Knowledge

At a breakup, people realise, "he never knew me at all". Why were they fooled before? It's because he was running functions like this:

define-action "care": (lookup-and-say: "conventional-way-to-care")

instead of like this:

define-action "care": (lookup-and-say: "what-partner-cares-about")

So as long as the couple is roughly conventional, things seem to work. They seem to have instant knowledge of each other. But they don't actually have knowledge of each other, and that is revealed when they get into more subtle parts of their personalities and find differences from both convention and each other.

The second function will respond to the partner changing. The first will not and is thus deeply impersonal.
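The two pseudocode functions above could be fleshed out in Python (all names hypothetical, just to make the difference concrete):

```python
# What people conventionally do, independent of any particular partner.
CONVENTION = {"anniversary": "flowers"}

def care_conventional(partner, occasion):
    # Looks up the conventional gesture; the partner argument is ignored.
    return CONVENTION[occasion]

def care_personal(partner, occasion):
    # Looks up what this particular partner actually cares about,
    # so it responds when the partner changes.
    return partner["preferences"][occasion]

partner = {"preferences": {"anniversary": "flowers"}}

# While the partner is roughly conventional, the two are indistinguishable:
assert care_conventional(partner, "anniversary") == \
       care_personal(partner, "anniversary")

# When the partner changes, only the second function notices:
partner["preferences"]["anniversary"] = "hiking trip"
assert care_conventional(partner, "anniversary") == "flowers"
assert care_personal(partner, "anniversary") == "hiking trip"
```

The first function's output never depends on its `partner` argument at all, which is the code-level sense in which it is deeply impersonal.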

Elliot Temple | Permalink | Messages (0)