Here is a question for people who think Bayes' theorem holds answers for epistemology. Suppose we have a coin. We estimate the prior probabilities of heads and tails at 50% each. We flip the coin 5,000 times and it comes up tails every time. Now we want to update our estimates of the probabilities of heads and tails. If we flip it again, what should our new estimate of the chance of another tails be?
This is a very generous question. Choosing prior probabilities is itself a serious issue, but we grant the 50% priors here. Getting 5,000 data points, all with precise, unambiguous results, is not common in daily life. Plus the data can be summed up nicely and has a strong, easy-to-analyze pattern. And coin flipping is especially suited to a Bayesian approach; the setup is as generous as a problem about picking different colored marbles out of a bag. And I don't ask for an explanation, only for a new probability estimate, which is again what Bayes is all about.
But I don't think Bayesians can answer this (or any harder question). If one of them tries, here is what to say next: "Would you agree that some parts of what you just said are not implied by Bayes' theorem, but are extra things you've added?" When they agree they've stepped a little beyond the bounds of the formula itself, you can ask how much of their procedure for answering the question actually is Bayes' formula. And ask where this extra part comes from, where to find a rigorous statement of how it works, and so on.
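To make the "extra part" concrete, here is the standard textbook answer, as a minimal sketch in Ruby. Notice how much of it is modeling assumptions I am supplying, not Bayes' formula: the flips are treated as independent draws with a fixed unknown bias, and a uniform Beta(1, 1) prior is placed over that bias.

```ruby
# Beta-Binomial update for the coin. Every assumption marked below is
# extra machinery on top of Bayes' theorem itself.
tails, heads = 5000, 0
prior_tails, prior_heads = 1.0, 1.0 # ASSUMPTION: uniform Beta(1, 1) prior

# ASSUMPTION: flips are i.i.d., so the conjugate update applies:
# Beta(a, b) becomes Beta(a + tails, b + heads)
post_tails = prior_tails + tails
post_heads = prior_heads + heads

# Posterior predictive probability that the next flip is tails
# (Laplace's rule of succession)
p_next_tails = post_tails / (post_tails + post_heads)
puts p_next_tails # => 0.9998000799680128, i.e. 5001/5002
```

The formula only says how to combine a model with data. Choosing the model, choosing the prior, and deciding what counts as a repetition of "the same" event are the parts the formula doesn't supply.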
Now, here is a scenario for inductivists. I have a Rails application with a memory leak. I want to find the leak and fix it. How do I do it? You have a theory of knowledge, which is supposed to (along with deduction) explain how knowledge is created, right? So tell me how to create knowledge of my memory leak. Tell me how to solve a real problem.
I can repeat the test code which causes the leak thousands of times if you want. And I can run code that doesn't cause the leak thousands of times. You can have all the repeated observations you want. But I don't see how that would help. Tell me, Mr. Inductivist, what will repeating these observations accomplish? Should I get different Rails applications, perhaps thousands of them, and see if they leak memory? I can do that, but is it really going to tell me where the bug in my program is?
Here is how I actually find memory leaks. I make guesses about where the problem might be, and then I think of ways to test whether I'm right or wrong. For example, I guess the leak is in a certain section of code, then I delete that section, run the application, and see whether the leak goes away. Just like Popper said: guesses and criticism, trial and error.
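A minimal sketch of what one such test might look like in Ruby, where the hypothetical SuspectWidget.render stands in for whatever code the current guess blames:

```ruby
# Hammer the suspect code path and watch this process's memory.
def rss_kb
  `ps -o rss= -p #{Process.pid}`.to_i # resident set size in KB (Linux/macOS)
end

before = rss_kb
10_000.times { SuspectWidget.render } # HYPOTHETICAL: the code my guess blames
GC.start # discount memory the garbage collector would reclaim anyway
puts "RSS grew by #{rss_kb - before} KB"
# Roughly flat growth refutes the guess; steady growth keeps it alive.
```

The point is the structure: the guess comes first, and the run exists to criticize it.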
I also run some programs to get statistics. What statistics? The ones I guess might be useful, such as a list of the most numerous objects in memory. How do I get from this list to figuring out which code is to blame? Sometimes I don't. Other times I think "Oh, lots of widgets; I think I know where we create a lot of widgets" and I come up with a guess about which code is probably making them. None of this follows the inductivist model where you make repeated observations (of what? Just run the exact same thing over and over? If not, how do you decide what to observe?) and then infer the answer from the observations (so I observe the leak every day for 3 years, write down what happened each time, and then somehow infer from this what the problem is? That "somehow" is very vague, and it's where induction falls flat).
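For concreteness, here is a minimal sketch of one way to pull that statistic in plain Ruby, using the core ObjectSpace API (real heap profilers do more, but the idea is the same):

```ruby
# Count live objects by class and print the ten most numerous.
# Which statistic to collect is itself a guess about what might matter.
GC.start # collect garbage first so dead objects don't inflate the counts
counts = Hash.new(0)
ObjectSpace.each_object(Object) { |obj| counts[obj.class] += 1 }

counts.sort_by { |_, n| -n }.first(10).each do |klass, n|
  puts format('%-40s %8d', klass, n)
end
```

The list doesn't interpret itself; turning "lots of widgets" into a suspect piece of code is another guess.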
One of the general patterns this post illustrates is that bad philosophy can be dealt with by asking it to be effective: ask to see it in action, even just on simple, realistic examples.
See also:
Popper on Bayesians