Both are correct, but they target different things. The disagreement is about what the target should be, and about the advantages and disadvantages of each choice. Bayesians are interested in p(unknown | data) and frequentists in p(data | unknown = H0). Inference can be framed either way, but the two framings mean different things.
Are there any situations where you want to use a frequentist procedure?
I've concluded that given a perfect, infinite-power MCMC simulator, I would always do a Gelman-style Bayesian analysis (with model falsification and improvement), but in practice, frequentist methods are computationally convenient.
A Bayesian posterior P(H|D,M) is the probability that hypothesis H is true given data D and modelling assumptions M.
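As a concrete illustration (my own toy example, not from the thread), take H to be "the coin is biased towards heads", D to be 7 heads in 10 flips, and M to be a binomial likelihood with a flat Beta(1,1) prior on the heads probability. The posterior is then Beta(1 + heads, 1 + tails), and P(H|D,M) is the posterior mass above 0.5, which a short stdlib-only sketch can compute by numerical integration:

```python
from math import comb

# Toy example: H is "p_heads > 0.5", D is 7 heads in 3 tails,
# M is a binomial likelihood with a Beta(1,1) prior on p_heads.
heads, tails = 7, 3
a, b = 1 + heads, 1 + tails  # posterior is Beta(8, 4)

# Normalising constant 1/B(a, b) = (a+b-1)! / ((a-1)! (b-1)!)
norm = comb(a + b - 2, a - 1) * (a + b - 1)

# Midpoint-rule integration of the Beta(a, b) density over (0.5, 1)
# gives P(H|D,M) = P(p_heads > 0.5 | D, M).
steps = 100_000
width = 0.5 / steps
posterior_prob = sum(
    norm * (0.5 + (i + 0.5) * width) ** (a - 1)
         * (1 - (0.5 + (i + 0.5) * width)) ** (b - 1) * width
    for i in range(steps)
)
print(f"P(p_heads > 0.5 | D, M) = {posterior_prob:.3f}")  # ≈ 0.887
```

Note the output is a direct statement about the hypothesis itself, conditional on the data and the modelling assumptions (likelihood and prior) — exactly the quantity the frequentist machinery never produces.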
Sure, see my link above (http://stats.stackexchange.com/a/2287/1122). If you want to put an upper bound on the worst-case probability of making a mistake, you use a p-value. If you want to express the conditional probability of a particular hypothesis given the observation (and given a prior belief), you use a posterior probability. Bayesians can also do silly things (see the cookie example with the inept Bayesian robots). In the end there is no free lunch.
The frequentist p-value is about H0, not (directly) the hypothesis you are testing. More specifically, it is the probability, assuming H0 is true, of observing data at least as extreme as what was actually observed; rejecting H0 when the p-value falls below a threshold controls how often you reject H0 even though it is true.
They are both models and as such, you might consider that neither of them is "correct." But they are both useful, sometimes in different circumstances.
"Essentially, all models are wrong, but some are useful." — George Box