All animals face the challenge of making inferences about current and future states of the world from uncertain sensory information. One might expect animals to perform better at such tasks by using more complex algorithms and models to extract and process the pertinent information. In fact, however, theory predicts circumstances in which simpler models of the world are more effective than complex ones, even when the complex models more closely approximate the truth. Using information theory, we demonstrate this point in two ways. First, we show that when data are sparse or noisy, less complex inferred models give better predictions of the future. In this form of Occam's razor, a model family is more complex if it has more parameters, describes a greater number of distinguishable models, or depends more sensitively on its parameters. Second, even in situations where complex models do give better predictions, cognitive and computational costs typically grow with complexity, subjecting the models to a law of diminishing returns. Finally, we present experimental results showing that human inference behavior matches our theoretical predictions.
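To make the first claim concrete, the following is a minimal numerical sketch, not taken from the paper itself: polynomial model families of increasing complexity are fit by least squares to a few noisy samples, and their out-of-sample prediction error is compared. The ground-truth curve, sample sizes, and noise level are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ground truth the models try to recover (illustrative choice).
    def truth(x):
        return np.sin(2.0 * x)

    # Sparse, noisy training data.
    n_train, noise = 10, 0.3
    x_train = rng.uniform(-1.0, 1.0, n_train)
    y_train = truth(x_train) + noise * rng.standard_normal(n_train)

    # Dense, noiseless test grid for measuring predictive error.
    x_test = np.linspace(-1.0, 1.0, 200)
    y_test = truth(x_test)

    # Model families of increasing complexity (polynomial degree).
    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
        mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: test MSE = {mse:.3f}")

With only ten noisy samples, the degree-9 family typically interpolates the noise and predicts far worse than the degree-3 family, even though it contains closer approximations to the truth, illustrating the Occam's-razor effect described above.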