There seems to be a general misconception that Bayesian methods are harder to implement than Frequentist ones. Sometimes this is true, but more often existing R and Python libraries can help simplify the process.
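As a taste of that simplicity, here is a minimal sketch of what fitting a Bayesian LMM can look like in R. It assumes the `brms` package and a hypothetical data frame `df` with a response `y`, a predictor `x`, and a grouping factor `group` — none of which come from a specific dataset:

```r
# Illustrative sketch only — assumes `df` has columns y, x, and group.
library(brms)

# Bayesian linear mixed model: fixed effect for x, random intercept per group.
# brms compiles the model to Stan and samples the posterior via MCMC,
# so no hand-written sampler is required.
fit <- brm(y ~ x + (1 | group), data = df)

summary(fit)  # posterior summaries for the fixed and random effects
```

The formula syntax mirrors `lme4`, which is part of why the learning curve is gentler than people expect.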

Simpler to implement ≠ throw in some data and see what sticks. (We already have machine learning for that. :P)

People make Bayesian methods sound more complex than they are, mostly because there’s a lot of jargon involved (e.g. _weak priors_, _posterior predictive distributions_, etc.) which isn’t intuitive unless you have previously worked with these methods.

Adding to that misconception is a clash of ideologies — “Frequentist vs. Bayesian.” (If you weren’t aware, well, now you know.) The problem is that people, statisticians especially, quarrel over which methods are more powerful, when in fact the answer is “it depends.”

Bayesian methods, like any others, are just tools at our disposal. They have advantages and disadvantages.

So, with some personal “hot takes” out of the way, let’s move on to the fun stuff and implement some Bayesian linear mixed models (LMMs)!
