This article is wrong. Regularization is always preferred for logistic regression and should be the default in any real use case. Suppose your data is perfectly linearly separable: when logistic regression sees that it is assigning 96% probability to a given data point and could assign 99% (without misclassifying any other points), it will. Then it will try to assign 99.5%, and so on, until the coefficients diverge to infinity (in high dimensions, separability is very common). Yes, statsmodels's defaults assume the least, but statsmodels also won't automatically do things like adding an intercept to OLS, because it is mostly a backend for other libraries' interfaces.
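You can see the divergence directly with a tiny sketch (plain NumPy gradient ascent on the log-likelihood, so nothing here is tied to a particular library's API): on separable data the unregularized coefficient just keeps growing with more iterations, while even a small L2 penalty pins it at a finite optimum.

```python
import numpy as np

# Toy 1-D perfectly separable data: negatives below 0, positives above.
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])

def fit_logistic(X, y, l2=0.0, steps=20000, lr=0.1):
    """Gradient ascent on the (optionally L2-penalized) log-likelihood
    of a no-intercept logistic model p = sigmoid(w * x)."""
    w = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-w * X))
        grad = np.sum((y - p) * X) - l2 * w  # penalty term vanishes when l2=0
        w += lr * grad
    return w

w_unreg = fit_logistic(X, y, l2=0.0)  # grows without bound as steps increase
w_reg = fit_logistic(X, y, l2=1.0)    # settles at a finite optimum
print(w_unreg, w_reg)
```

With separable data the unregularized gradient never reaches zero, so `w_unreg` grows roughly like the log of the iteration count; double `steps` and it keeps climbing, which is exactly the coefficient blow-up described above.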
EDIT: oops, funklute said this an hour ago and apparently got downvoted. He is correct