Occam's razor (roughly) states that:
Given two hypotheses that make equivalent predictions, we should use the simpler one
Regularization encourages simpler models. Some methods penalize large parameter values (so that no single parameter dominates the model), while others force parameters to exactly zero (effectively removing them).
Both regularization methods presented below work by adding a regularization term to the loss function. That means you can use them regardless of which loss function you choose.
L1 regularization can force model parameter values to exactly 0. In effect, this can eliminate some of the features/dimensions of your data. The reduced number of parameters also makes your model easier to interpret.
L1 regularization (also known as Lasso) works by adding the following term to the original loss function:

$$\lambda \sum_{i=1}^{n} |\theta_i|$$

where $\lambda$ is a user-defined hyperparameter and $\theta_i$ are the model parameters.
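As a minimal sketch of how the penalty combines with an existing loss (the function name, parameter values, and λ = 0.1 here are illustrative, not from the original text):

```python
import numpy as np

def l1_regularized_loss(original_loss, weights, lam):
    # Add the L1 penalty (lam * sum of absolute parameter values)
    # to whatever base loss was already computed.
    return original_loss + lam * np.sum(np.abs(weights))

w = np.array([0.5, -2.0, 0.0, 3.0])
base = 1.25  # e.g. the mean squared error on a batch
print(l1_regularized_loss(base, w, lam=0.1))  # 1.25 + 0.1 * 5.5 = 1.8
```

Because the absolute value penalizes small and large parameters at the same rate, gradient-based optimization can push individual weights all the way to 0, which is why L1 performs feature selection.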
The main difference between L1 and L2 regularization is that L2 doesn't force parameter values to zero. Instead of taking absolute values, it squares the parameter values. Here is the term it adds to the original loss function:

$$\lambda \sum_{i=1}^{n} \theta_i^2$$

When using L2 (also known as Ridge) regularization, the final model will include all parameters, making it harder to interpret.
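The L2 term differs from the L1 sketch only in the penalty. A minimal illustration (again with made-up values; only the squared-sum form comes from the definition above):

```python
import numpy as np

def l2_regularized_loss(original_loss, weights, lam):
    # L2 penalty: lam * sum of squared parameter values. Large weights
    # are penalized quadratically, so they shrink toward 0 but rarely
    # reach exactly 0.
    return original_loss + lam * np.sum(weights ** 2)

w = np.array([0.5, -2.0, 0.0, 3.0])
print(l2_regularized_loss(1.25, w, lam=0.1))  # 1.25 + 0.1 * 13.25 = 2.575
```

Note how the weight 3.0 contributes 9.0 to the L2 sum but only 3.0 to the L1 sum: squaring makes the penalty fall disproportionately on the largest parameters, which is why L2 shrinks them without zeroing them out.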
Choosing a value for the $\lambda$ parameter depends on your model and training data, so it is usually tuned empirically, for example by comparing candidate values on held-out data.
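One common way to tune λ empirically is a grid search over a held-out validation set. This sketch uses synthetic data and the closed-form ridge solution; the candidate grid, data shapes, and split are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends only on the first feature; the rest are noise.
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

X_train, y_train = X[:80], y[:80]
X_val, y_val = X[80:], y[80:]

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Try each candidate lambda and keep the one with the lowest
# validation error.
best_lam, best_err = None, np.inf
for lam in [0.001, 0.01, 0.1, 1.0, 10.0]:
    w = ridge_fit(X_train, y_train, lam)
    err = np.mean((X_val @ w - y_val) ** 2)
    if err < best_err:
        best_lam, best_err = lam, err

print("best lambda:", best_lam)
```

In practice you would use k-fold cross-validation rather than a single split, but the idea is the same: λ is selected by measuring generalization, not fit to the training data.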