To understand the tradeoff, we first need to recall what bias and variance are:
Bias is the error introduced by the simplifying assumptions a model makes. It determines how hard it is for the model to learn complex decision functions, that is, to make good predictions on "tricky" data.
Low bias - the model is flexible enough that it does not make oversimplifying assumptions about the data. Models that typically have low bias include neural networks, random forests, and support vector machines.
High bias - the model makes strong assumptions that must hold for it to predict well. High-bias models fail to capture the patterns in the training data and show high error on both training and test data (underfitting). Models with high bias include linear regression and logistic regression.
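A quick illustration of underfitting, as a sketch assuming NumPy: a degree-1 polynomial fit (i.e., linear regression) to data generated from a sine curve is too simple for the shape of the data, so its error stays high even on the training set itself.

```python
import numpy as np

# Generate training data from a curve, with a small amount of noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 3, 30)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=x.size)

# A degree-1 polynomial is a straight line: a high-bias model here.
coeffs = np.polyfit(x, y, deg=1)
pred = np.polyval(coeffs, x)

# Training error stays large relative to the 0.1 noise level,
# because no straight line can follow the sine wave.
train_error = np.mean((y - pred) ** 2)
print(round(train_error, 3))
```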
Variance is the error caused by a model's sensitivity to small fluctuations in the training data. A high-variance model fits the noise of its particular training set, so it performs well on training data but poorly on unseen data (overfitting).
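Variance can be made visible by refitting the same model on training sets that differ only in their noise. In this sketch (assuming NumPy; the degree-10 polynomial stands in for a flexible, low-bias model), the flexible model's prediction at a fixed test point jumps around far more across resampled training sets than a straight line's does:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 15)

def fit_and_predict(deg, x_query):
    # Fresh noise each call: a new training set from the same true curve.
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
    return np.polyval(np.polyfit(x, y, deg), x_query)

x_query = 0.5
flexible = [fit_and_predict(10, x_query) for _ in range(20)]  # low bias, high variance
rigid = [fit_and_predict(1, x_query) for _ in range(20)]      # high bias, low variance

# The spread of predictions across training sets is the variance.
print(np.std(flexible), np.std(rigid))
```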
The tradeoff is that reducing one component tends to increase the other: picking linear regression to get low variance comes at the cost of high bias. But you can use these concepts to create a practical solution that works, by choosing a model complexity that balances the two.
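The balancing act above can be sketched with a complexity sweep, assuming NumPy and using polynomial degree as a stand-in for model complexity: a low degree underfits (high error everywhere), a very high degree overfits (low training error, worse held-out error), and a moderate degree does best on held-out data.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n=40):
    # Noisy samples from a fixed underlying curve.
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)

x_train, y_train = make_data()
x_test, y_test = make_data()

def mse(deg):
    # Fit on the training set, report (training error, held-out error).
    c = np.polyfit(x_train, y_train, deg)
    return (np.mean((y_train - np.polyval(c, x_train)) ** 2),
            np.mean((y_test - np.polyval(c, x_test)) ** 2))

for deg in (1, 4, 15):
    train_err, test_err = mse(deg)
    print(deg, round(train_err, 3), round(test_err, 3))
```

Training error only ever falls as the degree grows, so it cannot guide the choice; the held-out error is what reveals the sweet spot between bias and variance.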