For a given model, the sum of squared residuals is calculated as $$ SS_{res} = \sum_{i=1}^n (y_i - \hat{y_i})^2 $$
For a baseline model whose output is always the average value of $y$, the total sum of squares is $$ SS_{tot} = \sum_{i=1}^n (y_i - y_{avg})^2 $$
R Squared is defined as $$ R^2 = 1 - \frac{SS_{res}}{SS_{tot}} $$
Hypothesis: $R^2$ will never decrease when a new variable is added to the model
When you have a model with $p$ variables, the fitting procedure minimises the error. When you add the $(p+1)$-th variable, the model will again minimise the error by assigning that variable whatever coefficient helps. In the worst case, if the new variable doesn't help at all, the model can simply assign it a coefficient of 0, leaving the fit unchanged. Hence, $R^2$ will never decrease.
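A quick sketch of this claim, using plain NumPy least squares on simulated data (the variables and seed are made up for illustration): even when the extra column is pure noise, $R^2$ for the larger model is at least as high.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
noise_col = rng.normal(size=n)          # pure noise, unrelated to y
y = 2.0 * x1 + rng.normal(size=n)

def r_squared(X, y):
    """Fit OLS with an intercept and return R^2 = 1 - SS_res / SS_tot."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_small = r_squared(x1.reshape(-1, 1), y)
r2_big = r_squared(np.column_stack([x1, noise_col]), y)

# R^2 of the larger model never drops below the smaller model's,
# because the fit could always set the noise column's coefficient to 0.
print(r2_small, r2_big)
```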
So, the solution is to use adjusted $R^2$ which is given by
$$ R^2_{adj} = 1 - (1 - R^2) \frac{n - 1}{n - p - 1} $$
where $p$ = number of regressors (independent variables) and $n$ = sample size.
So essentially, adjusted $R^2$ penalizes the model for the number of variables used. Adding a variable becomes a trade-off between the increase in $R^2$ and the penalty brought by the extra regressor.
Just because the coefficient of a variable is high, it doesn't mean the variable is more strongly correlated with the outcome. We should look at the units when interpreting a coefficient. The best way is to consider the change in the output for a unit change in the input. For instance, if the coefficient is 0.79, then for a unit change, i.e. for each additional dollar added to that column, the profit increases by 79 cents.
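The units point can be sketched as follows (a made-up "spend vs profit" dataset): expressing the same spend column in cents instead of dollars shrinks the coefficient by a factor of 100, yet the correlation with profit is unchanged, so coefficient magnitude alone says nothing about strength of association.

```python
import numpy as np

rng = np.random.default_rng(2)
spend_dollars = rng.uniform(0, 1000, size=200)
profit = 0.79 * spend_dollars + rng.normal(scale=5, size=200)

def slope(x, y):
    """Slope of an OLS fit of y on x with an intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

b_dollars = slope(spend_dollars, profit)        # profit per extra dollar
b_cents = slope(spend_dollars * 100, profit)    # same data in cents
corr = np.corrcoef(spend_dollars, profit)[0, 1]

# b_dollars is ~100x b_cents purely because of the unit change;
# the underlying relationship (and correlation) is identical.
print(b_dollars, b_cents, corr)
```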