The loss function used in regression is the squared error. For a single training example it is

  L(f_{w,b}(x^(i)), y^(i)) = (f_{w,b}(x^(i)) - y^(i))^2

where f_{w,b}(x^(i)) = w · x^(i) + b is the model's prediction for example i. Averaging this loss over all m training examples, with a conventional factor of 1/2 that simplifies the gradient, gives the mean squared error cost function:

  J(w,b) = (1 / (2m)) * Σ_{i=1}^{m} (f_{w,b}(x^(i)) - y^(i))^2
Code
import numpy as np

def compute_cost(X, y, w, b):
    """
    Compute the cost over all examples.
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        f_wb_i = np.dot(X[i], w) + b      # (n,)·(n,) = scalar (see np.dot)
        cost = cost + (f_wb_i - y[i])**2  # scalar
    cost = cost / (2 * m)                 # scalar
    return cost

cost = compute_cost(X_train, y_train, w_init, b_init)
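The per-example loop can also be replaced by a single vectorized NumPy expression, which computes the same J(w,b) without Python-level iteration. The sketch below is one way to do this; the small dataset (`X_train`, `y_train`, `w_init`, `b_init`) is a made-up example chosen so the model fits exactly, not data from the course.

```python
import numpy as np

def compute_cost_vectorized(X, y, w, b):
    """Vectorized cost: J(w,b) = (1/(2m)) * sum((X @ w + b - y)**2)."""
    m = X.shape[0]
    err = X @ w + b - y            # (m,) vector of residuals
    return np.dot(err, err) / (2 * m)

# Hypothetical data: y = 2*x1 + 3*x2 + 1, so these parameters fit exactly
X_train = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y_train = np.array([9.0, 19.0, 29.0])
w_init = np.array([2.0, 3.0])
b_init = 1.0

print(compute_cost_vectorized(X_train, y_train, w_init, b_init))  # → 0.0
```

Because the parameters reproduce every target, each residual is zero and the cost is 0; perturbing `w` or `b` makes the cost strictly positive, which is the behavior gradient descent exploits.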