For those of you who have followed my posts in the past, you may call me out as a hypocrite. If you don’t know what I am talking about, you can view any article in my Algorithms From Scratch series:
I was reading through an article that presented its code examples in Medium’s code cells, and I got so frustrated. I ended up copying and pasting the code into my Visual Studio environment, and even that wasn’t enough to save it: there were no indentations, so I basically had to clean the code up myself.
If it wasn’t urgent that I understood the topic, I’d probably have clicked off that article and searched for a new one. It was a gruelling experience, to say the least.
def plot_svm():
    ## https://scikit-learn.org/stable/auto_examples/svm/plot_separating_hyperplane.html
    ## getting the decision function
    plt.figure(figsize=(10, 5))
    decision_function = svc.decision_function(X)
    support_vector_indices = np.where((2 * y - 1) * decision_function <= 1)[0]
    support_vectors = X.iloc[support_vector_indices]
    ## plot observations
    plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.Paired)
    ## plot the decision function
    ax = plt.gca()
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()
    ## creating the grid to evaluate the model
    xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 50),
                         np.linspace(ylim[0], ylim[1], 50))
    Z = svc.decision_function(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    ## plot decision boundaries and margins
    plt.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
                linestyles=['--', '-', '--'])
    ## plot support vectors
    plt.scatter(support_vectors.iloc[:, 0], support_vectors.iloc[:, 1], s=100,
                linewidth=1, facecolors='none', edgecolors='k')
    plt.title("Linear SVM (Hard Margin Classification)")
    plt.tight_layout()
    plt.show()
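Note that plot_svm assumes a fitted classifier named svc and the training data X and y already exist in scope. A minimal setup that would make it runnable might look like the sketch below; the toy data and variable names are my own choices, mirroring the snippet above (the linked scikit-learn example similarly uses a linear-kernel SVC with a large C to approximate a hard margin).

```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

## two well-separated blobs, wrapped in a DataFrame
## because plot_svm indexes X with .iloc
X_arr, y = make_blobs(n_samples=40, centers=2, random_state=0)
X = pd.DataFrame(X_arr, columns=["x1", "x2"])

## a linear-kernel SVC with a large C behaves like a hard-margin classifier
svc = SVC(kernel="linear", C=1000)
svc.fit(X, y)
```

With this in place, calling plot_svm() draws the observations, the decision boundary with its margins, and the support vectors.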
I don’t know if it is just me, but I find this extremely difficult to read. There is a way to format code in Medium’s code cells to make it appear more readable, as I’ve done below…
def gradient_descent(X, y, params, alpha, n_iter):
    """
    Gradient descent to minimize cost function
    __________________
    Input(s)
    X: Training data
    y: Labels
    params: Dictionary containing random coefficients
    alpha: Model learning rate
    n_iter: Number of iterations
    __________________
    Output(s)
    params: Dictionary containing optimized coefficients
    """
    W = params["W"]
    b = params["b"]
    m = X.shape[0] ## number of training instances
    for _ in range(n_iter):
        ## prediction with random weights
        y_pred = np.dot(X, W) + b
        ## taking the partial derivative of coefficients
        dW = (2/m) * np.dot(X.T, (y_pred - y))
        db = (2/m) * np.sum(y_pred - y)
        ## updates to coefficients
        W -= alpha * dW
        b -= alpha * db
    params["W"] = W
    params["b"] = b
    return params
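A quick way to convince yourself the update rule above works is to run it on a noiseless line. The sketch below repeats the same dW and db updates inline on toy data of my own choosing (y = 3x + 2): starting from zero coefficients, the loop recovers the true slope and intercept.

```python
import numpy as np

## toy data: y = 3x + 2, no noise
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3 * X[:, 0] + 2
m = X.shape[0]  ## number of training instances

## start from zero coefficients
W, b = np.zeros(1), 0.0
alpha, n_iter = 0.01, 5000
for _ in range(n_iter):
    y_pred = np.dot(X, W) + b                   ## current predictions
    dW = (2 / m) * np.dot(X.T, (y_pred - y))    ## gradient w.r.t. W
    db = (2 / m) * np.sum(y_pred - y)           ## gradient w.r.t. b
    W -= alpha * dW
    b -= alpha * db

print(W, b)  ## approaches [3.] and 2.
```

The same result comes from calling gradient_descent with params = {"W": np.zeros(1), "b": 0.0}; the learning rate and iteration count here are just values that happen to converge for this scale of data.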
If you noticed, I’ve manually highlighted the Python built-ins in bold font and indented everything, so it is pretty readable. I must have had time on my hands that day, because if you are writing a full technical blog (as I had for that one), doing this for every bit of code you write takes very, very long.
A better strategy is to use a GitHub Gist. For the veterans on here, you’ll know this is far from some revolutionary discovery. I am going to take the first code cell (with the difficult-to-read code) and create a GitHub Gist from it.
I am not a maths genius, but I am sure we can all agree that this is 3,000 times clearer than the Medium code cell we saw earlier. And to be honest, it makes sense why…
#coding #programming #opinion #advice #technical-writing