1624392420
Do you remember the first time you touched the computer keyboard?
Do you remember the typewriter clanging sound?
Do you remember your first HTML rendered on the world wide web or your first “hello world” application?
Maybe you were a gamer who blew into the cartridges on your family computer?
No doubt, your first lines of code relied on the combined outcomes of thousands of years of accumulated knowledge and wisdom.
I’m your host Kassandra Russel and today, we will navigate through history to discover how our ancestors made knowledge out of information. We’ll talk about the technologies that have marked contemporary and modern history. Then we’ll go back to modern times to talk about the first web server, virtualization, cloud computing, Docker, and Kubernetes. This is the tale of computing, in which humans are heroes, and their greatest weapon is the imagination.
#kubernetes
1598770012
We learn and know (hopefully) a basic history of the world, particularly major events like the French Revolution, the American Civil War, World War I, World War II (wow, lots of wars), the Space Age, etc. It is important to understand the concepts behind these and many other historical events. Being able to recall the start year or the exact details of how such events unfolded is one thing, but on a human level, it is more important to understand the rationale, lessons and philosophy of major events. Ultimately, history teaches us what makes us innately human. Furthermore, understanding history helps us realise how and why we operate the way we do today. History provides the context for today. It makes today seem 'obvious,' 'justifiable' and 'logical' given the previous events that unfolded.
So, following this thread of logic, understanding the history of computers should help us understand how we have got to today. A today when computers moderate much of our communication with one another. A today where computers and screens are stared at for many (and often a majority) of our waking hours (especially during Covid). A today where the thought of working, socialising or learning without a computer would be an affront and a disadvantage. Just as major events like World War II and the Cold War have greatly contributed to today's political and social climate, I would argue computers influence just as much (if not more) of our daily lives.
I would like to preface the following articles outlining the history of computers by saying that this is in no way an exhaustive history of the origin of computers. Some major events have been glossed over, while other meaningful contributions have been omitted entirely.
Whilst the thought of history may make some eyes automatically glaze over, I will try to make the following series as painless and exciting as possible. While I paint a story of linear progress in computation, this is hindsight bias in action. We like to create a story of history, attributing particular importance to some events and not others, when in reality, as these events were unfolding (and continue to unfold), it was not always obvious what was a gigantic discovery. It is only now, with some distance, that we can appreciate past events. This means that perhaps in ten years this recount will emphasise other features and neglect some of the stories we today find so foundational to the computer's creation.
With all this in mind, let's begin!
Since their inception, computers have taken over human work by performing tedious, complex and repetitive tasks. Interestingly, the word computer initially described humans! Initially, computers were humans (often women) who were able to perform complex mathematical computations — usually with pen and paper. Often teams would work on the same calculation independently to confirm the end results. It is interesting to note that when electronic computers were first developed, they were referred to as exactly that — electronic computers. With time, as electronic computers became more and more pervasive and powerful, it was the human computer that was deemed obsolete and inefficient. The 'electronic' was dropped, and now when we discuss computers we think of nothing else besides our graceful and versatile electronic tools. It is important to keep the computer's mathematical origin in mind, as it only further emphasises the never-imagined pervasiveness and uses of computers today.
Our story begins with the humble abacus, generally considered the first computer. When researching, I was puzzled how an abacus could be considered a computer. Luckily my curiosity was settled by a quick Google search (thank you Google). Google was even able to suggest my search before I had finished typing 'Why is the abacus considered the first computer'! I ended up on trusty Quora, where one user, Chrissie Nysen, put things simply: "Because it is used to compute things." Though estimates vary, the abacus is thought to have originated in Babylon approximately 5,000 years ago. Its role was to assist with simple mathematical calculations — addition, subtraction, division and multiplication. In this sense, we can consider the abacus a simple calculator. As farming, produce and populations increased in size, the abacus allowed the educated to more easily manage logistics. After the abacus, the first computer, computing's evolution remained dormant for some time…
#history #computer-science #history-of-technology #computers #computer-history #data-science
1595547778
Developing a mobile application can often be more challenging than it seems at first glance. Whether you’re a developer, UI designer, project lead or CEO of a mobile-based startup, writing good project briefs prior to development is pivotal. According to Tech Jury, 87% of smartphone users spend time exclusively on mobile apps, with 18-24-year-olds spending 66% of total digital time on mobile apps. Of that, 89% of the time is spent on just 18 apps depending on individual users’ preferences, making proper app planning crucial for success.
Today’s audiences know what they want and don’t want in their mobile apps, encouraging teams to carefully write their project plans before they approach development. But how do you properly write a mobile app development brief without sacrificing your vision and staying within the initial budget? Why should you do so in the first place? Let’s discuss that and more in greater detail.
It's worth discussing the significance of mobile app project briefs before we tackle the writing process itself. In practice, a project brief is used as a reference tool that keeps developers focused on the client's deliverables. Approaching the development process without written and approved documentation can lead to drastic last-minute changes, misunderstandings, and a loss of resources and brand reputation.
For example, developing a mobile app that filters restaurants based on food type, such as Happy Cow, means that developers should stay focused on it. Knowing that such and such features, UI elements, and API are necessary will help team members collaborate better in order to meet certain expectations. Whether you develop an app under your brand’s banner or outsource coding and design services to would-be clients, briefs can provide you with several benefits:
Depending on how “open” your project is to the public, you will want to write a detailed section about who the developers are. Elements such as company name, address, project lead, project title, as well as contact information, should be included in this introductory segment. Regardless of whether you build an in-house app or outsource developers to a client, this section is used for easy document storage and access.
#android app #ios app #minimum viable product (mvp) #mobile app development #web development #how do you write a project design #how to write a brief #how to write a project summary #how to write project summary #program brief example #project brief #project brief example #project brief template #project proposal brief #simple project brief template
1641805837
The final objective is to estimate the cost of a house in a Boston suburb, using data provided by the Boston Standard Metropolitan Statistical Area in 1970. To examine and modify the data, we will use several techniques such as data pre-processing and feature engineering. After that, we'll apply a statistical model, such as a regression model, to predict and monitor the real estate market.
Project Outline:
Before using a statistical model, exploratory data analysis (EDA) is a good step to go through in order to understand the data: its missing values, summary statistics, distributions, outliers and correlations.
# Import the libraries

# Dataframe/numerical libraries
import pandas as pd
import numpy as np

# Data visualization
import plotly.express as px
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

# Machine learning model
from sklearn.linear_model import LinearRegression
# Reading the data (the raw file has no header row)
path = './housing.csv'
housing_df = pd.read_csv(path, header=None, delim_whitespace=True)

# Assign the standard Boston housing column names so columns can be referenced by name below
housing_df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
                      'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
 | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.00632 | 18.0 | 2.31 | 0 | 0.538 | 6.575 | 65.2 | 4.0900 | 1 | 296.0 | 15.3 | 396.90 | 4.98 | 24.0 |
1 | 0.02731 | 0.0 | 7.07 | 0 | 0.469 | 6.421 | 78.9 | 4.9671 | 2 | 242.0 | 17.8 | 396.90 | 9.14 | 21.6 |
2 | 0.02729 | 0.0 | 7.07 | 0 | 0.469 | 7.185 | 61.1 | 4.9671 | 2 | 242.0 | 17.8 | 392.83 | 4.03 | 34.7 |
3 | 0.03237 | 0.0 | 2.18 | 0 | 0.458 | 6.998 | 45.8 | 6.0622 | 3 | 222.0 | 18.7 | 394.63 | 2.94 | 33.4 |
4 | 0.06905 | 0.0 | 2.18 | 0 | 0.458 | 7.147 | 54.2 | 6.0622 | 3 | 222.0 | 18.7 | 396.90 | 5.33 | 36.2 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
501 | 0.06263 | 0.0 | 11.93 | 0 | 0.573 | 6.593 | 69.1 | 2.4786 | 1 | 273.0 | 21.0 | 391.99 | 9.67 | 22.4 |
502 | 0.04527 | 0.0 | 11.93 | 0 | 0.573 | 6.120 | 76.7 | 2.2875 | 1 | 273.0 | 21.0 | 396.90 | 9.08 | 20.6 |
503 | 0.06076 | 0.0 | 11.93 | 0 | 0.573 | 6.976 | 91.0 | 2.1675 | 1 | 273.0 | 21.0 | 396.90 | 5.64 | 23.9 |
504 | 0.10959 | 0.0 | 11.93 | 0 | 0.573 | 6.794 | 89.3 | 2.3889 | 1 | 273.0 | 21.0 | 393.45 | 6.48 | 22.0 |
505 | 0.04741 | 0.0 | 11.93 | 0 | 0.573 | 6.030 | 80.8 | 2.5050 | 1 | 273.0 | 21.0 | 396.90 | 7.88 | 11.9 |
CRIM: the per capita crime rate by town.
ZN: the proportion of residential land zoned for lots larger than 25,000 square feet.
INDUS: the proportion of non-retail business acres per town.
CHAS: whether or not the tract bounds the Charles River (1 if it does, 0 otherwise).
NOX: the nitric oxides concentration (parts per 10 million).
RM: the average number of rooms per dwelling.
AGE: the proportion of owner-occupied units built before 1940.
DIS: the weighted distances to five Boston employment centres.
RAD: an index of accessibility to radial highways.
TAX: the full-value property tax rate per $10,000.
B: computed as B = 1000(Bk − 0.63)², where Bk is the proportion of Black residents by town.
PTRATIO: the pupil-teacher ratio by town.
LSTAT: the percentage of the population of lower socioeconomic status.
MEDV: the median value of owner-occupied homes, in thousands of dollars.
# Check if there are any missing values
housing_df.isna().sum()

CRIM       0
ZN         0
INDUS      0
CHAS       0
NOX        0
RM         0
AGE        0
DIS        0
RAD        0
TAX        0
PTRATIO    0
B          0
LSTAT      0
MEDV       0
dtype: int64
No missing values are found
We examine our data's mean, standard deviation, and percentiles.
housing_df.describe()
 | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 |
mean | 3.613524 | 11.363636 | 11.136779 | 0.069170 | 0.554695 | 6.284634 | 68.574901 | 3.795043 | 9.549407 | 408.237154 | 18.455534 | 356.674032 | 12.653063 | 22.532806 |
std | 8.601545 | 23.322453 | 6.860353 | 0.253994 | 0.115878 | 0.702617 | 28.148861 | 2.105710 | 8.707259 | 168.537116 | 2.164946 | 91.294864 | 7.141062 | 9.197104 |
min | 0.006320 | 0.000000 | 0.460000 | 0.000000 | 0.385000 | 3.561000 | 2.900000 | 1.129600 | 1.000000 | 187.000000 | 12.600000 | 0.320000 | 1.730000 | 5.000000 |
25% | 0.082045 | 0.000000 | 5.190000 | 0.000000 | 0.449000 | 5.885500 | 45.025000 | 2.100175 | 4.000000 | 279.000000 | 17.400000 | 375.377500 | 6.950000 | 17.025000 |
50% | 0.256510 | 0.000000 | 9.690000 | 0.000000 | 0.538000 | 6.208500 | 77.500000 | 3.207450 | 5.000000 | 330.000000 | 19.050000 | 391.440000 | 11.360000 | 21.200000 |
75% | 3.677083 | 12.500000 | 18.100000 | 0.000000 | 0.624000 | 6.623500 | 94.075000 | 5.188425 | 24.000000 | 666.000000 | 20.200000 | 396.225000 | 16.955000 | 25.000000 |
max | 88.976200 | 100.000000 | 27.740000 | 1.000000 | 0.871000 | 8.780000 | 100.000000 | 12.126500 | 24.000000 | 711.000000 | 22.000000 | 396.900000 | 37.970000 | 50.000000 |
The crime rate (CRIM), residential land (ZN), non-retail business share (INDUS), nitric oxides (NOX) and 'B' columns appear to have multiple outliers at first glance, because their minimum and maximum values are so far apart. In the AGE column, the mean and the median (50th percentile) do not match.
We might double-check it by examining the distribution of each column.
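The article does not show the plotting code for these distributions; a minimal sketch using the matplotlib and seaborn imports above (and assuming seaborn ≥ 0.11 for histplot) might look like this:

# Plot the distribution of every column to spot skew and outliers
fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(16, 12))
for ax, col in zip(axes.flatten(), housing_df.columns):
    sns.histplot(housing_df[col], ax=ax, kde=True)  # histogram with a density curve
    ax.set_title(col)
plt.tight_layout()
plt.show()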
Removing all outliers would make the model overly generic and underfit, while keeping all of them lets the model learn the data's noise and overfit. The approach is to find a happy medium: a model that is not overly precise on the training data but still generalises well when faced with a new set of data.
We'll keep only the rows with TAX below 600, because there is a large anomalous cluster in the TAX column around 600.
new_df = housing_df[housing_df['TAX'] < 600]
The overall distribution, particularly the TAX, PTRATIO, and RAD, has improved slightly.
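The post refers to a before/after distribution plot that is not reproduced here; a hedged sketch of how such a comparison might be generated for the TAX column:

# Compare the TAX distribution before and after dropping the cluster around 600
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
sns.histplot(housing_df['TAX'], ax=ax1).set_title('TAX (original)')
sns.histplot(new_df['TAX'], ax=ax2).set_title('TAX (TAX < 600)')
plt.tight_layout()
plt.show()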
In the correlation heatmap, the lightest cells denote perfect correlation, the red shades represent moderate correlation between the columns, and the black cells represent negative correlation.
With a value of 0.89, 'MEDV', the median price we wish to predict, is most strongly correlated with the number of rooms 'RM', followed by the residential land proportion 'ZN' with a value of 0.32 and the 'B' column with a value of 0.19.
The metrics most strongly correlated with price will be plotted, as sketched below.
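The heatmap code and the high_corr_var list used later are not shown in the article; the following is a plausible reconstruction, not the author's exact code. It assumes the target y is the MEDV column and that high_corr_var orders the remaining columns by their signed correlation with MEDV (which matches the ordering printed further down: RM first, LSTAT last).

# Target variable: the median home value (assumed; not shown in the original code)
y = new_df['MEDV']

# Correlation matrix and heatmap
corr = new_df.corr()
plt.figure(figsize=(12, 8))
sns.heatmap(corr, annot=True, cmap='Reds')
plt.show()

# Columns ordered by their correlation with MEDV, strongest positive first
high_corr_var = corr['MEDV'].drop('MEDV').sort_values(ascending=False).index.tolist()

# Scatter plots of the features most correlated with price
for col in high_corr_var[:4]:
    px.scatter(new_df, x=col, y='MEDV', title=f'{col} vs MEDV').show()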
Feature scaling aids gradient descent by putting all features on the same scale, which makes locating the optimum much easier.
Mean standardisation is one strategy: each value is replaced by (value − mean) / standard deviation, so that the feature ends up with a mean of roughly zero.
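The standard() call below uses a variable X that is never defined in the article's code. A reasonable assumption, matching the 13-column standardised table that follows, is that X holds every feature column except the target MEDV:

# Input features: every column except the target MEDV (assumed; not shown in the original)
X = new_df.drop(columns='MEDV')

Note that the single coefficient printed when the model is trained further down suggests X may at some point have been narrowed to just the 'RM' column.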
def standard(X):
    '''Standardise the feature(s) in X: zero mean and unit standard deviation.'''
    mu = np.mean(X)          # mean
    std = np.std(X)          # standard deviation
    sta = (X - mu) / std     # standardised values
    return mu, std, sta

mu, std, sta = standard(X)
X = sta
X
 | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | -0.609129 | 0.092792 | -1.019125 | -0.280976 | 0.258670 | 0.279135 | 0.162095 | -0.167660 | -2.105767 | -0.235130 | -1.136863 | 0.401318 | -0.933659 |
1 | -0.575698 | -0.598153 | -0.225291 | -0.280976 | -0.423795 | 0.049252 | 0.648266 | 0.250975 | -1.496334 | -1.032339 | -0.004175 | 0.401318 | -0.219350 |
2 | -0.575730 | -0.598153 | -0.225291 | -0.280976 | -0.423795 | 1.189708 | 0.016599 | 0.250975 | -1.496334 | -1.032339 | -0.004175 | 0.298315 | -1.096782 |
3 | -0.567639 | -0.598153 | -1.040806 | -0.280976 | -0.532594 | 0.910565 | -0.526350 | 0.773661 | -0.886900 | -1.327601 | 0.403593 | 0.343869 | -1.283945 |
4 | -0.509220 | -0.598153 | -1.040806 | -0.280976 | -0.532594 | 1.132984 | -0.228261 | 0.773661 | -0.886900 | -1.327601 | 0.403593 | 0.401318 | -0.873561 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
501 | -0.519445 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.306004 | 0.300494 | -0.936773 | -2.105767 | -0.574682 | 1.445666 | 0.277056 | -0.128344 |
502 | -0.547094 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | -0.400063 | 0.570195 | -1.027984 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.229652 |
503 | -0.522423 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.877725 | 1.077657 | -1.085260 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.820331 |
504 | -0.444652 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.606046 | 1.017329 | -0.979587 | -2.105767 | -0.574682 | 1.445666 | 0.314006 | -0.676095 |
505 | -0.543685 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | -0.534410 | 0.715691 | -0.924173 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.435703 |
For the sake of the project, we'll apply linear regression.
Typically, we run numerous models and select the best one based on a particular criterion.
In machine-learning terms, linear regression is a supervised learning model in which the response is continuous.
Form of Linear Regression
y = θ1 + θ2·X1 (single feature), or y = θ1 + θ2·X1 + θ3·X2 + θ4·X3 (multiple features)
y is the target you will be predicting
θ (theta) values are the coefficients, with θ1 as the intercept
X values are the inputs
We will use scikit-learn (sklearn) to develop and train the model.
# Import the libraries to train the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
The train/test approach lets us fit the model on one part of the data and evaluate its predictions on another.
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

# Create and train the model
model = LinearRegression().fit(X_train, y_train)

# Generate predictions on the test set
predictions_test = model.predict(X_test)

# Inspect the fitted coefficient and intercept
coefficient = model.coef_
intercept = model.intercept_
print(coefficient, intercept)

[7.22218258] 24.66379606613584
In this example, the learned model corresponds to the hypothesis below (the exact numbers vary slightly from run to run, as discussed further down):
Price = 24.85 + 7.18 * Rooms
It is interpreted as follows: for a given house, an increase in the number of rooms is associated with an increase of about 7.18 units in the predicted price.
As a side note, this is an association, not causation!
We need a metric to determine how good our hypothesis is; the RMSE approach will be used.
Root Mean Square Error (RMSE) is defined as the square root of the mean of the squared errors, where the error is the difference between the true and predicted values: RMSE = sqrt(mean((predicted − actual)²)). It is popular because it is expressed in the units of y, which in our scenario is the median price of a home.
def rmse(predict, actual):
    '''Root mean square error between predictions and actual values.'''
    return np.sqrt(np.mean(np.square(predict - actual)))

# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

# Create and train the model
model = LinearRegression().fit(X_train, y_train)

# Generate predictions on the test set
predictions_test = model.predict(X_test)

# Evaluate the model
coefficient = model.coef_
intercept = model.intercept_
print(coefficient, intercept)
loss = rmse(predictions_test, y_test)
print('loss: ', loss)
print(model.score(X_test, y_test))   # R² score

[7.43327725] 24.912055881970886
loss:  3.9673165450580714
0.7552661033654667

The loss is about 3.96.
Since the y-units are the median value of owner-occupied homes in thousands of dollars, a loss of 3.96 means our predictions are off by roughly 3,960 dollars.
When fitting the model, you will notice high variance across different splits of the data: the coefficient and intercept change each time. This is because the train/test approach places rows at random into either the train or the test set, so the fitted hypothesis changes each time the dataset is divided.
This problem can be solved using a technique called cross-validation.
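The article does not show cross-validation code; a minimal sketch using scikit-learn's built-in helper (an illustration, not the author's implementation):

from sklearn.model_selection import cross_val_score

# 5-fold cross-validation: every row is used for both training and validation,
# so the reported score no longer depends on a single random split
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2')
print(scores.mean(), scores.std())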
With 'forward selection', we'll iterate through the features one at a time to help us choose how many characteristics to include in our model.
We'll use a random state of 1 so that each iteration yields the same outcome.
cols = []
los = []
los_train = []
scor = []
i = 0
while i < len(high_corr_var):
    cols.append(high_corr_var[i])

    # Select the input variables chosen so far
    X = new_df[cols]

    # Mean normalisation / standardisation
    mu, std, sta = standard(X)
    X = sta

    # Split the data into training and testing sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Fit the model to the training set
    lnreg = LinearRegression().fit(X_train, y_train)

    # Make predictions on the training set
    prediction_train = lnreg.predict(X_train)

    # Make predictions on the testing set
    prediction = lnreg.predict(X_test)

    # Compute the loss on the test and training sets
    loss = rmse(prediction, y_test)
    loss_train = rmse(prediction_train, y_train)
    los_train.append(loss_train)
    los.append(loss)

    # Compute the R² score on the test set
    score = lnreg.score(X_test, y_test)
    scor.append(score)

    i += 1
With a smaller collection of variables we have a bigger 'loss', but the model over-generalises; with a large number of variables the 'loss' is smaller, yet if the model grows too precise it may not generalise well to new data.
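One way to see this trade-off is to plot the train and test losses collected in the loop against the number of features; a possible sketch (using the los and los_train lists built above):

# Visualise how the train and test RMSE evolve as features are added one by one
n_features = range(1, len(high_corr_var) + 1)
plt.plot(n_features, los_train, label='train RMSE')
plt.plot(n_features, los, label='test RMSE')
plt.xlabel('Number of features (in order of price correlation)')
plt.ylabel('RMSE')
plt.legend()
plt.show()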
For our model to generalise well to another set of data, we might use 6 or 7 features. The features are ordered in descending strength of their correlation with price.
high_corr_var

['RM', 'ZN', 'B', 'CHAS', 'RAD', 'DIS', 'CRIM', 'NOX', 'AGE', 'TAX', 'INDUS', 'PTRATIO', 'LSTAT']
'RM' has a strong positive correlation with price, while 'LSTAT' has a negative correlation with price.
# Create a list of feature names
feature_cols = ['RM', 'ZN', 'B', 'CHAS', 'RAD', 'CRIM', 'DIS', 'NOX']

# Select the input variables
X = new_df[feature_cols]

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Feature engineering (note: X is standardised after the split,
# so the already-created train/test sets remain unscaled)
mu, std, sta = standard(X)
X = sta

# Fit the model to the training data
lnreg = LinearRegression().fit(X_train, y_train)

# Make predictions on the testing set
prediction = lnreg.predict(X_test)

# Compute the loss
loss = rmse(prediction, y_test)
print('loss: ', loss)
lnreg.score(X_test, y_test)

loss:  3.212659865936143
0.8582338376696363
The test set yielded a loss of 3.21 and an R² score of roughly 0.86.
Other factors could still be tweaked to improve the model, such as alpha (a learning rate or regularisation strength, depending on the estimator used), or we could return to the preprocessing section and work further on the feature distributions.
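Note that scikit-learn's plain LinearRegression exposes neither a learning rate nor an alpha. If you did want to experiment with an alpha hyperparameter, one purely illustrative option (not part of the original article) is a regularised model such as Ridge, where alpha is the regularisation strength:

from sklearn.linear_model import Ridge

# Try a few regularisation strengths and compare test-set RMSE and R²
for alpha in (0.1, 1.0, 10.0):
    ridge = Ridge(alpha=alpha).fit(X_train, y_train)
    print(alpha, rmse(ridge.predict(X_test), y_test), ridge.score(X_test, y_test))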
For more details regarding scraping real estate data you can contact Scraping Intelligence today
https://www.websitescraper.com/how-to-predict-housing-prices-with-linear-regression.php
1602964260
Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
(State of Kubernetes and Container Security, 2020)
#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml
1598776206
The beginning of the medieval period is ordinarily taken to be the gradual breakdown of the Gupta Empire from around 480 to 550, ending the "classical" period as well as "ancient India", although both terms may be used for periods with widely varying dates, particularly in specific fields such as the history of art or religion. At least in northern India, there was no larger state until perhaps the Delhi Sultanate, or certainly the Mughal Empire. In 1398, Timur attacked India and plundered Indian wealth; while returning, he named Khizr Khan as the governor of Delhi. By 1413, the Tughlaq dynasty had completely declined, and the neighbouring governor captured Delhi, which led to the start of the Sayyid Dynasty.
#indian history tutorials #history mcqs #indian history mcqs #indian history quiz