Designing for Appropriate Interaction


Originally published by Sarah Mautsch.

What if we designed machines to be more considerate social actors?

Our devices are socially awkward by design. Still, little has had a bigger effect on society over the last decade than smartphones and the second-order effects they have caused. Over the next decade, as technology becomes even more intimate and, through advances in sensors and artificial intelligence, more situationally aware, we as designers and technologists have a unique opportunity to shape technology’s role as a social actor with intent.

Dieter Rams is often quoted as saying that he strives to design products to behave like ‘a good English butler’. This is a good metaphor for Rams’ fifth principle, that ‘good design is unobtrusive’. In the world of physical products, this was clearly distinguishable from his second principle, that ‘good design makes a product useful’. But as products become more connected and, at the same time, more ingrained in our lives, they can easily be rendered unusable if they are not fundamentally designed for unobtrusiveness.

One can already feel this in today’s smartphone usage and in how overwhelmed people are by the number of notifications they receive. Now just imagine a set of augmented reality goggles that do not know when it is appropriate to surface information and when to clear the field of view.

In a bigger sense, the idea of a product being an ‘English butler’, ‘there for you when you need it, but in the background at all times’, has in the physical world typically been translated into aesthetic qualities. Digital products, though, were for a long time unable to deliver on this expectation. Limited sensory capabilities, and even more importantly shortcomings in battery life and computational capacity, led to a ‘pull’-based model of interaction design that mostly required the user to manually ask for an action to be taken. With the inception of the smartphone and the emergence of ubiquitous connectivity, push notifications became an ingrained part of our daily lives, and with them the constant struggle to keep up with this new ‘push’-based approach to interaction. The personified digital butler (software at large), which for a long time could never anticipate our needs and always had to be asked to perform an action, has recently pivoted to constantly demanding our attention and imposing every trivial distraction upon us, serving not only its employer (‘us’ — the user) but also the interests of any company whose app somehow found its way onto our home screens.

In the paper ‘Magic Ink’ from 2006, the year before the launch of the iPhone kicked off the smartphone revolution, Bret Victor already concluded that when it comes to human interface design for information software, ‘interaction’ is to be ‘considered harmful’ — or, phrased in a more ‘Ramsian’ way, that software designers should strive for ‘as little interaction as possible’.

Victor proposes a strategy to reduce the need for interaction with a system: ‘the design of information software should be approached initially and primarily as a graphic design project’, treating any interface-design project as an exercise in graphic design first and using manipulation only as a last resort. While this is an effective approach for ‘pull’-based information software, it is not sufficient for communication software (which Victor describes as ‘manipulation software and information software glued together’), which today forces information upon us — mostly without much of a graphic interface at all¹.

Another strategy is to infer context through environment, history and user interaction. Victor writes that software can

1. infer the context in which its data is needed,
2. winnow the data to exclude the irrelevant, and
3. generate a graphic which directly addresses the present needs.

Twelve years later, as technology — and especially the integration of hardware and software — has progressed significantly, its ability to infer context and winnow data has become much greater. It can therefore be used to generate ‘graphics’ (or any other output) that address the present user needs even better, and, more importantly, at a time, in a place and in a manner that is appropriate for the user and their environment.

But what could appropriateness mean for (inter-)action? Merriam-Webster defines appropriateness as ‘the quality or state of being especially suitable or fitting’. Here, I am going to deem any action or interaction appropriate if it cannot be avoided through automation or anticipation and fits a given society’s model of ethics². What that means varies vastly between societies, but it also depends on one’s personal relationship with a given group of people — be it a whole culture or a group of friends.

With the widespread adoption of mobile phones and the advent of ringtones, movie theatres came up with a contract for what counts as appropriate usage of this technology in the environment they provide. To this day, every movie starts with an announcement asking people to put their phones into ‘silent mode’. Similar measures have been taken on public transportation. The emergence of the smartphone even led to a cottage industry of devices that make it harder for people to use their smartphones and be distracted during shows.

Modelling appropriateness computationally, though, is no easy task. Attempts at this problem in our current devices are all very tactical, and the whole area of human-computer etiquette is still neglected in most software today — sometimes it is only considered because of legal requirements.

But even more general approaches to modelling appropriateness, like the ‘Do not Disturb’ feature in Apple iOS, require the user to be very deliberate in their setup, and lack smartness and fluidity, since they are all modelled after the design-blueprint of their ancestor, ‘airplane mode’.

The exploration of machines as social actors in interaction design and human-computer interaction research is not new. Advances in sensors and computation, combined with more capable machine learning, naturally lead towards a movement to make computers more situationally and socially aware.

This piece is based on a lot of the great work being done in the field of affective computing at large, and in human-computer etiquette and CASA (computers as social actors) more specifically.

I started my design process by exploring the future of work. The office has always been fertile ground for interaction design. In the ‘mother of all demos’, Douglas Engelbart starts by explaining what the audience is about to see through a scenario that takes place in the office of a knowledge (or, back then, “intellectual”) worker:

“… if in your office, you, as an intellectual worker, were supplied with a computer display backed up by a computer that was alive for you all day and was instantly (…) responsive to every action you have, how much value could you derive from that?”

The office has changed a lot since then, to a great extent thanks to Engelbart’s and Licklider’s contributions through SRI. A lot of people do not work in an office at all anymore — and all the technology that eventually turned into tools for thought and bicycles for the mind, and truly helped augment our minds’ capability, productivity and creativity, at the same time ended up minimising our space for thought — the capacity of our minds.

Following a human-centred design process, I started by interviewing people about what today’s knowledge work looks like, and then ideated concepts that go beyond the knowledge worker’s office, be it a cubicle, an open plan or everywhere.

The framework that eventually emerged, which built the foundation — the “API”³ — for its individual applications, builds on a system of sensors and intelligence capable of inferring context and winnowing data. On top of this sits a model of minimal interaction that occurs only when appropriate, based on the user’s environment, state of mind and a model of ethics, which is trained over time on feedback from previous interactions between the system and the user in a given context.

Conceptual ‘API’ graph for appropriate machine behaviour
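As a purely illustrative sketch of that loop (every function, field and threshold below is an assumption made for the sake of the example, not part of the actual framework), the cycle of sensing, inferring context, winnowing and feedback-trained appropriateness could look something like this:

```python
# Hypothetical sketch of the appropriateness "API": sensor readings feed a
# context model, the context winnows candidate items, and an "ethics model"
# (a learned appropriateness score with a feedback loop) decides whether any
# interaction should happen at all. All names here are illustrative.

def infer_context(sensors):
    """Reduce raw sensor readings to a coarse context label."""
    if sensors.get("speech_detected"):
        return "in_conversation"
    if sensors.get("ambient_light", 1.0) < 0.2:
        return "dark_quiet"
    return "neutral"

def winnow(items, context):
    """Exclude items that are irrelevant in the present context."""
    return [i for i in items if context in i["relevant_contexts"]]

class EthicsModel:
    """Appropriateness scores per (context, action), trained on feedback."""

    def __init__(self):
        self.scores = {}  # (context, action) -> score in [0, 1]

    def appropriateness(self, context, action):
        return self.scores.get((context, action), 0.5)  # unknown -> neutral

    def feedback(self, context, action, accepted, rate=0.2):
        """Nudge the score toward 1 if accepted, toward 0 if rejected."""
        target = 1.0 if accepted else 0.0
        s = self.appropriateness(context, action)
        self.scores[(context, action)] = s + rate * (target - s)

def act(items, sensors, model, threshold=0.5):
    """Surface only the items whose interaction is appropriate right now."""
    context = infer_context(sensors)
    return [i for i in winnow(items, context)
            if model.appropriateness(context, i["action"]) >= threshold]
```

The point of the sketch is the feedback loop: every accepted or rejected interaction nudges the score for that (context, action) pair, so the system’s sense of appropriateness is learned rather than hard-coded.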

Now we will apply this system to scenarios that emerged from the research and currently involve ‘inappropriate’ behaviour, and try to envision the machines involved as more considerate social actors.

Every prototype is accompanied by a broad ‘how might we?’ question that is intended to act as a loose principle, so that other designers and technologists can ask themselves the same questions when they design their product or service — looking at the prototype as one of many possible implementation strategies.

1 → Human Machine Etiquette

How might we design machines that can act appropriately towards their user?

What if a navigation system could be respectful of a conversation?

Imagine driving your car, having an in-depth conversation with your co-driver. Your co-driver listens to you when you are talking and vice versa. It is an engaging discussion, and anyone who understands natural language can tell.

Prototype → a simple iOS app that doesn’t interrupt the user when it hears them speak, but uses short audio signals to make itself heard.

So does the navigation system, which uses speech recognition and natural language processing to inform its ‘ethics model’. It therefore speaks the next direction out loud only when there is a natural break in the conversation. As the information becomes more urgent, it makes use of increasingly intrusive signals; only when the urgency has reached a maximum does the system resort to a signal as intrusive as speaking over the conversation. With the help of eye-tracking and other sensing devices, it could even tell whether you have already recognised the instruction.
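One way to read that escalation is as a tiny decision policy. The thresholds, the idea of a scalar ‘urgency’, and the signal names below are assumptions made for this sketch, not the prototype’s actual implementation:

```python
# Illustrative escalation policy for the navigation prototype: hold the
# instruction during conversation, escalate from silence to a chime to
# speech as urgency (e.g. distance to the turn) increases.

def choose_signal(urgency, conversation_active):
    """Pick the least intrusive signal that still conveys the instruction.

    urgency: 0.0 (turn is still far away) .. 1.0 (turn is imminent)
    """
    if not conversation_active:
        return "speak"   # nobody is talking, so speech interrupts nothing
    if urgency >= 0.9:
        return "speak"   # an imminent turn justifies interrupting the chat
    if urgency >= 0.5:
        return "chime"   # audible nudge without breaking the conversation
    return "wait"        # hold the instruction for a natural break
```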

The same system, using the ‘API’, could even be integrated with apps like Apple Music or Spotify, perfectly timing interruptions for a natural break in the conversation of your favourite podcast, or just after your favourite part of the song you are currently listening to.

2 → Negotiable Environment

How might we design machines that are aware and attentive of their environment?

In general, ‘negotiable environments’ are those in which interactions between a machine and its owner have an impact on the people surrounding them. Often this is relevant in public spaces, for example on the train or in a park, but even more so in a more intimate space shared by the same people, such as an open office, a cinema or an airplane.

What if machines could behave politely towards everyone they affect?

Prototype → A new take on the brightness slider that contains a subtle gradient to visualize how others set brightness around you to nudge you to set it similarly.

There are only a few people left using their devices. As the first of them turn down the brightness of their laptops to get into sleep mode, the average brightness of the devices in the vicinity starts to drop, and the other devices, which have not had their brightness adjusted, automatically adapt accordingly. If their users do not reset it (which would apply negative reinforcement), their respective ‘ethics models’ are trained to regard this behaviour as appropriate and accepted by their user.
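A minimal sketch of that adaptation step, assuming brightness is normalised to [0, 1] and that a ‘conformity’ weight controls how strongly a device follows its neighbours (the function and parameter names are hypothetical):

```python
# Hypothetical sketch of the 'negotiable environment' adaptation: a device
# nudges its own brightness part-way toward the average of nearby devices.
# A user resetting it afterwards would count as negative reinforcement.

def adapted_brightness(own, nearby, conformity=0.5):
    """Move this device's brightness toward the local average.

    conformity: 0.0 (ignore neighbours) .. 1.0 (match the average exactly)
    """
    if not nearby:
        return own  # nobody around, nothing to negotiate
    neighbourhood_avg = sum(nearby) / len(nearby)
    return own + conformity * (neighbourhood_avg - own)

# A bright screen in a cabin full of dimmed ones drifts downward
print(adapted_brightness(1.0, [0.2, 0.2, 0.3]))
```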

3 → Conformity

How might we design machines that adopt the appropriate behaviour in new and unknown environments and situations?

The last story brings up an interesting question: should the settings adapt to the environment automatically or not? When people come to an unfamiliar place, they look at how others behave to figure out ‘how this place works’. Only when they are confident in the environment do they start to make adjustments or feel comfortable behaving outside the rules.

What if machines could understand and adapt dynamically to different customs and new environments?

Prototype → A settings app in which the settings change based on the user’s location or context (represented through proximity to different iBeacons).

Imagine a family in Japan enjoying a meal at a local restaurant. A businessman (it seems to be his first time in Japan) is loudly recording voice messages to send to a colleague as he enters the restaurant. His smartphone recognises the new environment, looks at the other phones (which are all switched to ‘Do not Disturb’) and copies their setting, since he has set his level of ‘conformity’ to ‘high’ for environments he is unfamiliar with. He will receive his colleague’s reply once he leaves the restaurant, and can therefore fully enjoy his meal.

4 → Augmentation

How might we leverage machines to become better social actors when we communicate through digital means?

Now that we’ve looked at multiple examples of how the system could make machines better social actors, we might ask ourselves how it could also help us become better social actors — especially in digital communication, where even humans often lack the usual social cues we use to figure out whether an interruption will be welcome or disruptive.

What if we always knew if it’s okay to ask our colleague a ‘quick question’?

Prototype → An iOS app that uses Face ID eye-tracking and networks with another iPhone running the same app, detecting whether each person is paying attention (or not) and signalling this between them.

Imagine being in an open-plan office. It is 10am, usually the busiest hour of the day. Mark has a pressing question, but his colleague Laura seems busy. Mark knows that Laura has a deadline to meet sometime today. He is not sure whether to ask her right away, or to wait until lunch or even until late afternoon just to be safe.

Mark uses the ‘My Colleagues’ app, which Laura uses to signal whether she is deeply concentrated or happy for a little distraction. The app uses eye-tracking and other parameters to tell the difference. Laura indeed seems to be concentrating, so Mark requests a notification for when she is no longer concentrated and happy to chat. When the system detects that Laura might need a little break, it asks her if she wants to chat to Mark. Laura agrees that she would appreciate a break and accepts the invite. Mark gets a notification and they have a little chat.

A significant element of this system is its ability to learn from scratch. Every system equipped with this framework could start out with a very basic ‘ethics model’ — a ‘tabula rasa’. As it learns from its environment, and primarily from its user, it will develop a unique perspective on what appropriate behaviour is, based on its experience — the feedback it got on actions it carried out in the real world. Therefore a system that was ‘socialised’ by, say, a fashion blogger in France will have a slightly different idea of what it deems appropriate from a system that was trained through usage by an elderly woman in Japan who just got her first smartphone. Only once those devices are in the same vicinity will they start to influence each other, and if the blogger ever travels to Japan they might even raise their device’s ‘conformity setting’ out of respect towards the culture that is unfamiliar to them, both person-to-person and device-to-device.

This ‘computational take on situational ethics’ is very different from the more ‘biblical law’-type approaches technology companies take today when deciding what is right and wrong. A lot of internet usage today is controlled by Facebook, which takes a stereotypically Silicon Valley approach to what content is and is not allowed on the platform. Engineers, designers and product managers in Silicon Valley define what is moral for everyone else using their product. As Facebook (or its subsidiaries) keeps growing into new markets and beyond its 2 billion users — or as networks from other cultures gain a dominant position in the world of technology — the tensions that are already building up today are only going to grow, and the development of a more scalable, dynamic and inclusive solution has to start.

I would love to hear your thoughts on how we might make machines more considerate social actors. Let me know here or on Twitter @sarahmautsch.


[1] Just think of notifications (the design of which is mostly prescribed by the creator of the operating system) or voice UIs like Siri or Alexa.

[2] In this piece the word ethics is used in its narrow meaning, based on its literal original meaning, from the Greek ethos /έθος/, defined in the Oxford Dictionary as ‘the characteristic spirit of a culture, era, or community as manifested in its attitudes and aspirations’. Evaluating something ethically while taking its context into account is called situational ethics. A biblical-law-type approach, on the contrary, judges based on absolute moral doctrines. As this project is concerned with making machines behave appropriately based on what they learn from (inter-)actions within their environment, this piece fosters the idea of situational ethics.

[3] API stands for Application Programming Interface: ‘a set of rules that allows programmers to develop software for a particular operating system without having to be completely familiar with that operating system’. Here we will consider it as a concept of the system (with all of its inputs, outputs and core functions) on the basis of which we can start to design.


Learn Data Science | How to Learn Data Science for Free


In this post, I have described a learning path and free online courses and tutorials that will enable you to learn data science for free.

The average cost of obtaining a master’s degree at a traditional brick-and-mortar institution will set you back anywhere between $30,000 and $120,000. Even online data science degree programs don’t come cheap, costing a minimum of $9,000. So what do you do if you want to learn data science but can’t afford to pay this?

I trained into a career as a data scientist without taking any formal education in the subject. In this article, I am going to share with you my own personal curriculum for learning data science if you can’t or don’t want to pay thousands of dollars for more formal study.

The curriculum will consist of three main parts: technical skills, theory and practical experience. I will include links to free resources for every element of the learning path, and will also include some links to additional ‘low cost’ options, with estimated costs for each. So if you want to spend a little money to accelerate your learning, you can add these resources to the curriculum.

Technical skills

The first part of the curriculum will focus on technical skills. I recommend learning these first so that you can take a practical-first approach, rather than, say, learning the mathematical theory first. Python is by far the most widely used programming language for data science. In the Kaggle Machine Learning and Data Science survey carried out in 2018, 83% of respondents said that they used Python on a daily basis. I would, therefore, recommend focusing on this language, but also spending a little time on other languages such as R.

Python Fundamentals

Before you can start to use Python for data science you need a basic grasp of the fundamentals of the language, so you will want to take a Python introductory course. There are lots of free ones out there, but I like the Codecademy ones best as they include hands-on, in-browser coding throughout.

I would suggest taking the introductory course to learn Python. This covers basic syntax, functions, control flow, loops, modules and classes.

Data analysis with Python

Next, you will want to get a good understanding of using Python for data analysis. There are a number of good resources for this.

To start with, I suggest taking at least the free parts of the data analyst learning path on Dataquest. Dataquest offers complete learning paths for data analyst, data scientist and data engineer. Quite a lot of the content, particularly on the data analyst path, is available for free. If you do have some money to put towards learning, then I strongly suggest putting it towards a few months of the premium subscription. I took this course and it provided a fantastic grounding in the fundamentals of data science; it took me six months to complete the data scientist path. The price varies from $24.50 to $49 per month depending on whether you pay annually, so it is better value to purchase the annual subscription if you can afford it.

The Dataquest platform

Python for machine learning

If you have chosen to pay for the full data science course on Dataquest, then you will already have a good grasp of the fundamentals of machine learning with Python. If not, there are plenty of other free resources. To start with, I would focus on scikit-learn, which is by far the most commonly used Python library for machine learning.

When I was learning, I was lucky enough to attend a two-day workshop run by Andreas Mueller, one of the core developers of scikit-learn. He has, however, published all the material from this course, and others, in this GitHub repo. The material consists of slides, course notes and notebooks that you can work through, and I would definitely recommend doing so.

Then I would suggest taking some of the tutorials in the scikit-learn documentation. After that, I would suggest building some practical machine learning applications and learning the theory behind how the models work — which I will cover a bit later on.


SQL

SQL is a vital skill to learn if you want to become a data scientist, as one of the fundamental steps in data modelling is extracting the data in the first place. This will more often than not involve running SQL queries against a database. Again, if you haven’t opted to take the full Dataquest course, here are a few free resources to learn this skill.

Codecademy has a free introduction to SQL course. Again, this is very practical, with in-browser coding all the way through. If you also want to learn about cloud-based database querying, then Google Cloud BigQuery is very accessible. There is a free tier so you can try queries for free, an extensive range of public datasets to try, and very good documentation.

Codecademy SQL course
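If you want to try a query right now, Python ships with SQLite in the standard library, so your first GROUP BY is only a few lines away (the table and data below are made up for illustration):

```python
# A minimal, self-contained taste of the kind of query a data scientist runs
# daily, using Python's built-in sqlite3 module so there is nothing to install.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 30.0), ("bob", 12.5), ("alice", 20.0)])

# Total spend per customer, largest first
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('alice', 50.0), ('bob', 12.5)]
```

The same SELECT / GROUP BY / ORDER BY pattern carries over directly to BigQuery and any other SQL database.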


R

To be a well-rounded data scientist it is a good idea to diversify a little beyond Python. I would, therefore, suggest also taking an introductory course in R. Codecademy has an introductory course on its free plan. It is probably worth noting here that, similar to Dataquest, Codecademy also offers a complete data science learning plan as part of its pro account (this costs from $15.99 to $31.99 per month depending on how many months you pay for up front). I personally found the Dataquest course to be much more comprehensive, but this may work out a little cheaper if you are looking to follow a learning path on a single platform.

Software engineering

It is a good idea to get a grasp of software engineering skills and best practices. This will help your code to be more readable and extensible both for yourself and others. Additionally, when you start to put models into production you will need to be able to write good quality well-tested code and work with tools like version control.

There are two great free resources for this. Python Like You Mean It covers things like the PEP8 style guide and documentation, and also covers object-oriented programming really well.

The scikit-learn contribution guidelines, although written to facilitate contributions to the library, actually cover best practices really well. They cover topics such as GitHub, unit testing and debugging, all written in the context of a data science application.

Deep learning

For a comprehensive introduction to deep learning, I don’t think that you can get any better than the totally free and totally ad-free fast.ai. Their courses include an introduction to machine learning, practical deep learning, computational linear algebra and a code-first introduction to natural language processing. All of their courses take a practical-first approach and I highly recommend them.

The fast.ai platform


Theory

Whilst you are learning the technical elements of the curriculum, you will encounter some of the theory behind the code you are implementing. I recommend that you learn the theoretical elements alongside the practical ones. The way I do this is to first learn the code needed to implement a technique (let’s take k-means as an example); once I have something working, I then look deeper into concepts such as inertia. Again, the scikit-learn documentation contains all the mathematical concepts behind the algorithms.
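Inertia is a small enough concept to compute by hand: it is the sum of squared distances from each point to its nearest cluster centre. A plain-Python sketch, no scikit-learn required:

```python
# Inertia: the sum of squared distances from each point to its nearest
# cluster centre (what scikit-learn's KMeans reports as `inertia_`).

def inertia(points, centres):
    total = 0.0
    for p in points:
        # squared Euclidean distance to the closest centre
        total += min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres)
    return total

points = [(0, 0), (1, 0), (10, 0)]
centres = [(0, 0), (10, 0)]
print(inertia(points, centres))  # 1.0: only (1, 0) is off its centre
```

Working through a definition like this makes the library’s reported numbers much less mysterious.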

In this section, I will introduce the key foundational elements of theory that you should learn alongside the more practical elements.

Khan Academy covers almost all of the concepts listed below for free. You can tailor the subjects you would like to study when you sign up, giving you a nicely tailored curriculum for this part of the learning path. Checking all of the relevant boxes will give you an overview of most of the elements listed below.



Calculus

Calculus is defined by Wikipedia as “the mathematical study of continuous change.” In other words, calculus can find patterns between functions; for example, in the case of derivatives, it can help you to understand how a function changes over time.

Many machine learning algorithms utilise calculus to optimise the performance of models. If you have studied even a little machine learning you will probably have heard of gradient descent, which works by iteratively adjusting the parameter values of a model to find the values that minimise the cost function. Gradient descent is a good example of how calculus is used in machine learning.
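As a concrete illustration, here is gradient descent minimising the toy cost function J(w) = (w - 3)^2, whose derivative is 2(w - 3); each step moves w against the gradient, so w converges to the minimiser w = 3:

```python
# Gradient descent on a one-parameter toy problem. In a real model, w would
# be a vector of parameters and grad would come from the cost function of
# the model being trained.

def gradient_descent(grad, w=0.0, lr=0.1, steps=100):
    """Repeatedly step against the gradient with learning rate lr."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# J(w) = (w - 3)^2  =>  J'(w) = 2 * (w - 3)
w_opt = gradient_descent(lambda w: 2 * (w - 3))
print(round(w_opt, 4))  # 3.0
```

The learning rate matters: too small and convergence is slow, too large and the updates overshoot the minimum.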

What you need to know:


Derivatives

  • Geometric definition
  • Calculating the derivative of a function
  • Nonlinear functions

Chain rule

  • Composite functions
  • Composite function derivatives
  • Multiple functions


  • Partial derivatives
  • Directional derivatives
  • Integrals

Linear Algebra

Many popular machine learning methods, including XGBoost, use matrices to store inputs and process data. Matrices, alongside vector spaces and linear equations, form the mathematical branch known as linear algebra. In order to understand how many machine learning methods work, it is essential to get a good understanding of this field.

What you need to learn:

Vectors and spaces

  • Vectors
  • Linear combinations
  • Linear dependence and independence
  • Vector dot and cross products

Matrix transformations

  • Functions and linear transformations
  • Matrix multiplication
  • Inverse functions
  • Transpose of a matrix
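To make the matrix-transformation items above concrete, here is matrix multiplication from first principles (entry (i, j) of A·B is the dot product of row i of A with column j of B), applied as a linear transformation, with a 90-degree rotation chosen for illustration:

```python
# Matrix multiplication from first principles.

def matmul(A, B):
    """Multiply matrices given as lists of rows; zip(*B) yields B's columns."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A 90-degree rotation matrix applied to the vector (1, 0), written as a
# column matrix: it maps the x-axis unit vector onto the y-axis.
R = [[0, -1],
     [1,  0]]
print(matmul(R, [[1], [0]]))  # [[0], [1]]
```

Libraries like NumPy do exactly this (much faster), but writing it once makes the “linear transformation” view of matrices stick.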


Statistics

Here is a list of the key concepts you need to know:

Descriptive/Summary statistics

  • How to summarise a sample of data
  • Different types of distributions
  • Skewness, kurtosis, central tendency (e.g. mean, median, mode)
  • Measures of dependence, and relationships between variables such as correlation and covariance
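Most of the summary statistics above can be explored directly with Python’s built-in statistics module (the data set here is arbitrary):

```python
# Exploring the summary statistics listed above with the standard library.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(data))    # 5 (central tendency)
print(statistics.median(data))  # 4.5
print(statistics.mode(data))    # 4
print(statistics.pstdev(data))  # 2.0 (population standard deviation)

def correlation(xs, ys):
    """Pearson correlation: how strongly two variables move together."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear, so ~1.0
```

Recomputing these by hand for a small sample is a quick way to check you really understand each definition before relying on a library.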

Experiment design

  • Hypothesis testing
  • Sampling
  • Significance tests
  • Randomness
  • Probability
  • Confidence intervals and two-sample inference

Machine learning

  • Inference about slope
  • Linear and non-linear regression
  • Classification

Practical experience

The third section of the curriculum is all about practice. To truly master the concepts above, you will need to use these skills in projects that ideally closely resemble real-world applications. By doing this you will encounter problems to work through, such as missing and erroneous data, and develop a deep level of expertise in the subject. In this last section, I will list some good places where you can get this practical experience for free.

“With deliberate practice, however, the goal is not just to reach your potential but to build it, to make things possible that were not possible before. This requires challenging homeostasis — getting out of your comfort zone — and forcing your brain or your body to adapt.”, Anders Ericsson, Peak: Secrets from the New Science of Expertise

Kaggle, et al

Machine learning competitions are a good place to practise building machine learning models. They give access to a wide range of data sets, each with a specific problem to solve, and each has a leaderboard. The leaderboard is a good way to benchmark how good you actually are at developing models, and where you may need to improve further.

In addition to Kaggle, there are other platforms for machine learning competitions including Analytics Vidhya and DrivenData.

Driven data competitions page

UCI Machine Learning Repository

The UCI machine learning repository is a large source of publicly available data sets. You can use these data sets to put together your own data projects; this could include data analysis and machine learning models, and you could even try building a deployed model with a web front end. It is a good idea to store your projects somewhere public, such as GitHub, as this creates a portfolio showcasing your skills for future job applications.

UCI repository

Contributions to open source

One other option to consider is contributing to open source projects. There are many Python libraries that rely on the community to maintain them, and there are often hackathons held at meetups and conferences where even beginners can join in. Attending one of these events would certainly give you some practical experience and an environment where you can learn from others whilst giving something back at the same time. NumFOCUS is a good example of an organisation supporting projects like these.

In this post, I have described a learning path and free online courses and tutorials that will enable you to learn data science for free. Showcasing what you are able to do in the form of a portfolio is a great tool for future job applications in lieu of formal qualifications and certificates. I really believe that education should be accessible to everyone and, certainly, for data science at least, the internet provides that opportunity. In addition to the resources listed here, I have previously published a recommended reading list for learning data science available here. These are also all freely available online and are a great way to complement the more practical resources covered above.

Thanks for reading!

Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data

Downloadable PDF of Best AI Cheat Sheets in Super High Definition

Let’s begin.

Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Data Science in HD

Part 1: Neural Networks Cheat Sheets

Neural Networks Cheat Sheets

Neural Networks Basics

Neural Networks Basics Cheat Sheet

An Artificial Neural Network (ANN), popularly known as a neural network, is a computational model based on the structure and functions of biological neural networks. In computer science terms, it is like an artificial human nervous system for receiving, processing, and transmitting information.

Basically, there are 3 different layers in a neural network:

  1. Input Layer (all the inputs are fed into the model through this layer)
  2. Hidden Layers (there can be more than one hidden layer; they process the inputs received from the input layer)
  3. Output Layer (the data, after processing, is made available at the output layer)
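The three layers above can be sketched as a tiny forward pass in plain NumPy. The layer sizes, random weights, and ReLU activation here are arbitrary choices for illustration, not taken from any particular cheat sheet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary layer sizes for illustration: 4 inputs, 8 hidden units, 3 outputs.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer weights
W2 = rng.normal(size=(8, 3))   # hidden layer -> output layer weights

def forward(x):
    """One forward pass: input layer -> hidden layer (ReLU) -> output layer."""
    hidden = np.maximum(0, x @ W1)   # hidden layers process the inputs
    return hidden @ W2               # output layer exposes the result

batch = rng.normal(size=(5, 4))      # 5 samples fed in through the input layer
print(forward(batch).shape)          # (5, 3)
```

A real network would also learn the weights via backpropagation; this sketch only shows how data flows through the three kinds of layers.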

Neural Networks Graphs

Neural Networks Graphs Cheat Sheet

Graph data appears in many learning tasks where elements carry rich relational structure. For example, modeling physical systems, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. Graph reasoning models can also learn from non-structural data such as text and images by extracting structures and reasoning over them.
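For intuition, one round of neighborhood aggregation, the core operation in many graph learning models, can be sketched in a few lines of NumPy. The 4-node graph and one-hot features below are made up for illustration:

```python
import numpy as np

# A tiny 4-node graph given as an adjacency matrix (illustrative only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                      # add self-loops
deg = A_hat.sum(axis=1, keepdims=True)     # per-node degree (with self-loop)
H = np.eye(4)                              # one-hot node features

# One aggregation round: each node averages its neighbors' (and its own) features.
H_next = (A_hat / deg) @ H
print(H_next.round(2))
```

Stacking several such rounds, each followed by a learned transformation and nonlinearity, is the basic recipe behind graph neural networks.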

Part 2: Machine Learning Cheat Sheets

Machine Learning Cheat Sheets

>>> If you like these cheat sheets, you can let me know here.<<<

Machine Learning with Emojis

Machine Learning with Emojis Cheat Sheet

Machine Learning: Scikit Learn Cheat Sheet

Scikit Learn Cheat Sheet

Scikit-learn is a free machine learning library for the Python programming language. It features various classification, regression, and clustering algorithms, including support vector machines, and offers simple and efficient tools for data mining and data analysis. It is built on NumPy, SciPy, and matplotlib, and is open source and commercially usable under the BSD license.

Scikit-learn Algorithm Cheat Sheet

Scikit-learn algorithm

This machine learning cheat sheet helps you find the right estimator for the job, which is often the most difficult part. The flowchart points you to the documentation and gives a rough guide to each estimator, so you can learn more about each problem and how to solve it.
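As one concrete path through that flowchart: a labelled dataset with fewer than 100k samples and a category to predict points toward a linear SVC. A minimal sketch with scikit-learn's built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Labelled data, < 100k samples, predicting a category -> try LinearSVC.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearSVC(max_iter=10000)   # raise max_iter so the solver converges
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

If this estimator underperforms, the flowchart's next suggestions (e.g. a kernel SVC or an ensemble) follow the same `fit`/`score` pattern, which is what makes the library easy to explore.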

Machine Learning: Scikit-Learn Algorithm for Azure Machine Learning Studio

Scikit-Learn Algorithm for Azure Machine Learning Studio Cheat Sheet

Part 3: Data Science with Python

Data Science with Python Cheat Sheets

Data Science: TensorFlow Cheat Sheet

TensorFlow Cheat Sheet

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.

Data Science: Python Basics Cheat Sheet

Python Basics Cheat Sheet

Python is one of the most popular data science tools due to its low and gradual learning curve and the fact that it is a fully fledged programming language.

Data Science: PySpark RDD Basics Cheat Sheet

PySpark RDD Basics Cheat Sheet

“At a high level, every Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster. The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system), or an existing Scala collection in the driver program, and transforming it. Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures.” via spark.apache.org
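To build intuition for the transformation/action chain described above without a Spark cluster, the same pattern can be mimicked with plain Python built-ins. Generators are lazy, so, like Spark transformations, nothing is computed until the final action runs; this is only a single-machine analogy, not how Spark actually executes:

```python
from functools import reduce

# Plain-Python stand-in for an RDD pipeline (illustrative analogy only;
# a real RDD is partitioned across the nodes of a cluster).
lines = ["spark makes big data simple", "rdds are partitioned", "spark is fast"]

words = (w for line in lines for w in line.split())       # ~ flatMap (lazy)
spark_words = (w for w in words if w == "spark")          # ~ filter  (lazy)
count = reduce(lambda acc, _: acc + 1, spark_words, 0)    # ~ the action
print(count)  # 2
```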

Data Science: NumPy Basics Cheat Sheet

NumPy Basics Cheat Sheet

NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.
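A short sketch of the two ideas NumPy cheat sheets usually lead with, multi-dimensional arrays and broadcasting (the values here are arbitrary):

```python
import numpy as np

# A 2-D array plus broadcasting: the 1-D row is stretched across each row of m.
m = np.arange(6).reshape(2, 3)       # [[0, 1, 2], [3, 4, 5]]
row = np.array([10, 20, 30])

print(m + row)                       # [[10 21 32] [13 24 35]]
print(m.mean(axis=0))                # column means: [1.5 2.5 3.5]
```

The `axis` argument is the pattern behind most of the high-level mathematical functions the description mentions: the same reduction applied along rows, columns, or any other dimension.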

Data Science: Bokeh Cheat Sheet

Bokeh Cheat Sheet

“Bokeh is an interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of versatile graphics, and to extend this capability with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easily create interactive plots, dashboards, and data applications.” from the Bokeh documentation

Data Science: Keras Cheat Sheet

Keras Cheat Sheet

Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible.

Data Science: Pandas Basics Cheat Sheet

Pandas Basics Cheat Sheet

Pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software released under the three-clause BSD license.
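A minimal sketch of the table-style operations Pandas is known for; the column names and values below are made up for illustration:

```python
import pandas as pd

# A small numerical table (illustrative data).
df = pd.DataFrame({
    "city": ["Berlin", "Paris", "Berlin", "Paris"],
    "sales": [10, 20, 30, 40],
})

# Typical pandas operations: filtering rows and aggregating groups.
berlin = df[df["city"] == "Berlin"]           # boolean-mask row selection
totals = df.groupby("city")["sales"].sum()    # split-apply-combine
print(totals)
```

Time series support follows the same pattern, with a `DatetimeIndex` in place of the default integer index.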

Pandas Cheat Sheet: Data Wrangling in Python

Pandas Cheat Sheet: Data Wrangling in Python

Data Wrangling

The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson, is introduced as “Steve Woodward, our data wrangler”.

Data Science: Data Wrangling with Pandas Cheat Sheet

Data Wrangling with Pandas Cheat Sheet

“Why use tidyr & dplyr?

  • Although many fundamental data processing functions exist in R, they have been a bit convoluted to date and have lacked consistent coding and the ability to easily flow together → leads to difficult-to-read nested functions and/or choppy code.
  • RStudio is driving a lot of new packages to collate data management tasks and better integrate them with other analysis activities → led by Hadley Wickham and the RStudio team (Garrett Grolemund, Winston Chang, Yihui Xie, among others).
  • As a result, a lot of data processing tasks are becoming packaged in more cohesive and consistent ways → leads to:
  • More efficient code
  • Easier-to-remember syntax
  • Easier-to-read syntax” via RStudio

Data Science: Data Wrangling with dplyr and tidyr

Data Wrangling with dplyr and tidyr Cheat Sheet

Data Science: SciPy Linear Algebra

SciPy Linear Algebra Cheat Sheet

SciPy builds on the NumPy array object and is part of the NumPy stack, which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This stack has a similar user base to applications such as MATLAB, GNU Octave, and Scilab, and is also sometimes referred to as the SciPy stack.
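As a small taste of what the linear algebra cheat sheet covers, here is a sketch of solving a linear system with `scipy.linalg` (the matrix and right-hand side are arbitrary illustrative values):

```python
import numpy as np
from scipy import linalg

# Solve the linear system A x = b; scipy.linalg wraps LAPACK routines.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = linalg.solve(A, b)
print(x)                      # [2. 3.]
assert np.allclose(A @ x, b)  # the solution satisfies the system
```

`linalg.solve` is preferred over explicitly computing `linalg.inv(A) @ b`, since it is both faster and numerically more stable.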

Data Science: Matplotlib Cheat Sheet

Matplotlib Cheat Sheet

Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged. SciPy makes use of Matplotlib.

Pyplot is a Matplotlib module that provides a MATLAB-like interface. Matplotlib is designed to be as usable as MATLAB, with the ability to use Python, and has the advantage of being free.
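A minimal sketch of the object-oriented API mentioned above, using the headless Agg backend so no display is required (the file name is an arbitrary choice):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend: render to files, no GUI needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)

fig, ax = plt.subplots()       # object-oriented API: explicit figure and axes
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.legend()
fig.savefig("sine.png")        # writes the plot to a file
```

The procedural pyplot equivalent (`plt.plot(...)`, `plt.xlabel(...)`) produces the same picture via the state machine, which is what makes it feel like MATLAB.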

Data Science: Data Visualization with ggplot2 Cheat Sheet

Data Visualization with ggplot2 Cheat Sheet

Data Science: Big-O Cheat Sheet

Big-O Cheat Sheet


Special thanks to DataCamp, Asimov Institute, RStudio and the open source community for their content contributions. You can see the originals here:

Big-O Algorithm Cheat Sheet:

Bokeh Cheat Sheet:

Data Science Cheat Sheet:

Data Wrangling Cheat Sheet:

Data Wrangling:

Ggplot Cheat Sheet:

Keras Cheat Sheet:

Machine Learning Cheat Sheet:

Machine Learning Cheat Sheet:

ML Cheat Sheet:

Matplotlib Cheat Sheet:

Neural Networks Cheat Sheet:

Neural Networks Graph Cheat Sheet:

Neural Networks:

Numpy Cheat Sheet:

Pandas Cheat Sheet:

Pandas Cheat Sheet:

Pyspark Cheat Sheet:

Scikit Cheat Sheet:

Scikit-learn Cheat Sheet:

Scipy Cheat Sheet:

TensorFlow Cheat Sheet:

Tensor Flow:

10 Data Science and Machine Learning Courses for Beginners

Data Science, Machine Learning, Deep Learning, and Artificial Intelligence are really hot at the moment, offering programmers a lucrative career with high pay and exciting work.

It's a great opportunity for programmers who are willing to learn these new skills, upgrade themselves, and solve some of the most interesting real-world problems.

It's also important from a job perspective, because robots and bots are getting smarter day by day thanks to these technologies, and will most likely take over some of the jobs many programmers do today.

Hence, it's important for software engineers and developers to upgrade themselves with these skills. Programmers with these skills are also commanding significantly higher salaries as data science is revolutionizing the world around us.

You might already know that machine learning specialist is one of the top-paid technical jobs in the world. However, most developers and IT professionals have yet to learn this valuable set of skills.

For those who don't know what Data Science, Machine Learning, or Deep Learning are: they are closely related terms, all pointing toward machines doing jobs that until now only humans could do, and analyzing the huge sets of data collected by modern applications.

Data Science, in particular, is a combination of concepts such as machine learning, visualization, data mining, programming, data munging, etc.

If you have some programming experience, you can learn Python or R to make your career as a Data Scientist.

There are a lot of popular scientific Python libraries, such as NumPy, SciPy, Scikit-learn, and Pandas, which are used by data scientists for analyzing data.

To be honest with you, I am also quite new to the Data Science and Machine Learning world, but I have been spending time since last year to understand this field and have done some research into the best resources to learn machine learning, data science, etc.

I am sharing all those resources in a series of blog posts like this one. Earlier, I shared some courses to learn TensorFlow, one of the most popular machine learning libraries, and today I'll share some more to learn these technologies.

These are a combination of both free and paid resources that will help you understand key data science concepts and become a Data Scientist. Btw, I'll get paid if you happen to buy a course which is not free.

10 Useful Courses to Learn Machine Learning and Data Science for Programmers

Here is my list of some of the best courses to learn Data Science, Machine learning, and deep learning using Python and R programming language. As I have said, Data Science and machine learning work very closely together, hence some of these courses also cover machine learning.

If you are still on the fence about choosing Python or R for machine learning, let me tell you that both are great languages for data analysis, with good APIs and libraries; hence I have included courses in both Python and R, and you can choose the one you like.

I personally like Python because of its versatile usage; it's the next best language on my list after Java. I am already using it for writing scripts and other web stuff, so it was an easy choice for me. It has also got some excellent libraries like scikit-learn and TensorFlow.

Data Science is also a combination of many skills, e.g. visualization, data cleaning, data mining, etc., and these courses provide a good overview of all these concepts while also presenting a lot of useful tools that can help you in the real world.

Machine Learning by Andrew Ng

This is probably the most popular course to learn machine learning provided by Stanford University and Coursera, which also provides certification. You'll be tested on each and every topic that you learn in this course, and based on the completion and the final score that you get, you'll also be awarded the certificate.

This course is free, but you need to pay for the certificate if you want one. It still provides value to you as a developer and gives you a good understanding of the mathematics behind all the machine learning algorithms you come across.

I personally really like this one. Andrew Ng takes you through the course using Octave, which is a good tool for testing your algorithms before making them go live in your project.

1. Machine Learning A-Z: Hands-On Python and R in Data Science

This is probably the best hands-on course on Data Science and machine learning online. In this course, you will learn to create machine learning algorithms in Python and R from two Data Science experts.

This is a great course for students and programmers who want to make a career in Data Science and also Data Analysts who want to level up in machine learning.

It's also good for any intermediate-level programmers who know the basics of machine learning, including classical algorithms like linear regression or logistic regression, but want to learn more and explore all the different fields of Machine Learning.

2. Data Science with R by Pluralsight

Data science is the practice of transforming data into knowledge, and R is one of the most popular programming languages used by data scientists.

In this course, you'll first learn about the practice of data science, the R programming language, and how they can be used to transform data into actionable insight.

Next, you'll learn how to transform and clean your data, create and interpret descriptive statistics, data visualizations, and statistical models.

Finally, you'll learn how to handle Big Data, make predictions using machine learning algorithms, and deploy R to production.

Btw, you would need a Pluralsight membership to access this course, but if you don't have one you can still check it out by taking their 10-day free pass, which provides 200 minutes of access to all of their courses for free.

3. Harvard Data Science Course

The course is a combination of various data science concepts such as machine learning, visualization, data mining, programming, data munging, etc.

You will be using popular scientific Python libraries such as NumPy, SciPy, Scikit-learn, and Pandas throughout the course.

I suggest you complete the machine learning course on Coursera before taking this course, as machine learning concepts such as PCA (dimensionality reduction), k-means, and logistic regression are not covered in depth.

But remember, you have to invest a lot of time to complete this course; the homework exercises in particular are very challenging.

In short, if you are looking for an online course in data science (using Python), there is no better course than Harvard's CS 109. You need some background in programming and knowledge of statistics to complete this course.

4. Want to be a Data Scientist? (FREE)

This is a great introductory course on what Data Scientists do and how you can become a data science professional. It's also free, and you can get it on Udemy.

If you have just heard about Data Science and are excited about it but don't know what it really means, then this is the course you should attend first.

It's a small course but packed with big punches. You will understand what Data Science is, appreciate the work Data Scientists do on a daily basis, and differentiate the various roles in Data Science and the skills needed to perform them.

You will also learn about the challenges Data Scientists face. In short, this course will give you all the knowledge to make a decision on whether Data Science is the right path for you or not.

5. Intro to Data Science by Udacity

This is another good Introductory course on Data science which is available for free on Udacity, another popular online course website.

In this course, you will learn about essential Data science concepts e.g. Data Manipulation, Data Analysis with Statistics and Machine Learning, Data Communication with Information Visualization, and Data at Scale while working with Big Data.

This is a free course and it's also the first step towards a new career with the Data Analyst Nanodegree Program offered by Udacity.

6. Data Science Certification Training --- R Programming

This is another good course to learn Data Science with R. In this course, you will not only learn the R programming language but also get some hands-on experience with statistical modeling techniques.

The course has real-world examples of how analytics have been used to significantly improve a business or industry.

If you are interested in learning some practical analytic methods that don't require a ton of maths background to understand, this is the course for you.

7. Intro To Data Science Course by Coursera

This course provides a broad introduction to various concepts of data science. The first programming exercise, "Twitter Sentiment Analysis in Python", is both fun and challenging: you analyze tons of Twitter messages to find out their sentiment, e.g. negative or positive.
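To give a flavor of what that exercise involves, here is a toy lexicon-based sentiment scorer in plain Python. The words and weights below are made up for illustration; real solutions use a much larger lexicon such as AFINN and handle punctuation and negation properly:

```python
# Tiny hand-made sentiment lexicon (illustrative only).
sentiment = {"love": 2, "great": 1, "bad": -1, "awful": -2}

def score(tweet):
    """Sum the sentiment weights of the words in a message; unknown words count 0."""
    return sum(sentiment.get(word, 0) for word in tweet.lower().split())

print(score("I love this great phone"))    # 3  -> positive
print(score("awful battery bad screen"))   # -3 -> negative
```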

The course assumes that you know statistics, Python, and SQL.

Btw, it's not so good for beginners, especially if you don't know Python and SQL, but if you do and have a basic understanding of Data Science, then this is a great course.

8. Python for Data Science and Machine Learning Bootcamp

There is no doubt that Python is probably the best language, apart from R, for data analysis, and that's why it's hugely popular among Data Scientists.

This course will teach you how to use all the important Python scientific and machine learning libraries: TensorFlow, NumPy, Pandas, Seaborn, Matplotlib, Plotly, Scikit-Learn, and many more libraries which I have explained earlier in my list of useful machine learning libraries.

It's a very comprehensive course, and you will learn how to use the power of Python to analyze data, create beautiful visualizations, and use powerful machine learning algorithms!

9. Data Science A-Z: Real-Life Data Science Exercises Included

This is another great hands-on course on Data Science from Udemy. It promises to teach you Data Science step by step through real analytics examples: data mining, modeling, Tableau visualization, and more.

This course will give you so many practical exercises that the real world will seem like a piece of cake when you complete this course.

The homework exercises are also very thought-provoking and challenging. In short, If you love doing stuff then this is a course for you.

10. Data Science, Deep Learning and Machine Learning with Python

If you've got some programming or scripting experience, this course will teach you the techniques used by real data scientists and machine learning practitioners in the tech industry --- and help you to become a data scientist.

The topics in this course come from an analysis of real requirements in data scientist job listings from the biggest tech employers, which makes it even more special and useful.

That's all about some of the popular courses to learn Data Science. As I said, there is a lot of demand for good data analysts, and there are not many developers out there to fulfill that demand.

It's a great chance for programmers, especially those with a good knowledge of maths and statistics, to make a career in machine learning and data analytics. You will be rewarded with exciting work and incredible pay.

Other useful Data Science and Machine Learning resources

Top 8 Python Machine Learning Libraries

5 Free courses to learn R Programming for Machine learning

5 Free courses to learn Python in 2018

Top 5 Data Science and Machine Learning courses

Top 5 TensorFlow and Machine Learning Courses

10 Technologies Programmers Can Learn in 2018

Top 5 Courses to Learn Python Better

How a Japanese cucumber farmer is using deep learning and TensorFlow

Closing Notes

Thanks, you made it to the end of the article! Good luck with your Data Science and Machine Learning journey! It's certainly not going to be easy, but by following these courses you are one step closer to becoming the Machine Learning Specialist you always wanted to be.