What NOT to Do in the Data Science Domains Industry

Data science is linked to other modern buzzwords such as big data and machine learning, but data science itself is built from numerous domains in which you can develop expertise. These domains include the following:

  • Statistics
  • Visualization
  • Data mining
  • Machine learning
  • Pattern recognition
  • Data platform operations
  • Artificial intelligence
  • Programming

Math and statistics
Statistics and other math skills are essential in several phases of a data science project. Even at the beginning of data exploration, you'll be dividing the features of your data observations into categories:
  • Categorical
  • Numeric:
      • Discrete
      • Continuous
Discrete variables take a countable number of distinct values, while continuous variables have an infinite number of possible values and are represented by real numbers. In a nutshell, discrete variables are like points plotted on a chart, and a continuous variable can be plotted as a line.
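
As a quick illustration (the column names and the dtype-based heuristic are my own assumptions, not from the text), here is one way to split the features of a pandas DataFrame into these categories:

```python
# A minimal sketch of dividing features into categorical, discrete numeric,
# and continuous numeric columns; the integer-vs-float rule is a simplification.
import pandas as pd

df = pd.DataFrame({
    "color": ["red", "green", "red"],      # categorical
    "items_per_order": [1, 3, 2],          # numeric, discrete (countable)
    "order_value": [9.99, 24.50, 14.00],   # numeric, continuous (real-valued)
})

categorical = df.select_dtypes(include=["object", "category"]).columns.tolist()
discrete = df.select_dtypes(include=["int64"]).columns.tolist()
continuous = df.select_dtypes(include=["float64"]).columns.tolist()

print("categorical:", categorical)   # ['color']
print("discrete:   ", discrete)      # ['items_per_order']
print("continuous: ", continuous)    # ['order_value']
```
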
Another way to classify data is from the measurement-level point of view. We can split data into two primary categories:

  • Qualitative:
      • Nominal
      • Ordinal
  • Quantitative:
      • Interval
      • Ratio
Nominal variables can't be ordered and only describe an attribute. An example would be the color of a product; it describes how the product looks, but you can't impose any ordering on colors by saying that red is greater than green, and so on. Ordinal variables describe a feature with a categorical value and also provide an ordering scheme; for example, education: elementary, high school, university degree, and so on. Interval variables have meaningful differences between values but no true zero point (such as temperature in Celsius), whereas ratio variables also have a true zero (such as weight or price).
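
To make the distinction concrete, here is a small sketch in pandas (the color and education values are illustrative, not from the text): a nominal categorical has no ordering, while an ordinal one declares an explicit order.

```python
# Nominal vs. ordinal categorical variables in pandas.
import pandas as pd

# Nominal: categories with no meaningful order.
color = pd.Categorical(["red", "green", "blue"])
print(color.ordered)  # False -- 'red < green' is undefined

# Ordinal: the same machinery, but with an explicit ordering scheme.
education = pd.Categorical(
    ["high school", "elementary", "university degree"],
    categories=["elementary", "high school", "university degree"],
    ordered=True,
)
print(education.min())                  # 'elementary'
print(education < "university degree")  # element-wise ordinal comparison
```
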

Visualizing the types of data
Visualizing and communicating data is incredibly important, especially for young companies that are making data-driven decisions for the first time, or for companies where data scientists are viewed as people who help others make data-driven decisions. Communicating means describing your findings, or the way techniques work, to both technical and non-technical audiences. Different types of data call for different kinds of representation. For categorical values, the ideal visuals are these:

  • Bar charts
  • Pie charts
  • Pareto diagrams
  • Frequency distribution tables

A bar chart visually represents the values stored in a frequency distribution table. Each bar represents one categorical value. A bar chart is also the baseline for a Pareto diagram, which includes the relative and cumulative frequency of the categorical values:
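
As a small sketch (the sample data is illustrative), a frequency distribution table and its bar chart can be produced like this:

```python
# Frequency distribution table for a categorical feature, plotted as a bar chart.
import pandas as pd
import matplotlib.pyplot as plt

colors = pd.Series(["red", "green", "red", "blue", "red", "green"])

freq = colors.value_counts()   # absolute frequency per category
print(freq)                    # red 3, green 2, blue 1

freq.plot.bar(title="Frequency of product colors")
plt.ylabel("Frequency")
plt.show()
```
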

Bar chart representing the frequency of the categorical values
If we add the cumulative frequency to the bar chart, we get a Pareto diagram of the same data:

Pareto diagram representing the relative and cumulative frequency for the categorical values
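
A minimal sketch of such a Pareto diagram in matplotlib (using the same illustrative color data as above) combines the frequency bars with a cumulative-percentage line on a second axis:

```python
# Pareto diagram: frequency bars plus a cumulative relative-frequency line.
import pandas as pd
import matplotlib.pyplot as plt

colors = pd.Series(["red", "green", "red", "blue", "red", "green"])
freq = colors.value_counts()                   # sorted descending by default
cumulative = freq.cumsum() / freq.sum() * 100  # cumulative frequency in percent

fig, ax = plt.subplots()
ax.bar(freq.index, freq.values)
ax.set_ylabel("Frequency")

ax2 = ax.twinx()                               # second y-axis for the line
ax2.plot(freq.index, cumulative.values, color="black", marker="o")
ax2.set_ylabel("Cumulative frequency (%)")
ax2.set_ylim(0, 110)

plt.title("Pareto diagram of product colors")
plt.show()
```
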
Another very useful type of visualization for categorical data is the pie chart. Pie charts display each categorical value as a percentage of the total, which in statistics is called the relative frequency. This type of visual is commonly used for market-share charts.
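
The relative frequency itself is one line of pandas, and plotting it as a pie chart is another (again with illustrative data):

```python
# Relative frequency of each category, shown as a pie chart.
import pandas as pd
import matplotlib.pyplot as plt

colors = pd.Series(["red", "green", "red", "blue", "red", "green"])
relative = colors.value_counts(normalize=True)   # share of the total per category

relative.plot.pie(autopct="%.1f%%", title="Relative frequency of product colors")
plt.ylabel("")
plt.show()
```
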

Statistics
A good understanding of statistics is vital for a data scientist. You should be familiar with statistical tests, distributions, maximum likelihood estimators, and so on. The same is true for machine learning, but one of the more important aspects of your statistics knowledge is understanding when different techniques are (or aren't) a valid approach. Statistics matters for all types of companies, especially data-driven ones whose stakeholders depend on your help to make decisions and to design and evaluate experiments.
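
As a hedged sketch of two of those skills (the simulated data and parameters are my own assumptions), scipy.stats covers both a classic statistical test and a maximum likelihood fit:

```python
# A two-sample t-test and an MLE fit of a normal distribution with scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=200)
treatment = rng.normal(loc=105.0, scale=15.0, size=200)

# Statistical test: do the two groups share the same mean?
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Maximum likelihood estimators: norm.fit returns the MLE of mu and sigma.
mu_hat, sigma_hat = stats.norm.fit(treatment)
print(f"MLE: mu = {mu_hat:.1f}, sigma = {sigma_hat:.1f}")
```
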

Machine learning
A very important part of data science is machine learning. Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it.

Choosing the right algorithm
When choosing a machine-learning algorithm, you have to consider numerous factors to pick the right one for the task. The choice should be based not only on the predicted output (category, value, cluster, and so on) but also on numerous other factors, such as these:

  1. Training time
  2. Size of data and number of features you’re processing
  3. Accuracy
  4. Linearity
  5. Number of possible parameters
Training time can range from minutes to hours, depending not only on the algorithm but also on the number of features entering the model and the total amount of data being processed. A proper choice of algorithm can make the training time much shorter than the alternatives; in general, regression models reach the fastest training times, whereas neural network models sit at the other end of the training-time spectrum. Remember that developing a machine-learning model is iterative work. You will usually try several models and compare possible metrics, fine-tune the candidates based on the metrics captured, run the comparisons again, and choose one model for operations. Even with experience, you might not choose the right algorithm at first, and you might be surprised that other algorithms can outperform your first chosen candidate.
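
A minimal sketch of that iterative comparison with scikit-learn (the dataset and the two candidate models are illustrative assumptions) might look like this:

```python
# Compare a fast linear baseline against a slower non-linear model on the
# same data, measuring both cross-validated accuracy and training time.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

candidates = [
    ("logistic regression (fast, linear)", LogisticRegression(max_iter=1000)),
    ("random forest (slower, non-linear)", RandomForestClassifier(n_estimators=200)),
]

for name, model in candidates:
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy {scores.mean():.3f}, time {elapsed:.1f}s")
```
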

Big data
Big data is another modern buzzword that you will find around data management and analytics platforms. The "big" does not necessarily mean that the data volume is extremely large, although it usually is.
SQL Server and big data
Let's face reality: SQL Server is not a big-data system. However, there is a feature in SQL Server that allows us to interact with the other big-data systems deployed in the enterprise, and this is huge!
It allows us to take traditional relational data in SQL Server and combine it with the results from big-data systems directly, or even to run queries against the big-data systems from SQL Server. The technology that makes this possible is called PolyBase.
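
PolyBase itself is configured in T-SQL, but once an external table exists it can be queried like any other table. As a hedged illustration (the server, database, and the dbo.WebLogs external table are hypothetical, not from the text), here is how such a query might be run from Python:

```python
# Querying a hypothetical PolyBase external table alongside local relational
# data. Assumes dbo.WebLogs was already created with CREATE EXTERNAL TABLE.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# One T-SQL statement joins a local table with the external (big-data) table.
cursor.execute("""
    SELECT c.CustomerName, COUNT(*) AS Visits
    FROM dbo.Customers AS c
    JOIN dbo.WebLogs AS w ON w.CustomerId = c.CustomerId  -- external table
    GROUP BY c.CustomerName
""")
for row in cursor.fetchall():
    print(row.CustomerName, row.Visits)
```
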
