Mutation testing in PHP project

According to Wikipedia, mutation testing involves applying small modifications to a piece of software. These modifications are called mutations. If you then run your tests and one of them fails because of such a mutation, you have "killed the mutant".

Recently I spent an afternoon experimenting with mutation testing in PHP. In this post I would like to share the background, the main idea of mutation testing, and the lessons I’ve learned from it.

Mutation testing

In essence, mutation testing is a way of measuring the quality of a test suite. A tool generates a number of copies (mutants) of your source code under test, but modifies each of them in a small way. These small modifications are basically just errors that you as a programmer could have made, such as replacing a < with a <=. A high-quality test suite would detect these modifications (kill the mutant) by failing one or more tests. In a suboptimal test suite it might happen that the tests remain green despite the modification to the source code. In such a case we speak of an escaped mutant. These present an opportunity to improve the test suite by adding a test that fails in the presence of the given modification. The Mutation Score Indicator (MSI, the percentage of mutants detected by the test suite) provides a metric for the quality of the test suite.

My plan was to apply Infection to one of our internal libraries. I picked this library because of its high (line-based) code coverage, which would suggest a high-quality test suite. I therefore wondered what ‘leaks’ Infection could still find in it. The idea was to convert the escaped mutants found by Infection into new tests, working towards a PR that increases the MSI of the project as the main deliverable. This blog post wasn’t part of the original plan, but arose as an extra way of sharing some of the lessons learned along the way.
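
For readers who have not used Infection before, the setup is small: an infection.json file in the project root plus a single command. The snippet below is a minimal sketch with illustrative values (source directory, thread count), not the actual configuration of the library in question:

{
    "source": {
        "directories": ["src"]
    },
    "logs": {
        "text": "infection.log"
    },
    "mutators": {
        "@default": true
    }
}

Running it is then a matter of:

vendor/bin/infection --threads=4 --show-mutations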

Lessons learned

Mutation testing is awesome!

My main takeaway was that mutation testing can really help you to improve your test suite. Even on a project with 96% line coverage, Infection found multiple scenarios that were not actually covered by the test suite.

One simplified example of this is the following. Suppose we have a function to generate a description for the number of bottles of beer on the wall:

<?php
function describeBottles(int $amount = 99): string {
    return $amount . ' bottles of beer on the wall';
}

We could already have a test for this function like this:

<?php
use PHPUnit\Framework\TestCase;

class DescribeBottlesTest extends TestCase {
    public function testDescribesTheAmountOfBottlesOfBeer() {
        $this->assertSame('42 bottles of beer on the wall', describeBottles(42));
    }
}

At first sight it may seem that this fully tests the function, and indeed the function shows up with all lines covered in a code coverage report. However, our tests do not check the default value for the argument. This means that some behavior of our function, i.e. that by default it describes 99 bottles, is not verified. Infection can uncover this when it produces a mutation like:

12) /tmp/bottles.php:2    [M] DecrementInteger

--- Original
+++ New
@@ @@
 <?php

-function describeBottles(int $amount = 99): string {
+function describeBottles(int $amount = 98): string {
     return $amount . ' bottles of beer on the wall';
 }

Here the DecrementInteger mutator has decremented an integer literal occurring in the source code, an error we could have made ourselves if we hit the wrong key on our keyboard. This would currently go unnoticed, but we can fix that by adding a test like:

<?php
use PHPUnit\Framework\TestCase;

class DescribeBottlesTest extends TestCase {
    public function testDescribes99BottlesByDefault() {
        $this->assertSame('99 bottles of beer on the wall', describeBottles());
    }
}

Other untested aspects commonly found by Infection were exception messages and some uncovered paths through complex logic. I added tests or assertions for most of these. For the logic, this is also a sign that the (cyclomatic/N-path) complexity is too high. Those pieces of code should be refactored, but I scheduled that for later.
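
For example, an exception message can be pinned down with PHPUnit's expectException/expectExceptionMessage. The class, message, and function below are made-up placeholders for illustration, not code from the actual library:

<?php
use PHPUnit\Framework\TestCase;

class RegisterUserTest extends TestCase {
    public function testRefusesARegistrationForAnExistingUsername() {
        // Asserting the exact message means a mutation of the message string
        // will make this test fail, killing the mutant.
        $this->expectException(UserAlreadyExistsException::class);
        $this->expectExceptionMessage('User john already exists');

        registerUser('john'); // hypothetical function under test
    }
}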

On the first Infection run, without any changes to the test suite, it produced an 89% MSI. This is already quite good, but with some additions to the test suite I managed to raise the MSI to 93%.

‘False positives’

Getting the MSI much higher proved to be difficult though. Sometimes escaped mutants had changes to parts of the code that are nonessential details. In our view, these details do not constitute relevant behavior and are not part of the ‘contract’ of that unit. Why are they in the code then? Well, sometimes they have to be there due to syntactical constraints. Take for example the following piece of code that may throw an exception:

<?php
// ...
try {
    $database->executeSql("...");
} catch (DuplicateDatabaseKeyException $e) {
    throw new UserAlreadyExistsException("User $username already exists", 0, $e);
}

As explained in a previous blog post, we think it is important that exceptions are thrown at the right level of abstraction.

This requires catching and re-throwing exceptions like in the above code snippet. In that post, I also mentioned that we want to maintain the connection with the root cause by setting the $previous parameter of the new exception. Due to PHP's syntax this requires one to also provide the $code parameter, which we do not really use. We usually set it to 0 (the default), but honestly we couldn't care less about its value.

Now the same DecrementInteger mutator could come along and produce this mutation:

53) /tmp/exception.php:289   [M] DecrementInteger

--- Original
+++ New
@@ @@
 } catch (DuplicateDatabaseKeyException $e) {
-    throw new UserAlreadyExistsException("User $username already exists", 0, $e);
+    throw new UserAlreadyExistsException("User $username already exists", -1, $e);
 }

This mutation will not get caught by our test suite (because we do not assert the code of produced exceptions), so the mutant will escape. In this case, however, we do not care. We don’t expect our test suite to detect this, as (to us) the mutated code is just as fine as the original. We could add assertions for the exception code, but that would just be extra work without yielding extra value.

I think this is something to be aware of when doing mutation testing. Not all escaped mutants are necessarily bad. For each of them you have to ask yourself the question “Would it be bad if I made this ‘error’ in my code?”. If the answer is no, don’t bother about the escaped mutant.

What test to add?

One of the main difficulties when trying to kill a mutant was figuring out what kind of test to add. Just from seeing a changed line in the code it is not always clear how to write a test that would fail on the given line. This was further amplified by the fact that the codebase I worked with was written by a colleague, and I did not know it inside out yet.

I learned that the HTML code coverage report generated by PHPUnit can be of tremendous help here. If you hover over a covered line of code there, it shows you which tests cover that line. This way you can look up which tests already exercise the mutated line of code. The test you want to add to kill the mutant is probably a variation of one of them. This reduces your problem to analyzing these ‘example’ tests and reasoning about what you could change in them to fail when the mutation is present.
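
Generating that HTML report is a single PHPUnit run, assuming a coverage driver such as Xdebug is available; the output directory here is just an example:

vendor/bin/phpunit --coverage-html coverage/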

It improves your code too

Not only did the test suite get some updates during my experiments with mutation testing; the production code improved as well. Sometimes the mutants generated by Infection were actually better than the original version! One such case looked somewhat like this:

<?php
class SomeStore {
    public static function createWithInMemoryDatabase(): self {
        $database = $database ?? new SqliteDatabase(':memory:');
        // ...
    }
}

Among the generated mutations there was one that removed the $database ?? part of the null coalescing expression. This is actually an improvement, as the null coalescing operator is useless here! At the start of the function, $database is always null, so the operator always resolves to its right-hand side, creating a new database. This code was an artifact from a time when the method was named differently and allowed injecting a custom database through a $database parameter. Now that the parameter has been removed, we can get rid of the null coalescing as well. While other static analysis tools could have found this dead code too, at least Infection brought it to our attention.
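
After adopting that suggestion, the method simply becomes:

<?php
class SomeStore {
    public static function createWithInMemoryDatabase(): self {
        $database = new SqliteDatabase(':memory:');
        // ...
    }
}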

Another example where the mutant turned out to be better was the removal of a trim() call. At that spot in the code there could never be any significant or problematic whitespace. The trim() call was thus unnecessary and could be removed.

Conclusion

Once you’ve had some practice with mutation testing, it can really help with improving both your test suite and your code. Infection is straightforward to set up and use, and makes it fairly simple to get started in PHP. Just keep in mind that not all escaped mutants are a problem, and blindly striving for 100% MSI does not add value. Try it out, and let me know what your experiences are!

Originally published on https://www.moxio.com/blog/39/mutation-testing-in-php

The best machine learning and deep learning libraries

TensorFlow, Spark MLlib, Scikit-learn, PyTorch, MXNet, and Keras all shine for building and training machine learning and deep learning models. If you’re starting a new machine learning or deep learning project, you may be confused about which framework to choose.

There is a difference between a machine learning framework and a deep learning framework. Essentially, a machine learning framework covers a variety of learning methods for classification, regression, clustering, anomaly detection, and data preparation, and may or may not include neural network methods.

A deep learning or deep neural network framework covers a variety of neural network topologies with many hidden layers. Keras, MXNet, PyTorch, and TensorFlow are deep learning frameworks. Scikit-learn and Spark MLlib are machine learning frameworks. (Click any of the previous links to read my stand-alone review of the product.)

In general, deep neural network computations run much faster on a GPU (specifically an Nvidia CUDA general-purpose GPU), TPU, or FPGA, rather than on a CPU. In general, simpler machine learning methods don’t benefit from a GPU.

While you can train deep neural networks on one or more CPUs, the training tends to be slow, and by slow I’m not talking about seconds or minutes. The more neurons and layers that need to be trained, and the more data available for training, the longer it takes. When the Google Brain team trained its language translation models for the new version of Google Translate in 2016, they ran their training sessions for a week at a time, on multiple GPUs. Without GPUs, each model training experiment would have taken months.

Since then, the Intel Math Kernel Library (MKL) has made it possible to train some neural networks on CPUs in a reasonable amount of time. Meanwhile GPUs, TPUs, and FPGAs have gotten even faster.

The training speed of all of the deep learning packages running on the same GPUs is nearly identical. That’s because the training inner loops spend most of their time in the Nvidia CuDNN package.

Apart from training speed, each of the deep learning libraries has its own set of pros and cons, and the same is true of Scikit-learn and Spark MLlib. Let’s dive in.

Keras

Keras is a high-level, front-end specification and implementation for building neural network models that ships with support for three back-end deep learning frameworks: TensorFlow, CNTK, and Theano. Amazon is currently working on developing an MXNet back-end for Keras. It’s also possible to use PlaidML (an independent project) as a back-end for Keras to take advantage of PlaidML’s OpenCL support for all GPUs.

TensorFlow is the default back-end for Keras, and the one recommended for many use cases involving GPU acceleration on Nvidia hardware via CUDA and cuDNN, as well as for TPU acceleration in Google Cloud. TensorFlow also contains an internal tf.keras implementation, separate from an external Keras installation.

Keras has a high-level environment that makes adding a layer to a neural network as easy as one line of code in its Sequential model, and requires only one function call each for compiling and training a model. Keras lets you work at a lower level if you want, with its Model or functional API.

Keras allows you to drop down even farther, to the Python coding level, by subclassing keras.Model, but prefers the functional API when possible. Keras also has a scikit-learn API, so that you can use the Scikit-learn grid search to perform hyperparameter optimization in Keras models.
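
To give a feel for that workflow, here is a minimal sketch (not taken from any particular project; the layer sizes and dummy data are arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy data standing in for a real dataset.
x_train = np.random.rand(100, 20)
y_train = np.random.randint(10, size=(100,))

# One line per layer in the Sequential model.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(10, activation='softmax'))

# One call each for compiling and training.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=32)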

Cost: Free open source.

Platform: Linux, MacOS, Windows, or Raspbian; TensorFlow, Theano, or CNTK back-end.

MXNet

MXNet has evolved and improved quite a bit since moving under the Apache Software Foundation umbrella early in 2017. While there has been work on Keras with an MXNet back-end, a different high-level interface has become much more important: Gluon. Prior to the incorporation of Gluon, you could either write easy imperative code or fast symbolic code in MXNet, but not both at once. With Gluon, you can combine the best of both worlds, in a way that competes with both Keras and PyTorch.

The advantages claimed for Gluon include:

  • Simple, easy-to-understand code: Gluon offers a full set of plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers.
  • Flexible, imperative structure: Gluon does not require the neural network model to be rigidly defined, but rather brings the training algorithm and model closer together to provide flexibility in the development process.
  • Dynamic graphs: Gluon enables developers to define neural network models that are dynamic, meaning they can be built on the fly, with any structure, and using any of Python’s native control flow.
  • High performance: Gluon provides all of the above benefits without impacting the training speed that the underlying engine provides.

These four advantages, along with a vastly expanded collection of model examples, bring Gluon/MXNet to rough parity with Keras/TensorFlow and PyTorch for ease of development and training speed. You can see code examples for each of these on the main Gluon page and repeated on the overview page for the Gluon API.

The Gluon API includes functionality for neural network layers, recurrent neural networks, loss functions, dataset methods and vision datasets, a model zoo, and a set of contributed experimental neural network methods. You can freely combine Gluon with standard MXNet and NumPy modules, for example module, autograd, and ndarray, as well as with Python control flows.

Gluon has a good selection of layers for building models, including basic layers (Dense, Dropout, etc.), convolutional layers, pooling layers, and activation layers. Each of these is a one-line call. These can be used, among other places, inside of network containers such as gluon.nn.Sequential().
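
A minimal sketch of that style, with arbitrary layer sizes and dummy input:

from mxnet import nd, init
from mxnet.gluon import nn

# Plug-and-play layers inside a Sequential container, one line per layer.
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),
        nn.Dropout(0.5),
        nn.Dense(10))
net.initialize(init.Xavier())

# The network is defined imperatively; just call it on a batch of data.
x = nd.random.uniform(shape=(32, 20))
print(net(x).shape)  # (32, 10)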

Cost: Free open source.

Platform: Linux, MacOS, Windows, Docker, Raspbian, and Nvidia Jetson; Python, R, Scala, Julia, Perl, C++, and Clojure (experimental). MXNet is included in the AWS Deep Learning AMI.

PyTorch

PyTorch builds on the old Torch and the new Caffe2 framework. As you might guess from the name, PyTorch uses Python as its scripting language, and it uses an evolved Torch C/CUDA back-end. The production features of Caffe2 are being incorporated into the PyTorch project.

PyTorch is billed as “Tensors and dynamic neural networks in Python with strong GPU acceleration.” What does that mean?

Tensors are mathematical constructs that are used heavily in physics and engineering. A tensor of rank two is a special kind of matrix; taking the inner product of a vector with the tensor yields another vector with a new magnitude and a new direction. TensorFlow takes its name from the way tensors (of synapse weights) flow around its network model. NumPy also uses tensors, but calls them ndarrays.

GPU acceleration is a given for most modern deep neural network frameworks. A dynamic neural network is one that can change from iteration to iteration, for example allowing a PyTorch model to add and remove hidden layers during training to improve its accuracy and generality. PyTorch recreates the graph on the fly at each iteration step. In contrast, TensorFlow by default creates a single dataflow graph, optimizes the graph code for performance, and then trains the model.

While eager execution mode is a fairly new option in TensorFlow, it’s the only way PyTorch runs: API calls execute when invoked, rather than being added to a graph to be run later. That might seem like it would be less computationally efficient, but PyTorch was designed to work that way, and it is no slouch when it comes to training or prediction speed.
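
A small sketch of that define-by-run style; every call below executes immediately rather than being added to a graph to run later:

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(20, 64)
        self.out = nn.Linear(64, 10)

    def forward(self, x):
        # Ordinary Python control flow is allowed here, because the
        # graph is effectively rebuilt on every forward pass.
        x = torch.relu(self.hidden(x))
        return self.out(x)

net = TinyNet()
x = torch.randn(32, 20)   # a batch of dummy inputs
print(net(x).shape)       # torch.Size([32, 10]), computed immediately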

PyTorch integrates acceleration libraries such as Intel MKL and Nvidia cuDNN and NCCL (Nvidia Collective Communications Library) to maximize speed. Its core CPU and GPU Tensor and neural network back-ends—TH (Torch), THC (Torch CUDA), THNN (Torch Neural Network), and THCUNN (Torch CUDA Neural Network)—are written as independent libraries with a C99 API. At the same time, PyTorch is not a Python binding into a monolithic C++ framework—the intention is for it to be deeply integrated with Python and to allow the use of other Python libraries.

Cost: Free open source.

Platform: Linux, MacOS, Windows; CPUs and Nvidia GPUs.

Scikit-learn

The Scikit-learn Python framework has a wide selection of robust machine learning algorithms, but no deep learning. If you’re a Python fan, Scikit-learn may well be the best option for you among the plain machine learning libraries.

Scikit-learn is a robust and well-proven machine learning library for Python with a wide assortment of well-established algorithms and integrated graphics. It is relatively easy to install, learn, and use, and it has good examples and tutorials.

On the con side, Scikit-learn does not cover deep learning or reinforcement learning, lacks graphical models and sequence prediction, and it can’t really be used from languages other than Python. It doesn’t support PyPy, the Python just-in-time compiler, or GPUs. That said, except for its minor foray into neural networks, it doesn’t really have speed problems. It uses Cython (the Python to C compiler) for functions that need to be fast, such as inner loops.

Scikit-learn has a good selection of algorithms for classification, regression, clustering, dimensionality reduction, model selection, and preprocessing. It has good documentation and examples for all of these, but lacks any kind of guided workflow for accomplishing these tasks.

Scikit-learn earns top marks for ease of development, mostly because the algorithms all work as documented, the APIs are consistent and well-designed, and there are few “impedance mismatches” between data structures. It’s a pleasure to work with a library whose features have been thoroughly fleshed out and whose bugs have been thoroughly flushed out.
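
That consistency is easy to see in practice: nearly every estimator follows the same fit/predict pattern. A minimal sketch on one of the bundled datasets:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Swapping in almost any other estimator leaves the rest of this code unchanged.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))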

On the other hand, the library does not cover deep learning or reinforcement learning, which leaves out the current hard but important problems, such as accurate image classification and reliable real-time language parsing and translation. Clearly, if you’re interested in deep learning, you should look elsewhere.

Nevertheless, there are many problems—ranging from building a prediction function linking different observations, to classifying observations, to learning the structure of an unlabeled dataset—that lend themselves to plain old machine learning without needing dozens of layers of neurons, and for those areas Scikit-learn is very good indeed.

Cost: Free open source.

Platform: Requires Python, NumPy, SciPy, and Matplotlib. Releases are available for MacOS, Linux, and Windows.

Spark MLlib

Spark MLlib, the open source machine learning library for Apache Spark, provides common machine learning algorithms such as classification, regression, clustering, and collaborative filtering (but not deep neural networks). It also includes tools for feature extraction, transformation, dimensionality reduction, and selection; tools for constructing, evaluating, and tuning machine learning pipelines; and utilities for saving and loading algorithms, models, and pipelines, for data handling, and for doing linear algebra and statistics.

Spark MLlib is written in Scala, and uses the linear algebra package Breeze. Breeze depends on netlib-java for optimized numerical processing, although in the open source distribution that means optimized use of the CPU. Databricks offers customized Spark clusters that use GPUs, which can potentially get you another 10x speed improvement for training complex machine learning models with big data.

Spark MLlib implements a truckload of common algorithms and models for classification and regression, to the point where a novice could become confused, but an expert would be likely to find a good choice of model for the data to be analyzed, eventually. To this plethora of models Spark 2.x adds the important feature of hyperparameter tuning, also known as model selection. Hyperparameter tuning allows the analyst to set up a parameter grid, an estimator, and an evaluator, and let the cross-validation method (time-consuming but accurate) or train validation split method (faster but less accurate) find the best model for the data.
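
In PySpark, that setup looks roughly like the sketch below; the column names, grid values, and the train_df DataFrame are assumptions for illustration:

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

lr = LogisticRegression(featuresCol="features", labelCol="label")

# A parameter grid, an estimator, and an evaluator, handed to cross-validation.
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1, 1.0])
        .addGrid(lr.elasticNetParam, [0.0, 0.5])
        .build())
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)

# train_df is assumed to be a DataFrame with "features" and "label" columns.
best_model = cv.fit(train_df).bestModel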

Spark MLlib has full APIs for Scala and Java, mostly-full APIs for Python, and sketchy partial APIs for R. You can get a good feel for the coverage by counting the samples: 54 Java and 60 Scala machine learning examples, 52 Python machine learning examples, and only five R examples. In my experience Spark MLlib is easiest to work with using Jupyter notebooks, but you can certainly run it in a console if you tame the verbose Spark status messages.

Spark MLlib supplies pretty much anything you’d want in the way of basic machine learning, feature selection, pipelines, and persistence. It does a pretty good job with classification, regression, clustering, and filtering. Given that it is part of Spark, it has great access to databases, streams, and other data sources. On the other hand, Spark MLlib is not really set up to model and train deep neural networks in the same way as TensorFlow, PyTorch, MXNet, and Keras.

Cost: Free open source.

Platform: Spark runs on both Windows and Unix-like systems (e.g. Linux, MacOS), with Java 7 or later, Python 2.6/3.4 or later, and R 3.1 or later. For the Scala API, Spark 2.0.1 uses Scala 2.11. Spark can use Hadoop/HDFS but does not strictly require it.

TensorFlow

TensorFlow is probably the gold standard for deep neural network development, although it is not without its defects. Two of the biggest issues with TensorFlow historically were that it was too hard to learn and that it took too much code to create a model. Both issues have been addressed over the last few years.

To make TensorFlow easier to learn, the TensorFlow team has produced more learning materials as well as clarifying the existing “getting started” tutorials. A number of third parties have produced their own tutorial materials (including InfoWorld). There are now multiple TensorFlow books in print, and several online TensorFlow courses. You can even follow the CS20 course at Stanford, TensorFlow for Deep Learning Research, which posts all the slides and lecture notes online.

There are several new sections of the TensorFlow library that offer interfaces that require less programming to create and train models. These include tf.keras, which provides a TensorFlow-only version of the otherwise engine-neutral Keras package, and tf.estimator, which provides a number of high-level facilities for working with models, including both regressors and classifiers for linear models, deep neural networks, and combined linear and deep neural networks, plus a base class from which you can build your own estimators. In addition, the Dataset API enables you to build complex input pipelines from simple, reusable pieces. You don’t have to choose just one. As this tutorial shows, you can usefully make tf.keras, tf.data.Dataset, and tf.estimator work together.
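
As a rough sketch of those pieces working together (assuming a recent TensorFlow release; shapes and dummy data are arbitrary):

import numpy as np
import tensorflow as tf

# Dummy data; in practice this would come from files or generators.
features = np.random.rand(1000, 20).astype("float32")
labels = np.random.randint(2, size=(1000,))

# The Dataset API builds an input pipeline from simple, reusable pieces.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(1000)
           .batch(32))

# tf.keras defines and trains the model with very little code.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=3)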

TensorFlow Lite is TensorFlow’s lightweight solution for mobile and embedded devices, which enables on-device machine learning inference (but not training) with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. TensorFlow Lite models are small enough to run on mobile devices, and can serve the offline use case.

The basic idea of TensorFlow Lite is that you train a full-blown TensorFlow model and convert it to the TensorFlow Lite model format. Then you can use the converted file in your mobile application on Android or iOS.
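
With a recent TensorFlow release (2.x assumed here) that conversion takes only a few lines; the file name is just an example:

import tensorflow as tf

# 'model' is a trained tf.keras model, for instance the one from the previous sketch.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flat buffer to disk; this file is what ships with the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)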

Alternatively, you can use one of the pre-trained TensorFlow Lite models for image classification or smart replies. Smart replies are contextually relevant messages that can be offered as response options; this essentially provides the same reply prediction functionality as found in Google’s Gmail clients.

Yet another option is to retrain an existing TensorFlow model against a new tagged dataset, an important technique called transfer learning, which reduces training times significantly. A hands-on tutorial on this process is called TensorFlow for Poets.

Cost: Free open source.

Platform: Ubuntu 14.04 or later, MacOS 10.11 or later, Windows 7 or later; Nvidia GPU and CUDA recommended. Most clouds now support TensorFlow with Nvidia GPUs. TensorFlow Lite runs trained models on Android and iOS.

Machine learning or deep learning?

Sometimes you know that you’ll need a deep neural network to solve a particular problem effectively, for example to classify images, recognize speech, or translate languages. Other times, you don’t know whether that’s necessary, for example to predict next month’s sales figures or to detect outliers in your data.

If you do need a deep neural network, then Keras, MXNet with Gluon, PyTorch, and TensorFlow with Keras or Estimators are all good choices. If you aren’t sure, then start with Scikit-learn or Spark MLlib and try all the relevant algorithms. If you get satisfactory results from the best model or an ensemble of several models, you can stop.

If you need better results, then try to perform transfer learning on a trained deep neural network. If you still don’t get what you need, then try building and training a deep neural network from scratch. To refine your model, try hyperparameter tuning.

No matter what method you use to train a model, remember that the model is only as good as the data you use for training. Remember to clean it, to standardize it, and to balance the sizes of your training classes.
