This is the second part of a two-part series. I suggest you read part 1 first for better context.

In the first part of the series, we covered text preprocessing with NLTK and some manual steps, defined our model architecture, and trained and evaluated a model that performed well enough on our dataset to be deployed.

Content

  • Deployment Environment Setup
  • Project File Structure
  • Scripting
  • Demo with Streamlit

Deployment Environment Setup

Our next step is to reproduce the essential processes in production so that we can get consistent, expected outputs on new text inputs. We’ll start by converting the notebook into scripts and modules in a separate project environment, with the necessary versions of libraries and frameworks installed. A minimal sketch of what that refactoring might look like is shown below.
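To make this concrete, here is one way the notebook’s prediction logic could be pulled into an importable module. The file layout, artifact paths, and the use of a Keras Tokenizer and a scikit-learn LabelEncoder are illustrative assumptions for this sketch, not details taken from part 1:

# chatbot/inference.py — illustrative refactor of the notebook's
# prediction cells into an importable module (names and paths are
# placeholders; adapt them to your own project).
import pickle

import numpy as np
from tensorflow import keras

# Load the artifacts saved during training.
model = keras.models.load_model("models/chatbot_model.h5")
with open("models/tokenizer.pkl", "rb") as f:
    tokenizer = pickle.load(f)
with open("models/label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)

MAX_LEN = 20  # must match the padding length used at training time

def predict_intent(text: str) -> str:
    """Return the predicted intent tag for a raw user message."""
    seq = tokenizer.texts_to_sequences([text])
    padded = keras.preprocessing.sequence.pad_sequences(
        seq, maxlen=MAX_LEN, truncating="post"
    )
    probs = model.predict(padded)
    return label_encoder.inverse_transform([np.argmax(probs)])[0]

Packaging the logic this way means both the scripts and the Streamlit demo later in this series can simply import predict_intent instead of duplicating notebook cells.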

As part of the setup required for writing production code, it’s good standard practice to create a separate virtual environment for each project in order to avoid dependency issues. After navigating to the project’s root folder, we can create and activate one from the command line with the commands below:

# Create a Python virtual environment
python -m venv name_of_your_virtual_environment
# Activate the environment (Windows)
name_of_your_virtual_environment\Scripts\activate
# On macOS/Linux, activate with:
# source name_of_your_virtual_environment/bin/activate

For a brief walkthrough of how to set up a Python virtual environment, you can refer to this blog post:

Deployment of Machine Learning Models Demystified (Part 2): Loan acceptance status prediction with risk-free loanable amount (medium.com)

The project’s requirements.txt file pins the libraries to the versions used for this project. They can all be installed into the virtual environment created earlier with the command pip install -r requirements.txt
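The original file isn’t reproduced here, but an illustrative requirements.txt might look like the following. The version pins are placeholders, so replace them with the versions your model was actually trained with:

# requirements.txt — illustrative only; pin your own tested versions
nltk==3.8.1
tensorflow==2.15.0
numpy==1.26.4
streamlit==1.32.0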
