A Guide to Uncovering Powerful Data Stories with Python

Discover how you can craft powerful visualizations and data stories with Python by piggybacking on other popular data visualizations.

There are many emotional and powerful stories hidden in gobs of data just waiting to be found.

When these stories get told, they have the power to change careers, businesses, and whole groups of people.

Take Whirlpool, for example. The company discovered a socio-economic problem that its brand could help address.

They mined data to find a social cause to align with and discovered that every day 4,000 students drop out of school because they cannot afford to keep their clothes clean.

Whirlpool donated washers and dryers to the schools with the most at-risk children and tracked attendance.

The brand found that 90% of these students had improved attendance rates, and close to the same number had improved class participation. The campaign was so effective that it won a number of awards, including the Cannes Lions Grand Prix for Creative Data Collection and Research.

While big brands can afford to hire award-winning creative agencies that can produce campaigns like this one, for most small businesses, that is out of the question.

One way to get into the spotlight is to find powerful stories that remain undiscovered because of the gap between marketers and data scientists.

I previously introduced a simple framework for doing this, built around reframing already-popular visualizations. The opportunity to reframe exists because marketers and developers operate in silos.


As a marketer, when you hand off a data project to a developer, the first thing they do is remove the context.

The developer’s job is to generalize. But when you get their results back, you need to add the context back so you can personalize.

Without the user context, the developer is unable to ask the right questions that can lead to strong emotional connections.

In this article, I’m going to walk you through one example to show how you can come up with powerful visualizations and data stories by piggybacking on popular ones.

Here is our plan of action.

  • We are going to rebuild a popular data visualization from the subreddit Data is Beautiful.
  • We will collect data from public web pages (including some of it from moving charts).
  • We will reframe the visualization by asking different questions than the original author.

Our Reframed Visualization


This is what our reframed visualization looks like. It shows the best Disney rides ranked by how much fun they would be for different age groups.


A second chart compares the best Disney rides by how long they last and how long you need to wait in line.

Our Rebuilt Visualization


Our first step is to rebuild the original visualization shared in the subreddit. The data scientist shared the data sources he used, but not the code.

This gives us a great opportunity to learn how to scrape data and visualize it in Python.

I will share some code snippets as usual, but you can find all the code in this Google Colab notebook.

Extracting Our Source Data

The original visualization contains two datasets, one with the duration of the rides and another with their average wait time.

Let’s first collect the ride durations from this page https://touringplans.com/disneyland/attractions/duration.

We are going to complete these steps to extract the ride durations:

  1. Use Google Chrome to get an HTML DOM element selector for the ride durations.
  2. Use requests-html to extract the elements from the source page.
  3. Use a simple regular expression for duration numbers.


from requests_html import HTMLSession
import re

session = HTMLSession()
r = session.get('https://touringplans.com/disneyland/attractions/duration')

ride_durations = dict()

for tr in r.html.find("#center > table > tr"): # element selector copied from Chrome Developer Tools

  ride = tr.find("td:nth-child(1)")[0].text
  duration = tr.find("td:nth-child(2)")[0].text

  print(ride, duration)

  # parse duration: this regular expression extracts the actual duration number
  m = re.search(r"([^\s]+) minutes?", duration)

  if m is not None:
    print(m.groups(1))
    ride_durations[ride] = float(m.groups(1)[0])

#Example ride durations extracted
#Millennium Falcon FASTPASS Kiosk 0 minutes
#('0',)
#Buzz Lightyear Astro Blasters FASTPASS Kiosk 1 minutes
#('1',)
#Fantasmic! FASTPASS Kiosk 1 minutes
#('1',)
#Gadget's Go Coaster 1 minutes
#('1',)
#Haunted Mansion FASTPASS Kiosk 1 minutes
#('1',)
#it's a small world FASTPASS Kiosk 1 minutes
#('1',)
#Matterhorn Bobsleds FASTPASS Kiosk 1 minutes
#('1',)
#Starcade 1 minutes
#('1',)
#Astro Orbitor 1.5 minutes
#('1.5',)
#Mad Tea Party 1.5 minutes
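
If you want to sanity-check the duration pattern on its own, here is a tiny standalone snippet (not part of the original notebook) that shows what the regular expression captures:

import re

for sample in ["0 minutes", "1 minutes", "1.5 minutes"]:
    m = re.search(r"([^\s]+) minutes?", sample)
    print(sample, "->", float(m.group(1)))

# 0 minutes -> 0.0
# 1 minutes -> 1.0
# 1.5 minutes -> 1.5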

Next, we need to collect the average wait times from this page https://touringplans.com/disneyland/wait-times.


This is a more challenging extraction because the data we want is in the moving charts.

We are going to complete these steps to extract the average wait times:

  1. Use requests-html to extract the JavaScript snippets from the source page.
  2. Use regular expressions to extract the data rows from the JavaScript code, as well as the ride name/title of each chart.
  3. Use a Jinja2 template to stitch together a custom JavaScript function that returns the values we extracted in step 2.
  4. Use Py_mini_racer to execute the custom JavaScript function and get the data back in Python format.

script_sel = "#new_gchart_slideshow > div.forecast-viewport > ul > li > script"

all_rides = r.html.find(script_sel)

def extract_dates(data):
  start_row = r"dateData.addRows\("
  end_row = r"\);"

  columns = []
  title = None

  results = re.search(start_row + "([^;]+)" + end_row, data)

  if results is not None:
    columns = results.group(1)

  start_title = 'title: "'
  end_title = '",'

  results = re.search(start_title + '([^"]+)' + end_title, data)

  if results is not None:
    title = results.group(1)

  return (title, columns)

#all_rides raw extracted text looks like this
print(all_rides[0].text)

#'$(function() { google.load("visualization", "1", {packages:["corechart"], callback: drawChart137}); }); function drawChart137() { // Create and populate the data table. var dateData = new google.visualization.DataTable(); dateData.addColumn(\'datetime\', \'time\'); //dateData.addColumn(\'number\', \'null\'); dateData.addColumn(\'number\', \'Wait Times the Crowd Calendar Predicted\'); dateData.addRows([[new Date( 2019,6,15,08,00,00 ), 4],[new Date( 2019,6,15,08,15,00 ), 4],[new Date( 2019,6,15,08,30,00 ), 4],[new Date( 2019,6,15,08,45,00 ), 4],[new Date( 2019,6,15,09,00,00 ), 12],[new Date( 2019,6,15,09,15,00 ), 6],[new Date( 2019,6,15,09,30,00 ), 7],[new Date( 2019,6,15,09,45,00 ), 14],[new Date( 2019,6,15,10,00,00 ), 16],[new Date( 2019,6,15,10,15,00 ), 16],[new Date( 2019,6,15,10,30,00 ), 16],[new Date( 2019,6,15,10,45,00 ), 16],[new Date( 2019,6,15,11,00,00 ), 16],[new Date( 2019,6,15,11,15,00 ), 23],[new Date( 2019,6,15,11,30,00 ), 21],[new Date( 2019,6,15,11,45,00 ), 16],[new Date( 2019,6,15,12,00,00 ), 26],[new Date( 2019,6,15,12,15,00 ), 26],[new Date( 2019,6,15,12,30,00 ), 26],[new Date( 2019,6,15,12,45,00 ), 25],[new Date( 2019,6,15,13,00,00 ), 25],[new Date( 2019,6,15,14,30,00 ), 18],[new Date( 2019,6,15,14,45,00 ), 23],[new Date( 2019,6,15,15,00,00 ), 25],[new Date( 2019,6,15,15,15,00 ), 20],[new Date( 2019,6,15,15,30,00 ), 18],[new Date( 2019,6,15,15,45,00 ), 16],[new Date( 2019,6,15,16,00,00 ), 16],[new Date( 2019,6,15,16,15,00 ), 18],[new Date( 2019,6,15,16,30,00 ), 18],[new Date( 2019,6,15,16,45,00 ), 18],[new Date( 2019,6,15,17,00,00 ), 18],[new Date( 2019,6,15,17,15,00 ), 19],[new Date( 2019,6,15,17,30,00 ), 19],[new Date( 2019,6,15,17,45,00 ), 19],[new Date( 2019,6,15,18,00,00 ), 19],[new Date( 2019,6,15,18,15,00 ), 20],[new Date( 2019,6,15,18,30,00 ), 20],[new Date( 2019,6,15,18,45,00 ), 24],[new Date( 2019,6,15,19,00,00 ), 24],[new Date( 2019,6,15,19,15,00 ), 25],[new Date( 2019,6,15,19,30,00 ), 21],[new Date( 2019,6,15,19,45,00 ), 21],[new Date( 2019,6,15,20,00,00 ), 21],[new Date( 2019,6,15,20,15,00 ), 21],[new Date( 2019,6,15,20,30,00 ), 21],[new Date( 2019,6,15,20,45,00 ), 21],[new Date( 2019,6,15,21,00,00 ), 21],[new Date( 2019,6,15,21,15,00 ), 21],[new Date( 2019,6,15,21,30,00 ), 21],[new Date( 2019,6,15,21,45,00 ), 21],[new Date( 2019,6,15,22,00,00 ), 21],[new Date( 2019,6,15,22,15,00 ), 25],[new Date( 2019,6,15,22,30,00 ), 12],[new Date( 2019,6,15,22,45,00 ), 12],[new Date( 2019,6,15,23,00,00 ), 12],[new Date( 2019,6,15,23,15,00 ), 12],[new Date( 2019,6,15,23,30,00 ), 12],[new Date( 2019,6,15,23,45,00 ), 12]]); var userOptions = { title: "Alice in Wonderland - 7/15/19", series: [ // CC Predicted forecasts {color: \'blue\', pointSize: 0, visibleInLegend: true, lineWidth:2 }, ], width: $(\'div#center.column\').width(), height: 320, chartArea:{left:30,top:50,width:"80%",height:"70%"}, legend: {position: \'top\', textStyle: {color: \'black\', fontSize: 11}}, fontSize:10, hAxis: { slantedText: true, slantedTextAngle: 45, viewWindowMode:\'pretty\', format: \'h aa\', maxValue: new Date(2019,6,16,00,00,00) }, vAxis: {format: \'0\', title: \'minutes\', maxValue: 31} // Nothing specified for axis 0 }; var userChart = new google.visualization.AreaChart(document.getElementById(\'google_chart_137\')); userChart.draw(dateData, userOptions); } $("#new_gchart_slideshow").width( $("div#center.column").width() );'

#After we process it, we get a cleaner dataset
print(extract_dates(all_rides[0].text))

#('Alice in Wonderland - 7/15/19',
#'[[new Date( 2019,6,15,08,00,00 ), 4],[new Date( 2019,6,15,08,15,00 ), 4],[new Date( 2019,6,15,08,30,00 ), 4],[new Date( 2019,6,15,08,45,00 ), 4],[new Date( 2019,6,15,09,00,00 ), 12],[new Date( 2019,6,15,09,15,00 ), 6],[new Date( 2019,6,15,09,30,00 ), 7],[new Date( 2019,6,15,09,45,00 ), 14],[new Date( 2019,6,15,10,00,00 ), 16],[new Date( 2019,6,15,10,15,00 ), 16],[new Date( 2019,6,15,10,30,00 ), 16],[new Date( 2019,6,15,10,45,00 ), 16],[new Date( 2019,6,15,11,00,00 ), 16],[new Date( 2019,6,15,11,15,00 ), 23],[new Date( 2019,6,15,11,30,00 ), 21],[new Date( 2019,6,15,11,45,00 ), 16],[new Date( 2019,6,15,12,00,00 ), 26],[new Date( 2019,6,15,12,15,00 ), 26],[new Date( 2019,6,15,12,30,00 ), 26],[new Date( 2019,6,15,12,45,00 ), 25],[new Date( 2019,6,15,13,00,00 ), 25],[new Date( 2019,6,15,14,30,00 ), 18],[new Date( 2019,6,15,14,45,00 ), 23],[new Date( 2019,6,15,15,00,00 ), 25],[new Date( 2019,6,15,15,15,00 ), 20],[new Date( 2019,6,15,15,30,00 ), 18],[new Date( 2019,6,15,15,45,00 ), 16],[new Date( 2019,6,15,16,00,00 ), 16],[new Date( 2019,6,15,16,15,00 ), 18],[new Date( 2019,6,15,16,30,00 ), 18],[new Date( 2019,6,15,16,45,00 ), 18],[new Date( 2019,6,15,17,00,00 ), 18],[new Date( 2019,6,15,17,15,00 ), 19],[new Date( 2019,6,15,17,30,00 ), 19],[new Date( 2019,6,15,17,45,00 ), 19],[new Date( 2019,6,15,18,00,00 ), 19],[new Date( 2019,6,15,18,15,00 ), 20],[new Date( 2019,6,15,18,30,00 ), 20],[new Date( 2019,6,15,18,45,00 ), 24],[new Date( 2019,6,15,19,00,00 ), 24],[new Date( 2019,6,15,19,15,00 ), 25],[new Date( 2019,6,15,19,30,00 ), 21],[new Date( 2019,6,15,19,45,00 ), 21],[new Date( 2019,6,15,20,00,00 ), 21],[new Date( 2019,6,15,20,15,00 ), 21],[new Date( 2019,6,15,20,30,00 ), 21],[new Date( 2019,6,15,20,45,00 ), 21],[new Date( 2019,6,15,21,00,00 ), 21],[new Date( 2019,6,15,21,15,00 ), 21],[new Date( 2019,6,15,21,30,00 ), 21],[new Date( 2019,6,15,21,45,00 ), 21],[new Date( 2019,6,15,22,00,00 ), 21],[new Date( 2019,6,15,22,15,00 ), 25],[new Date( 2019,6,15,22,30,00 ), 12],[new Date( 2019,6,15,22,45,00 ), 12],[new Date( 2019,6,15,23,00,00 ), 12],[new Date( 2019,6,15,23,15,00 ), 12],[new Date( 2019,6,15,23,30,00 ), 12],[new Date( 2019,6,15,23,45,00 ), 12]]')

In order to convert the JavaScript data embedded in the charts to Python, we are going to perform a clever trick.

We are going to stitch together JavaScript functions using fragments of the code we are scraping.

We will use delimiters to define which fragments to extract and a Jinja2 template to weave them into a JavaScript function that runs correctly. The function will return the chart title and its data rows.

We will execute these functions using a lesser-known library called Py_mini_racer. That library runs JavaScript code from Python, returning Python objects that we can use.

I tried to use the PyV8 engine from Google, but couldn’t get it to work. It seems the project has been abandoned.

from jinja2 import Template
from py_mini_racer import py_mini_racer

# title and columns are still JavaScript fragments at this point
title, columns = extract_dates(all_rides[0].text)

js_template = """
function drawChart137() {

  var columns = {{columns}};
  var title = "{{title}}";
  return [title, columns];
}
"""

template = Template(js_template)

# generate JavaScript dynamically and execute it
js_script = template.render(title=title, columns=columns)

ctx = py_mini_racer.MiniRacer()
ctx.eval(js_script)

# title and columns are now Python objects
title, columns = ctx.call("drawChart137")

def build_dict(script_list):

  wait_times = dict()
  ctx = py_mini_racer.MiniRacer()

  for script in script_list:
    #title and columns are in JavaScript format
    title, columns = extract_dates(script.text)

    print(title)

    js_template="""
      function drawChart137() {
        var columns = {{columns}}; 
        var title = "{{title}}";
        return [title, columns];
      }
       """

    template=Template(js_template)

    #generate JavaScript dynamically and execute it
    js_script=template.render(title=title, columns = columns)


    ctx.eval(js_script)

    #title and columns are Python objects
    title, columns = ctx.call("drawChart137")

    #build up wait times dictionary for each ride
    wait_times[title] = columns

  return wait_times

wait_times_by_ride = build_dict(all_rides)

Now, we have the two datasets we need to produce our chart, but there is some processing we need to do first.

Processing Our Source Data

We need to combine the datasets we scraped, clean them up, and calculate averages.

We are going to complete these steps:

  1. Split the extracted dataset into two Python dictionaries: one with the timestamps and one with the wait times per ride.
  2. Filter out rides with fewer than 64 data points so every ride has the same number of data rows.
  3. Calculate the average wait time per ride (steps 1-3 are sketched below).
  4. Combine the average wait time per ride and the ride duration into one data frame.
  5. Eliminate rows with empty columns.
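
The snippets below start from a data frame called df with one column per ride and one row per time slot. The first three steps aren’t shown explicitly in the article, so here is a minimal sketch of how you could build df from the wait_times_by_ride dictionary we extracted earlier (the exact code is in the Colab notebook; the intermediate names here are just illustrative):

import pandas as pd

# Sketch only: assumes each value in wait_times_by_ride is a list of
# [timestamp, wait-in-minutes] pairs, keyed by a "Ride Name - date" chart title.
per_ride_waits = {}

for title, columns in wait_times_by_ride.items():
    ride = title.rsplit(" - ", 1)[0]                # drop the date suffix from the chart title
    waits = [wait for _timestamp, wait in columns]  # keep only the wait-time values

    if len(waits) >= 64:                            # filter rides with fewer than 64 data points
        per_ride_waits[ride] = waits[:64]           # keep the same number of rows per ride

df = pd.DataFrame(per_ride_waits)                   # one column per ride, one row per 15-minute slot

With df in that shape, the averaging and merging steps look like this: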
average_wait=pd.Series(df.mean(), name="Average Wait Time")
print(average_wait)

#example output
#Astro Orbitor                                     12.453125
#Buzz Lightyear Astro Blasters                     17.953125

duration = pd.Series(ride_durations, name="Ride Duration")
print(duration)

#example output
#Millennium Falcon FASTPASS Kiosk                                   0.00
#Buzz Lightyear Astro Blasters FASTPASS Kiosk                       1.00

pd.concat([average_wait, duration], axis=1)

#example output
#                                       Average Wait  Time    Ride Duration
# "it's a small world" Holiday Lighting    NaN                    5.00
# A Christmas Fantasy Parade              NaN                    27.00
# Alice in Wonderland                      NaN                    4.00
# Astro Orbitor                            12.453125              1.50

#We need to remove rows with null values
ride_by_wait_time_duration = pd.concat([average_wait, duration], axis=1).dropna()
print(ride_by_wait_time_duration)

Here is what the final data frame looks like.

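One detail worth noting: the plotting code below reuses the name df for this combined frame and refers to a Ride column, while in ride_by_wait_time_duration the ride names live in the index. A one-liner like this (my assumption; the exact step is in the Colab notebook) gets the frame into that shape:

# Turn the ride-name index into a regular "Ride" column for plotting
df = ride_by_wait_time_duration.reset_index().rename(columns={"index": "Ride"})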

Visualizing Our Data

We are almost at the finish line. In this step, we get to do the fun part: visualizing the data frame we created.

We are going to complete these steps:

  1. Convert the pandas data frame to a list of row-oriented dictionaries. The X-axis is the Average Wait Time, the Y-axis is the Ride Duration, and the label is the Ride name.
  2. Use Plotly to generate a labeled scatter plot.

#!pip install plotly-express

import pandas as pd
from urllib.parse import urlparse
from collections import Counter
import plotly.express as px
import plotly
import plotly.graph_objects as go

fig = px.scatter(df, x="Average Wait Time", y="Ride Duration", color="Ride", symbol="Ride", height=900, width=1200)

annotations = []


show_arrows = ["Disneyland Railroad Tomorrowland Station", 
               "Disneyland Railroad New Orleans Square Station", 
               "Disneyland Railroad Main Street Station",
               "Mr. Toad's Wild Ride"]


for row in df.to_dict(orient="records"):  # "records" = one dictionary per row (the old "rows" alias no longer works in newer pandas)
    if row['Ride'] in show_arrows:

        showarrow = True
    else:
        showarrow = False

    annotations.append(go.layout.Annotation(
        x=row['Average Wait Time'],
        y=row['Ride Duration'],
        text=row['Ride'],
        showarrow=showarrow
    ))


fig.update_layout(
    showlegend=True,
    annotations=annotations
)

fig.update_layout(legend_orientation="h")
fig.update_traces(textposition='top right')

fig.show(config={'editable': True})
plotly.offline.plot(fig, filename='ridetimes.html')

Because we passed config={'editable': True}, you can manually drag the labels around to make them more legible.


We finally have a visualization that closely resembles the original one we found on Reddit.

In our final step, we will produce an original visualization built from the same data we collected for this one.

Reframing Our Data

Rebuilding the original visualization took serious work, yet we haven’t produced anything new. We will address that in this final section.

The original visualization lacked an emotional hook. What if the rides are not fun for me?

We will pull an additional dataset: the ratings per ride by different age groups. This will help us visualize not just the best rides with the shortest wait times, but also which ones would be most fun for a particular age group.

We are going to complete these steps to reframe the original visualization:

  1. We want to know which age groups will have the most fun per ride.
  2. We will fetch the average ride ratings per age group from https://touringplans.com/disneyland/attractions.
  3. We will calculate an “Enjoyment Score” per ride and age group, which is the number of minutes per ride divided by average minutes of wait time.
  4. We will use Plotly to display a bar chart with the results.


This is the page with our extra data.

# Get Disney Attractiveness Ratings
import requests
from bs4 import BeautifulSoup

r = requests.get("https://touringplans.com/disneyland/attractions")
soup = BeautifulSoup(r.text, "html.parser")

rows = []
table = soup.find("table")
for idx, tr in enumerate(table.findAll("tr")):
    if idx % 2 == 0:
        tds = tr.findAll("td")
        # the first cell holds the ride name inside a link; the next six hold the age-group ratings
        row = [tds[0].find("a").text.strip()]
        row += [td.text.strip() for td in tds[1:7]]
        rows.append(row)

appeal = pd.DataFrame(rows, columns=["Ride", "Pre-K", "Grade School", "Teens", "Young Adults", "Over 30", "Seniors"])
for column in appeal.columns[1:]:
    appeal[column] = pd.to_numeric(appeal[column])

rides = df.merge(appeal, on="Ride", how="left").sort_values("Seniors")

We scrape it just like we pulled the ride durations.

# Data taken from here: https://touringplans.com/disneyland/attractions
# Ties in attraction ratings were resolved by selecting the age group closest to the next-highest score

df['Age Group'] = ["Pre-K", "Grade School", "Seniors", 
                   "Pre-K", "Seniors", "Seniors", 
                   "Pre-K", "Grade School", "Over 30",
                   "Seniors", "Pre-K", "Pre-K", 
                   "Teens", "Teens", "Seniors", 
                   "Young Adults", "Young Adults", "Teens",
                   "Pre-K", "Pre-K"]

df['Enjoyment Score'] = df['Ride Duration'] / df['Average Wait Time']  # minutes of ride per minute of waiting
df.sort_values("Enjoyment Score", ascending=False)

Let’s summarize the original data frame using a new metric: an Enjoyment Score. 🙂

We define it as the ride duration divided by the average wait time: the bigger the number, the more fun we should have for each minute spent in line. For example, Astro Orbitor lasts 1.5 minutes with an average wait of about 12.5 minutes, so its Enjoyment Score is roughly 0.12.

This is what the updated data frame looks like with our new Enjoyment Score metric.


Now, let’s visualize it.

data = []
ages = ["Pre-K", "Grade School", "Teens", "Young Adults", "Over 30", "Seniors"]

top_5 = []
for i in range(5):
    scores = []
    for age in ages:
        scores.append(tuple(rides.sort_values(age, ascending=False)[[age, "Ride"]].iloc[i].values))
    top_5.append(scores)

for scores in top_5:
#     bar = go.Bar(showlegend=False, text=["{}: {}".format(x[1], x[0]) for x in scores], textposition="auto", x=ages, y=[x[0] for x in scores])
    bar = go.Bar(showlegend=False, text=[x[1] for x in scores], textposition="auto", x=ages, y=[x[0] for x in scores])

    data.append(bar)


fig = go.Figure(data=data)
# Change the bar mode
fig.update_layout(barmode='group', width=1400)
fig.show()

Finally, we get this beautiful and super valuable visualization.


Resources & Community Projects

Last January, I received an email that kickstarted my “Python crusade”. Braintree had rejected RankSense’s application for a merchant account because they saw SEO as a high-risk category.

Right next to fortune tellers, mail-order brides and “get rich quick” schemes!

We had worked on the integration for three weeks. I felt really mad and embarrassed.

I had been enjoying my time in the data science and AI community last year. I was learning a lot of cool stuff and having fun.

I’ve been in the SEO space for probably too long. Sadly, my generation made the big mistake of letting speculation and magic tricks rule the perception of what SEO is about.

As a result of this, too many businesses have fallen prey to charlatans.

I had the choice to leave the SEO community or try to encourage the new generation to drive change so our community could be a fun and proud place to be.

I decided to stay, but I was afraid that trying to drive change by myself with minimal social presence would be impossible.

Fortunately, I watched this powerful video, wrote this sort of manifesto, and put my head down to write practical Python articles every month.

I’m excited to see that in less than six months, Python is everywhere in the SEO community and the momentum keeps growing.

I’m really excited about our community and the brilliant future ahead.

Now, let me continue to highlight the awesome projects our community churns out each month. It’s so exciting to see more people joining the Python bandwagon. 🐍 🔥

Tyler shared a project to auto-generate meta descriptions using a Text Rank summarizer.

I have a colab notebook for auto-generating meta descriptions with TextRank!

— Tyler Reardon (@TylerReardon) September 20, 2019

Hugo shared his first script that automates exporting SEMrush reports.

My first ever Python script : https://t.co/gyHbhRVtkI

It allows you to automate Semrush organic reports exports. I'm open to any feedbacks and suggestions

— Hugo Akh (@hugodeuxfois) September 21, 2019

Jeffrey is working on an AI tool to help beat writer’s block and open-sourced his Python backend.

I've been working on a writing tool that uses AI to help beat writer's block! https://t.co/Y4fItfoLgS (Python backend open sourced at https://t.co/Bk8pG5aHEc)

Thank you! I find marketing pretty awkward/hard for me.

— Jeffrey Shek (@shekkery) September 20, 2019

Charly is working on a URL translator and classifier.

Hey @hamletbatista! I'm working on a URL translator/classifier built with #Python & interacting with @googlesheets. It's not 100% yet, but our chat @brightonseo gave me confidence that I should stop worrying about the code being perfect and start sharing it!😁

— Charly Wargnier 🇪🇺 (@DataChaz) September 20, 2019
