AutoTextSize: Make Text Fit Container, Prevent Overflow and Underflow

AutoTextSize

Make text fit container, prevent overflow and underflow.

The font size of the text is adjusted so that it precisely fills its container. It uses computed width and height so it works for all types of fonts and automatically re-runs when the element resizes.

Live demo.

Single-line mode

The text fills the width of the container, without wrapping to more than one line.

Multi-line mode

The text fills both the width and the height of the container, allowing wrapping to multiple lines.

React component

The AutoTextSize component automatically re-runs when children changes and when the element resizes.

import { AutoTextSize } from 'auto-text-size'

export const Title = ({ text }) => {
  return (
    <div style={{ maxWidth: '60%', margin: '0 auto' }}>
      <AutoTextSize>{text}</AutoTextSize>
    </div>
  )
}

AutoTextSize props

Name                 Type                     Default  Description
multiline            boolean                  false    Allow text to wrap to multiple lines.
minFontSizePx        number                   8        The minimum font size to be used.
maxFontSizePx        number                   160      The maximum font size to be used.
fontSizePrecisionPx  number                   0.1      The algorithm stops when reaching the precision.
as                   string | ReactComponent  'div'    The underlying component that AutoTextSize will use.
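
For example, a multi-line title capped at 100px could look like this (the prop values are illustrative):

<AutoTextSize multiline minFontSizePx={12} maxFontSizePx={100}>
  {text}
</AutoTextSize>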

Vanilla function

Zero dependencies.

import { autoTextSize } from 'auto-text-size'

// autoTextSize runs the returned function directly and
// re-runs it when the container element resizes.
const updateTextSize = autoTextSize(options)

// All invocations are throttled for performance. Manually
// call this if the content changes and needs to re-adjust.
updateTextSize()

// Disconnect the resize observer when done.
updateTextSize.disconnect()

One-off:

import { updateTextSize } from 'auto-text-size'

updateTextSize(options)

autoTextSize options

Name                 Type         Default     Description
innerEl              HTMLElement  (required)  The inner element to be auto sized.
containerEl          HTMLElement  (required)  The container element defines the dimensions.
multiline            boolean      false       Allow text to wrap to multiple lines.
minFontSizePx        number       8           The minimum font size to be used.
maxFontSizePx        number       160         The maximum font size to be used.
fontSizePrecisionPx  number       0.1         The algorithm stops when reaching the precision.
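
Putting the options together, a typical setup might look like this (the element selectors are illustrative):

import { autoTextSize } from 'auto-text-size'

const containerEl = document.querySelector('.container')
const innerEl = containerEl.firstElementChild

// Auto-size the inner text and keep it in sync with container resizes.
const updateTextSize = autoTextSize({
  innerEl,
  containerEl,
  multiline: true,
  minFontSizePx: 8,
  maxFontSizePx: 160,
})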

Details

  • The single-line algorithm predicts how the browser will render text in a different font size and iterates until converging within fontSizePrecisionPx (usually 1-2 iterations; see the sketch after this list).
  • The multi-line algorithm performs a binary search among the possible font sizes until converging within fontSizePrecisionPx (usually ~10 iterations). There is no reliable way of predicting how the browser will render text in a different font size when multi-line text wrap is at play.
  • Performance. Each iteration has a performance hit since it triggers a layout reflow. Several measures are taken to minimize the performance impact: as few iterations as possible are executed, throttling is performed using requestAnimationFrame, and ResizeObserver is used to recompute the text size only when needed.
  • No overflow. After converging, the algorithm runs a second loop to ensure that no overflow occurs. Underflow is preferred since it doesn't look visually broken the way overflow does. Some browsers (e.g. Safari) are not good with sub-pixel font sizing, which means significant visual overflow can occur unless we adjust for it.
  • Font size is used rather than the scale() CSS function since it is simple and works very well. The scale() function wouldn't support multi-line text wrap, and it tends to make text blurry in some browsers.
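
The single-line idea can be sketched as follows (illustrative only, not the library's actual code):

// One prediction step: rendered text width scales roughly linearly
// with font size, so a single ratio predicts the next candidate size.
function singleLineIteration(innerEl, containerEl, fontSizePx) {
  const ratio = containerEl.clientWidth / innerEl.scrollWidth
  return fontSizePx * ratio
}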

Developing

When developing, one typically wants to see the output in the example application without having to publish and reinstall. This is achieved by linking the local package into the example app.

Because of issues with yarn link, Yalc is used instead. A linking approach is preferred over yarn workspaces since we want to use the package as it would appear in the real world.

npm i yalc -g
yarn
yarn watch

# Other terminal
cd example
yarn
yalc link auto-text-size
yarn dev

Yalc and HMR

Using yalc link (or yalc add --link) makes it so that Next.js HMR detects updates instantly.

Publishing

# Update version number
yarn clean && yarn build
npm publish

Author: sanalabs
Source code: https://github.com/sanalabs/auto-text-size
License: MIT license
#react #javascript #typescript
 


Navigating Between DOM Nodes in JavaScript

In the previous chapters you've learnt how to select individual elements on a web page. But there are many occasions where you need to access a child, parent or ancestor element. See the JavaScript DOM nodes chapter to understand the logical relationships between the nodes in a DOM tree.

DOM node provides several properties and methods that allow you to navigate or traverse through the tree structure of the DOM and make changes very easily. In the following section we will learn how to navigate up, down, and sideways in the DOM tree using JavaScript.

Accessing the Child Nodes

You can use the firstChild and lastChild properties of the DOM node to access the first and last direct child node of a node, respectively. If the node doesn't have any child element, it returns null.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");
console.log(main.firstChild.nodeName); // Prints: #text

var hint = document.getElementById("hint");
console.log(hint.firstChild.nodeName); // Prints: SPAN
</script>

Note: The nodeName is a read-only property that returns the name of the current node as a string. For example, it returns the tag name for an element node, #text for a text node, #comment for a comment node, #document for a document node, and so on.

If you look closely at the above example, the nodeName of the first child node of the main DIV element returns #text instead of H1. That's because whitespace characters such as spaces, tabs, and newlines are valid characters; they form #text nodes and become part of the DOM tree. Since the <div> tag contains a newline before the <h1> tag, that newline creates a #text node.

To avoid the issue of firstChild and lastChild returning #text or #comment nodes, you can alternatively use the firstElementChild and lastElementChild properties, which return only the first and last element node, respectively. However, they will not work in IE 9 and earlier.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");
alert(main.firstElementChild.nodeName); // Outputs: H1
main.firstElementChild.style.color = "red";

var hint = document.getElementById("hint");
alert(hint.firstElementChild.nodeName); // Outputs: SPAN
hint.firstElementChild.style.color = "blue";
</script>

Similarly, you can use the childNodes property to access all child nodes of a given element, where the first child node is assigned index 0. Here's an example:

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");

// First check that the element has child nodes 
if(main.hasChildNodes()) {
    var nodes = main.childNodes;
    
    // Loop through node list and display node name
    for(var i = 0; i < nodes.length; i++) {
        alert(nodes[i].nodeName);
    }
}
</script>

The childNodes property returns all child nodes, including non-element nodes like text and comment nodes. To get a collection of only element nodes, use the children property instead.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");

// First check that the element has child nodes 
if(main.hasChildNodes()) {
    var nodes = main.children;
    
    // Loop through node list and display node name
    for(var i = 0; i < nodes.length; i++) {
        alert(nodes[i].nodeName);
    }
}
</script>
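
You can navigate upward and sideways in a similar fashion using the standard parentNode, previousElementSibling, and nextElementSibling properties. Here's a brief example:

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var title = document.getElementById("title");
console.log(title.parentNode.nodeName); // Prints: DIV
console.log(title.nextElementSibling.nodeName); // Prints: P
</script>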

#javascript 


Matplotlib Cheat Sheet: Plotting in Python

This Matplotlib cheat sheet introduces you to the basics that you need to plot your data with Python and includes code samples.

Data visualization and storytelling with your data are essential skills that every data scientist needs to communicate insights gained from analyses effectively to any audience out there. 

For most beginners, the first package that they use to get in touch with data visualization and storytelling is, naturally, Matplotlib: it is a Python 2D plotting library that enables users to make publication-quality figures. But, what might be even more convincing is the fact that other packages, such as Pandas, intend to build more plotting integration with Matplotlib as time goes on.

However, what might slow down beginners is the fact that this package is pretty extensive. There is so much that you can do with it and it might be hard to still keep a structure when you're learning how to work with Matplotlib.   

DataCamp has created a Matplotlib cheat sheet for those who might already know how to use the package to their advantage to make beautiful plots in Python, but who still want to keep a one-page reference handy. Of course, for those who don't know how to work with Matplotlib, this might be the extra push they need to be convinced and to finally get started with data visualization in Python.

You'll see that this cheat sheet presents you with the six basic steps that you can go through to make beautiful plots. 

Check out the infographic by clicking on the button below:

Python Matplotlib cheat sheet

With this handy reference, you'll familiarize yourself in no time with the basics of Matplotlib: you'll learn how you can prepare your data, create a new plot, use some basic plotting routines to your advantage, add customizations to your plots, and save, show and close the plots that you make.

What might have looked difficult before will definitely be more clear once you start using this cheat sheet! Use it in combination with the Matplotlib Gallery and the documentation.

Matplotlib 

Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.

Prepare the Data 

1D Data 

>>> import numpy as np
>>> x = np.linspace(0, 10, 100)
>>> y = np.cos(x)
>>> z = np.sin(x)

2D Data or Images 

>>> data = 2 * np.random.random((10, 10))
>>> data2 = 3 * np.random.random((10, 10))
>>> Y, X = np.mgrid[-3:3:100j, -3:3:100j]
>>> U = -1 - X**2 + Y
>>> V = 1 + X - Y**2
>>> from matplotlib.cbook import get_sample_data
>>> img = np.load(get_sample_data('axes_grid/bivariate_normal.npy'))

Create Plot

>>> import matplotlib.pyplot as plt

Figure 

>>> fig = plt.figure()
>>> fig2 = plt.figure(figsize=plt.figaspect(2.0))

Axes 

>>> fig.add_axes()
>>> ax1 = fig.add_subplot(221) #row-col-num
>>> ax3 = fig.add_subplot(212)
>>> fig3, axes = plt.subplots(nrows=2,ncols=2)
>>> fig4, axes2 = plt.subplots(ncols=3)

Save Plot 

>>> plt.savefig('foo.png') #Save figures
>>> plt.savefig('foo.png',  transparent=True) #Save transparent figures

Show Plot

>>> plt.show()

Plotting Routines 

1D Data 

>>> fig, ax = plt.subplots()
>>> lines = ax.plot(x,y) #Draw points with lines or markers connecting them
>>> ax.scatter(x,y) #Draw unconnected points, scaled or colored
>>> axes[0,0].bar([1,2,3],[3,4,5]) #Plot vertical rectangles (constant width)
>>> axes[1,0].barh([0.5,1,2.5],[0,1,2]) #Plot horizontal rectangles (constant height)
>>> axes[1,1].axhline(0.45) #Draw a horizontal line across axes
>>> axes[0,1].axvline(0.65) #Draw a vertical line across axes
>>> ax.fill(x,y,color='blue') #Draw filled polygons
>>> ax.fill_between(x,y,color='yellow') #Fill between y values and 0

2D Data 

>>> fig, ax = plt.subplots()
>>> im = ax.imshow(img, #Colormapped or RGB arrays
      cmap='gist_earth',
      interpolation='nearest',
      vmin=-2,
      vmax=2)
>>> axes2[0].pcolor(data2) #Pseudocolor plot of 2D array
>>> axes2[0].pcolormesh(data) #Pseudocolor plot of 2D array
>>> CS = plt.contour(Y,X,U) #Plot contours
>>> axes2[2].contourf(data2) #Plot filled contours
>>> ax.clabel(CS) #Label a contour plot

Vector Fields 

>>> axes[0,1].arrow(0,0,0.5,0.5) #Add an arrow to the axes
>>> axes[1,1].quiver(y,z) #Plot a 2D field of arrows
>>> axes[0,1].streamplot(X,Y,U,V) #Plot a 2D field of arrows

Data Distributions 

>>> ax1.hist(y) #Plot a histogram
>>> ax3.boxplot(y) #Make a box and whisker plot
>>> ax3.violinplot(z)  #Make a violin plot

Plot Anatomy & Workflow 

Plot Anatomy

[Figure: anatomy of a Matplotlib plot, showing the figure, the axes, and the x- and y-axis]

Workflow 

The basic steps to creating plots with matplotlib are:

1. Prepare Data
2. Create Plot
3. Plot
4. Customize Plot
5. Save Plot
6. Show Plot

>>> import matplotlib.pyplot as plt
>>> x = [1,2,3,4] #Step 1
>>> y = [10,20,25,30]
>>> fig = plt.figure() #Step 2
>>> ax = fig.add_subplot(111) #Step 3
>>> ax.plot(x, y, color='lightblue', linewidth=3) #Step 3, 4
>>> ax.scatter([2,4,6],
          [5,15,25],
          color='darkgreen',
          marker='^')
>>> ax.set_xlim(1, 6.5)
>>> plt.savefig('foo.png') #Step 5
>>> plt.show() #Step 6

Close and Clear 

>>> plt.cla() #Clear an axis
>>> plt.clf() #Clear the entire figure
>>> plt.close() #Close a window

Customize Plot

Colors, Color Bars & Color Maps 

>>> plt.plot(x, x, x, x**2, x, x**3)
>>> ax.plot(x, y, alpha=0.4)
>>> ax.plot(x, y, c='k')
>>> fig.colorbar(im, orientation='horizontal')
>>> im = ax.imshow(img,
            cmap='seismic')

Markers 

>>> fig, ax = plt.subplots()
>>> ax.scatter(x,y,marker= ".")
>>> ax.plot(x,y,marker= "o")

Linestyles 

>>> plt.plot(x,y,linewidth=4.0)
>>> plt.plot(x,y,ls='solid')
>>> plt.plot(x,y,ls='--')
>>> plt.plot(x,y,'--',x**2,y**2,'-.')
>>> plt.setp(lines,color='r',linewidth=4.0)

Text & Annotations 

>>> ax.text(1,
            -2.1,
            'Example Graph',
            style='italic')
>>> ax.annotate("Sine",
            xy=(8, 0),
            xycoords='data',
            xytext=(10.5, 0),
            textcoords='data',
            arrowprops=dict(arrowstyle="->",
                            connectionstyle="arc3"),)

Mathtext 

>>> plt.title(r'$\sigma_i=15$', fontsize=20)

Limits, Legends and Layouts 

Limits & Autoscaling 

>>> ax.margins(x=0.0,y=0.1) #Add padding to a plot
>>> ax.axis('equal')  #Set the aspect ratio of the plot to 1
>>> ax.set(xlim=[0,10.5],ylim=[-1.5,1.5])  #Set limits for x-and y-axis
>>> ax.set_xlim(0,10.5) #Set limits for x-axis

Legends 

>>> ax.set(title='An Example Axes', #Set a title and x-and y-axis labels
            ylabel='Y-Axis',
            xlabel='X-Axis')
>>> ax.legend(loc='best') #No overlapping plot elements

Ticks 

>>> ax.xaxis.set(ticks=range(1,5), #Manually set x-ticks
             ticklabels=[3,100,12,"foo"])
>>> ax.tick_params(axis='y', #Make y-ticks longer and go in and out
             direction='inout',
             length=10)

Subplot Spacing 

>>> fig3.subplots_adjust(wspace=0.5,   #Adjust the spacing between subplots
             hspace=0.3,
             left=0.125,
             right=0.9,
             top=0.9,
             bottom=0.1)
>>> fig.tight_layout() #Fit subplot(s) in to the figure area

Axis Spines 

>>> ax1.spines['top'].set_visible(False) #Make the top axis line for a plot invisible
>>> ax1.spines['bottom'].set_position(('outward',10)) #Move the bottom axis line outward

Have this Cheat Sheet at your fingertips

Original article source at https://www.datacamp.com

#matplotlib #cheatsheet #python


How to Build a Fake News Detector in Python

Fake News Detection in Python

Explore the fake news dataset, perform data analysis such as word clouds and n-grams, and fine-tune the BERT transformer to build a fake news detector in Python using the transformers library.

Fake news is the intentional broadcasting of false or misleading claims as news, where the statements are purposely deceitful.

Newspapers, tabloids, and magazines have been supplanted by digital news platforms, blogs, social media feeds, and a plethora of mobile news applications. News organizations benefited from the increased use of social media and mobile platforms by providing subscribers with up-to-the-minute information.

Consumers now have instant access to the latest news. These digital media platforms have grown in importance due to their easy connectedness to the rest of the world and allow users to discuss and share ideas and debate topics such as democracy, education, health, research, and history. Fake news items on digital platforms are getting more popular and are used for profit, such as political and financial gain.

How Big is this Problem?

Because the Internet, social media, and digital platforms are widely used, anybody can propagate inaccurate and biased information. It is almost impossible to prevent the spread of fake news. There is a tremendous surge in the distribution of fake news, which is not restricted to one sector such as politics but includes sports, health, history, entertainment, and science and research.

The Solution

It is vital to recognize and differentiate between false and accurate news. One method is to have an expert decide and fact-check every piece of information, but this takes time and needs expertise that cannot be shared. Secondly, we can use machine learning and artificial intelligence tools to automate the identification of fake news.

Online news information includes various unstructured-format data (such as documents, videos, and audio), but we will concentrate on text-format news here. With the progress of machine learning and natural language processing, we can now recognize the misleading and false character of an article or statement.

Several studies and experiments are being conducted to detect fake news across all media.

Our main goals for this tutorial are to:

  • Explore and analyze the fake news dataset.
  • Build a classifier that can distinguish fake news with as much accuracy as possible.

Here is the table of contents:

  • Introduction
  • How Big is this Problem?
  • The Solution
  • Data Exploration
    • Distribution of Classes
  • Data Cleaning for Analysis
  • Exploratory Data Analysis
    • Single-word Cloud
    • Most Frequent Bigram (Two-word Combination)
    • Most Frequent Trigram (Three-word Combination)
  • Building a Classifier by Fine-tuning BERT
    • Data Preparation
    • Tokenizing the Dataset
    • Loading and Fine-tuning the Model
    • Model Evaluation
  • Appendix: Creating a Submission File for Kaggle
  • Conclusion

Data Exploration

In this work, we use the fake news dataset from Kaggle to classify untrustworthy news articles as fake news. We have a complete training dataset containing the following characteristics:

  • id: unique id for a news article
  • title: the title of a news article
  • author: the author of the news article
  • text: the text of the article; could be incomplete
  • label: a label that marks the article as potentially unreliable, denoted by 1 (unreliable or fake) or 0 (reliable).

It is a binary classification problem in which we must predict whether a particular news story is reliable or not.

If you have a Kaggle account, you can simply download the dataset from the website and extract the ZIP file.

I also uploaded the dataset to Google Drive, and you can get it here or use the gdown library to download it automatically in Google Colab or Jupyter notebooks:

$ pip install gdown
# download from Google Drive
$ gdown "https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t"
Downloading...
From: https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t
To: /content/fake-news.zip
100% 48.7M/48.7M [00:00<00:00, 74.6MB/s]

Unzipping the files:

$ unzip fake-news.zip

Three files will appear in the current working directory: train.csv, test.csv, and submit.csv; we will be using train.csv for most of the tutorial.

Installing the required dependencies:

$ pip install transformers nltk pandas numpy matplotlib seaborn wordcloud

Note: If you're in a local environment, make sure you install PyTorch for GPU; head to this page for a proper installation.

Let's import the essential libraries for the analysis:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

The NLTK corpora and modules must be installed using the standard NLTK downloader:

import nltk
nltk.download('stopwords')
nltk.download('wordnet')

The fake news dataset comprises the original and fictitious article titles and texts from various authors. Let's import our dataset:

# load the dataset
news_d = pd.read_csv("train.csv")
print("Shape of News data:", news_d.shape)
print("News data columns", news_d.columns)

Output:

 Shape of News data: (20800, 5)
 News data columns Index(['id', 'title', 'author', 'text', 'label'], dtype='object')

Here is how the dataset looks:

# by using df.head(), we can immediately familiarize ourselves with the dataset. 
news_d.head()

Output:

id	title	author	text	label
0	0	House Dem Aide: We Didn’t Even See Comey’s Let...	Darrell Lucus	House Dem Aide: We Didn’t Even See Comey’s Let...	1
1	1	FLYNN: Hillary Clinton, Big Woman on Campus - ...	Daniel J. Flynn	Ever get the feeling your life circles the rou...	0
2	2	Why the Truth Might Get You Fired	Consortiumnews.com	Why the Truth Might Get You Fired October 29, ...	1
3	3	15 Civilians Killed In Single US Airstrike Hav...	Jessica Purkiss	Videos 15 Civilians Killed In Single US Airstr...	1
4	4	Iranian woman jailed for fictional unpublished...	Howard Portnoy	Print \nAn Iranian woman has been sentenced to...	1

We have 20,800 rows with five columns. Let's see some statistics of the text column:

#Text Word startistics: min.mean, max and interquartile range

txt_length = news_d.text.str.split().str.len()
txt_length.describe()

Output:

count    20761.000000
mean       760.308126
std        869.525988
min          0.000000
25%        269.000000
50%        556.000000
75%       1052.000000
max      24234.000000
Name: text, dtype: float64

Statistics for the title column:

#Title statistics 

title_length = news_d.title.str.split().str.len()
title_length.describe()

Output:

count    20242.000000
mean        12.420709
std          4.098735
min          1.000000
25%         10.000000
50%         13.000000
75%         15.000000
max         72.000000
Name: title, dtype: float64

The statistics for the training and testing sets are as follows:

  • The text attribute has a higher word count, with an average of 760 words and a 75th percentile of about 1052 words.
  • The title attribute is a short statement, with an average of 12 words and a 75th percentile of 15 words.

Our experiment will use both the text and the title together.

Distribution of Classes

Count plots for both labels:

sns.countplot(x="label", data=news_d);
print("1: Unreliable")
print("0: Reliable")
print("Distribution of labels:")
print(news_d.label.value_counts());

Output:

1: Unreliable
0: Reliable
Distribution of labels:
1    10413
0    10387
Name: label, dtype: int64

Distribution of labels

print(round(news_d.label.value_counts(normalize=True),2)*100);

Output:

1    50.0
0    50.0
Name: label, dtype: float64

The number of unreliable articles (fake, or 1) is 10413, while the number of trustworthy articles (reliable, or 0) is 10387. Almost 50% of the articles are fake, so the accuracy metric will be a fair measure of how well our model performs when building a classifier.
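
As a quick sanity check, the majority-class baseline (always predicting the most frequent label) would only reach about 50% accuracy on this balanced dataset:

# majority-class baseline: the best accuracy from always predicting
# the most frequent label (~0.50 on this balanced dataset)
print(news_d.label.value_counts(normalize=True).max())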

Data Cleaning for Analysis

In this section, we will clean our dataset to do some analysis:

  • Drop unused rows and columns.
  • Perform null value imputation.
  • Remove special characters.
  • Remove stop words.
# Constants that are used to sanitize the datasets 

column_n = ['id', 'title', 'author', 'text', 'label']
remove_c = ['id','author']
categorical_features = []
target_col = ['label']
text_f = ['title', 'text']
# Clean Datasets
import nltk
from nltk.corpus import stopwords
import re
from nltk.stem.porter import PorterStemmer
from collections import Counter

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

stop_words = stopwords.words('english')
stopwords_dict = Counter(stop_words)

# Remove unused columns
def remove_unused_c(df,column_n=remove_c):
    df = df.drop(column_n,axis=1)
    return df

# Impute null values with None
def null_process(feature_df):
    for col in text_f:
        feature_df.loc[feature_df[col].isnull(), col] = "None"
    return feature_df

def clean_dataset(df):
    # remove unused column
    df = remove_unused_c(df)
    #impute null values
    df = null_process(df)
    return df

# Cleaning text from unused characters
def clean_text(text):
    text = str(text).replace(r'http[\w:/\.]+', ' ')  # removing urls
    text = str(text).replace(r'[^\.\w\s]', ' ')  # remove everything but characters and punctuation
    text = str(text).replace('[^a-zA-Z]', ' ')
    text = str(text).replace(r'\s\s+', ' ')
    text = text.lower().strip()
    #text = ' '.join(text)    
    return text

## Nltk Preprocessing include:
# Stop words, Stemming and Lemmetization
# For our project we use only Stop word removal
def nltk_preprocess(text):
    text = clean_text(text)
    wordlist = re.sub(r'[^\w\s]', '', text).split()
    #text = ' '.join([word for word in wordlist if word not in stopwords_dict])
    #text = [ps.stem(word) for word in wordlist if not word in stopwords_dict]
    text = ' '.join([wnl.lemmatize(word) for word in wordlist if word not in stopwords_dict])
    return  text

In the above code block:

  • We imported NLTK, which is a famous platform for developing Python applications that interact with human language. Next, we import re for regular expressions.
  • We import stopwords from nltk.corpus. When working with words, particularly when considering semantics, we sometimes need to eliminate common words that do not add any significant meaning to a statement, such as "but", "can", "we", etc.
  • PorterStemmer is used to perform stemming with NLTK. Stemmers strip words of their morphological affixes, leaving only the word stem.
  • We import WordNetLemmatizer() from the NLTK library for lemmatization. Lemmatization is much more effective than stemming. It goes beyond word reduction and evaluates a language's entire lexicon to apply morphological analysis to words, aiming to remove inflectional endings and return the base or dictionary form of a word, known as the lemma.
  • stopwords.words('english') lets us see the list of all English stop words supported by NLTK.
  • The remove_unused_c() function is used to remove the unused columns.
  • We impute null values with None using the null_process() function.
  • Inside the clean_dataset() function, we call remove_unused_c() and null_process(). This function is responsible for data cleaning.
  • To clean text of unused characters, we created the clean_text() function.
  • For preprocessing, we will use only stop word removal. We created the nltk_preprocess() function for that purpose.

Preprocessing the text and title:

# Perform data cleaning on train and test dataset by calling clean_dataset function
df = clean_dataset(news_d)
# apply preprocessing on text through apply method by calling the function nltk_preprocess
df["text"] = df.text.apply(nltk_preprocess)
# apply preprocessing on title through apply method by calling the function nltk_preprocess
df["title"] = df.title.apply(nltk_preprocess)
# Dataset after cleaning and preprocessing step
df.head()

Output:

title	text	label
0	house dem aide didnt even see comeys letter ja...	house dem aide didnt even see comeys letter ja...	1
1	flynn hillary clinton big woman campus breitbart	ever get feeling life circle roundabout rather...	0
2	truth might get fired	truth might get fired october 29 2016 tension ...	1
3	15 civilian killed single u airstrike identified	video 15 civilian killed single u airstrike id...	1
4	iranian woman jailed fictional unpublished sto...	print iranian woman sentenced six year prison ...	1

Exploratory Data Analysis

In this section, we will perform:

  • Univariate Analysis: a statistical analysis of the text. We will use a word cloud for that purpose. A word cloud is a text data visualization approach where the most common term is presented in the most considerable font size.
  • Bivariate Analysis: bigrams and trigrams will be used here. According to Wikipedia: "an n-gram is a contiguous sequence of n items from a given sample of text or speech. According to the application, the items can be phonemes, syllables, letters, words, or base pairs. The n-grams are typically collected from a text or speech corpus."

Single-word Cloud

The most frequent words appear in a bold and bigger font in a word cloud. This section will make a word cloud of all the words in the dataset.

The WordCloud library's wordcloud() function will be used, and generate() is utilized for generating the word cloud image:

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# initialize the word cloud
wordcloud = WordCloud( background_color='black', width=800, height=600)
# generate the word cloud by passing the corpus
text_cloud = wordcloud.generate(' '.join(df['text']))
# plotting the word cloud
plt.figure(figsize=(20,30))
plt.imshow(text_cloud)
plt.axis('off')
plt.show()

Output:

Word cloud for all the fake news data

Word cloud for reliable news only:

true_n = ' '.join(df[df['label']==0]['text']) 
wc = wordcloud.generate(true_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for reliable news

Word cloud for fake news only:

fake_n = ' '.join(df[df['label']==1]['text'])
wc= wordcloud.generate(fake_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for fake news

Most Frequent Bigram (Two-word Combination)

An n-gram is a sequence of letters or words. A character unigram is made up of a single character, while a bigram comprises a series of two characters. Similarly, word n-grams are made up of a series of n words. The word "united" is a 1-gram (unigram). The combination of the words "united state" is a 2-gram (bigram), and "new york city" is a 3-gram.

Let's plot the most common bigrams in the reliable news:

def plot_top_ngrams(corpus, title, ylabel, xlabel="Number of Occurences", n=2):
  """Utility function to plot top n-grams"""
  true_b = (pd.Series(nltk.ngrams(corpus.split(), n)).value_counts())[:20]
  true_b.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
  plt.title(title)
  plt.ylabel(ylabel)
  plt.xlabel(xlabel)
  plt.show()
plot_top_ngrams(true_n, 'Top 20 Frequently Occuring True news Bigrams', "Bigram", n=2)

Top bigrams in reliable news

The most common bigrams in the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occuring Fake news Bigrams', "Bigram", n=2)

Top bigrams in fake news

Most Frequent Trigram (Three-word Combination)

The most common trigrams in reliable news:

plot_top_ngrams(true_n, 'Top 20 Frequently Occuring True news Trigrams', "Trigrams", n=3)

Top trigrams in reliable news

Now for fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occuring Fake news Trigrams', "Trigrams", n=3)

Top trigrams in fake news

The above plots give us some idea of what both classes look like. In the next section, we will use the transformers library to build a fake news detector.

Building a Classifier by Fine-tuning BERT

This section draws code extensively from the fine-tuning BERT tutorial to make a fake news classifier using the transformers library. So, for more detailed information, you can head to the original tutorial.

If you didn't install transformers, you have to:

$ pip install transformers

Let's import the necessary libraries:

import torch
from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
from transformers import BertTokenizerFast, BertForSequenceClassification
from transformers import Trainer, TrainingArguments
import numpy as np
from sklearn.model_selection import train_test_split

import random

We want to make our results reproducible even if we restart our environment:

def set_seed(seed: int):
    """
    Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
    installed).

    Args:
        seed (:obj:`int`): The seed to set.
    """
    random.seed(seed)
    np.random.seed(seed)
    if is_torch_available():
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # ^^ safe to call this function even if cuda is not available
    if is_tf_available():
        import tensorflow as tf

        tf.random.set_seed(seed)

set_seed(1)

The model we're going to use is bert-base-uncased:

# the model we gonna train, base uncased BERT
# check text classification models here: https://huggingface.co/models?filter=text-classification
model_name = "bert-base-uncased"
# max sequence length for each document/sentence sample
max_length = 512

Loading the tokenizer:

# load the tokenizer
tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True)

Data Preparation

Let's now clean NaN values from the text, author, and title columns:

news_df = news_d[news_d['text'].notna()]
news_df = news_df[news_df["author"].notna()]
news_df = news_df[news_df["title"].notna()]

Next, let's make a function that takes the dataset as a Pandas dataframe and returns the train/validation splits of the texts and labels as lists:

def prepare_data(df, test_size=0.2, include_title=True, include_author=True):
  texts = []
  labels = []
  for i in range(len(df)):
    text = df["text"].iloc[i]
    label = df["label"].iloc[i]
    if include_title:
      text = df["title"].iloc[i] + " - " + text
    if include_author:
      text = df["author"].iloc[i] + " : " + text
    if text and label in [0, 1]:
      texts.append(text)
      labels.append(label)
  return train_test_split(texts, labels, test_size=test_size)

train_texts, valid_texts, train_labels, valid_labels = prepare_data(news_df)

The above function takes the dataset as a dataframe and returns it as lists split into training and validation sets. Setting include_title to True means we add the title column to the text we're going to use for training, and setting include_author to True means we also add the author to the text.

Let's make sure the labels and texts have the same length:

print(len(train_texts), len(train_labels))
print(len(valid_texts), len(valid_labels))

Output:

14628 14628
3657 3657

Tokenizing the Dataset

Let's use the BERT tokenizer to tokenize our dataset:

# tokenize the dataset, truncate when passed `max_length`, 
# and pad with 0's when less than `max_length`
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=max_length)
valid_encodings = tokenizer(valid_texts, truncation=True, padding=True, max_length=max_length)

Converting the encodings into a PyTorch dataset:

class NewsGroupsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

# convert our tokenized data into a torch Dataset
train_dataset = NewsGroupsDataset(train_encodings, train_labels)
valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)

Loading and Fine-tuning the Model

We will use BertForSequenceClassification to load our BERT transformer model:

# load the model
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

We set num_labels to 2 since it's a binary classification. The function below is a callback to calculate the accuracy on each validation step:

from sklearn.metrics import accuracy_score

def compute_metrics(pred):
  labels = pred.label_ids
  preds = pred.predictions.argmax(-1)
  # calculate accuracy using sklearn's function
  acc = accuracy_score(labels, preds)
  return {
      'accuracy': acc,
  }

Let's initialize the training parameters:

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=10,  # batch size per device during training
    per_device_eval_batch_size=20,   # batch size for evaluation
    warmup_steps=100,                # number of warmup steps for learning rate scheduler
    logging_dir='./logs',            # directory for storing logs
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    # but you can specify `metric_for_best_model` argument to change to accuracy or other metric
    logging_steps=200,               # log & save weights each logging_steps
    save_steps=200,
    evaluation_strategy="steps",     # evaluate each `logging_steps`
)

I've set per_device_train_batch_size to 10, but you should set it as high as your GPU can fit. We set logging_steps and save_steps to 200, meaning we're going to perform an evaluation and save the model weights on every 200 training steps.

You can check this page for more detailed information about the available training parameters.

Let's instantiate the trainer:

trainer = Trainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=valid_dataset,          # evaluation dataset
    compute_metrics=compute_metrics,     # the callback that computes metrics of interest
)

Training the model:

# train the model
trainer.train()

The training takes a few hours to finish, depending on your GPU. If you're on the free version of Colab, it should take about an hour with an NVIDIA Tesla K80. Here is the output:

***** Running training *****
  Num examples = 14628
  Num Epochs = 1
  Instantaneous batch size per device = 10
  Total train batch size (w. parallel, distributed & accumulation) = 10
  Gradient Accumulation steps = 1
  Total optimization steps = 1463
 [1463/1463 41:07, Epoch 1/1]
Step	Training Loss	Validation Loss	Accuracy
200		0.250800		0.100533		0.983867
400		0.027600		0.043009		0.993437
600		0.023400		0.017812		0.997539
800		0.014900		0.030269		0.994258
1000	0.022400		0.012961		0.998086
1200	0.009800		0.010561		0.998633
1400	0.007700		0.010300		0.998633
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-200
Configuration saved in ./results/checkpoint-200/config.json
Model weights saved in ./results/checkpoint-200/pytorch_model.bin
<SNIPPED>
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-1400
Configuration saved in ./results/checkpoint-1400/config.json
Model weights saved in ./results/checkpoint-1400/pytorch_model.bin

Training completed. Do not forget to share your model on huggingface.co/models =)

Loading best model from ./results/checkpoint-1400 (score: 0.010299865156412125).
TrainOutput(global_step=1463, training_loss=0.04888018785440506, metrics={'train_runtime': 2469.1722, 'train_samples_per_second': 5.924, 'train_steps_per_second': 0.593, 'total_flos': 3848788517806080.0, 'train_loss': 0.04888018785440506, 'epoch': 1.0})

Model Evaluation

Since load_best_model_at_end is set to True, the best weights will be loaded when the training is completed. Let's evaluate it with our validation set:

# evaluate the current model after training
trainer.evaluate()

Output:

***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
 [183/183 02:11]
{'epoch': 1.0,
 'eval_accuracy': 0.998632759092152,
 'eval_loss': 0.010299865156412125,
 'eval_runtime': 132.0374,
 'eval_samples_per_second': 27.697,
 'eval_steps_per_second': 1.386}

Saving the model and the tokenizer:

# saving the fine tuned model & tokenizer
model_path = "fake-news-bert-base-uncased"
model.save_pretrained(model_path)
tokenizer.save_pretrained(model_path)

A new folder containing the model configuration and weights will appear after running the above cell. If you want to perform a prediction, you simply use the from_pretrained() method we used when we loaded the model, and you're good to go.
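
For instance, a later session could reload the fine-tuned model and tokenizer like this (a minimal sketch reusing the model_path defined below):

from transformers import BertTokenizerFast, BertForSequenceClassification

model_path = "fake-news-bert-base-uncased"
# reload the fine-tuned weights and the tokenizer saved above
model = BertForSequenceClassification.from_pretrained(model_path).to("cuda")
tokenizer = BertTokenizerFast.from_pretrained(model_path)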

Next, let's make a function that accepts the article text as an argument and returns whether it's fake or not:

def get_prediction(text, convert_to_label=False):
    # prepare our text into tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt").to("cuda")
    # perform inference to our model
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # executing argmax function to get the candidate label
    d = {
        0: "reliable",
        1: "fake"
    }
    if convert_to_label:
      return d[int(probs.argmax())]
    else:
      return int(probs.argmax())

I've taken an example from test.csv, which the model has never seen, to perform inference. I checked it, and it's an actual article from The New York Times:

real_news = """
Tim Tebow Will Attempt Another Comeback, This Time in Baseball - The New York Times",Daniel Victor,"If at first you don’t succeed, try a different sport. Tim Tebow, who was a Heisman   quarterback at the University of Florida but was unable to hold an N. F. L. job, is pursuing a career in Major League Baseball. <SNIPPED>
"""

The original text is in the Colab environment if you want to copy it, as it is a complete article. Let's pass it to the model and see the results:

get_prediction(real_news, convert_to_label=True)

Output:

reliable

Appendix: Creating a Submission File for Kaggle

In this section, we will predict all the articles in test.csv to create a submission file to see our accuracy on the test set in the Kaggle competition:

# read the test set
test_df = pd.read_csv("test.csv")
# make a copy of the testing set
new_df = test_df.copy()
# add a new column that contains the author, title and article content
new_df["new_text"] = new_df["author"].astype(str) + " : " + new_df["title"].astype(str) + " - " + new_df["text"].astype(str)
# get the prediction of all the test set
new_df["label"] = new_df["new_text"].apply(get_prediction)
# make the submission file
final_df = new_df[["id", "label"]]
final_df.to_csv("submit_final.csv", index=False)

After concatenating the author, title, and article text together, we pass the get_prediction() function to the new column to fill the label column, and then we use the to_csv() method to create the submission file for Kaggle. Here is my submission score:

Submission score

We got 99.78% accuracy on the private leaderboard and 100% on the public leaderboard. That's awesome!

Conclusion

Alright, we're done with this tutorial. You can check this page to see various training parameters you can tweak.

If you have a custom fake news dataset for fine-tuning, you simply have to pass a list of samples to the tokenizer as we did; you won't change any other code after that.
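
For example, here is a short sketch with hypothetical custom samples (the names custom_texts and custom_labels are illustrative; everything downstream stays the same):

# hypothetical custom samples; the rest of the pipeline is unchanged
custom_texts = ["Some article body...", "Another article body..."]
custom_labels = [0, 1]
custom_encodings = tokenizer(custom_texts, truncation=True, padding=True, max_length=max_length)
custom_dataset = NewsGroupsDataset(custom_encodings, custom_labels)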

Check out the complete code here, or the Colab environment here.


Bokeh Plotting Backend for Pandas and GeoPandas

Pandas-Bokeh provides a Bokeh plotting backend for Pandas, GeoPandas and Pyspark DataFrames, similar to the already existing Visualization feature of Pandas. Importing the library adds a complementary plotting method plot_bokeh() on DataFrames and Series.

With Pandas-Bokeh, creating stunning, interactive, HTML-based visualization is as easy as calling:

df.plot_bokeh()

Pandas-Bokeh also provides native support as a Pandas Plotting backend for Pandas >= 0.25. When Pandas-Bokeh is installed, switching the default Pandas plotting backend to Bokeh can be done via:

pd.set_option('plotting.backend', 'pandas_bokeh')

More details about the new Pandas backend can be found below.


Interactive Documentation

Please visit:

https://patrikhlobil.github.io/Pandas-Bokeh/

for an interactive version of the documentation below, where you can play with the dynamic Bokeh plots.


For more information have a look at the Examples below or at notebooks on the Github Repository of this project.


Installation

You can install Pandas-Bokeh from PyPI via pip

pip install pandas-bokeh

or conda:

conda install -c patrikhlobil pandas-bokeh

With the current release 0.5.5, Pandas-Bokeh officially supports Python 3.6 and newer. For more details, see Release Notes.

How To Use

Classical Use

The Pandas-Bokeh library should be imported after Pandas, GeoPandas and/or Pyspark. After the import, one should define the plotting output, which can be:

pandas_bokeh.output_notebook(): Embeds the Plots in the cell outputs of the notebook. Ideal when working in Jupyter Notebooks.

pandas_bokeh.output_file(filename): Exports the plot to the provided filename as an HTML.

For more details about the plotting outputs, see the reference here or the Bokeh documentation.

Notebook output (see also bokeh.io.output_notebook)

import pandas as pd
import pandas_bokeh
pandas_bokeh.output_notebook()

File output to "Interactive Plot.html" (see also bokeh.io.output_file)

import pandas as pd
import pandas_bokeh
pandas_bokeh.output_file("Interactive Plot.html")

Pandas-Bokeh as native Pandas plotting backend

For pandas >= 0.25, a plotting backend switch is natively supported. It can be achieved by calling:

import pandas as pd
pd.set_option('plotting.backend', 'pandas_bokeh')

Now, the plotting API is accessible for a Pandas DataFrame via:

df.plot(...)

All additional functionalities of Pandas-Bokeh are then accessible at pd.plotting. So, setting the output to notebook is:

pd.plotting.output_notebook()

or calling the grid layout functionality:

pd.plotting.plot_grid(...)

Note: Backwards compatibility is kept since there will still be the df.plot_bokeh(...) methods for a DataFrame.
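
For example, after switching the backend, both call styles below should produce the same Bokeh figure (a small sketch, assuming a simple numeric DataFrame):

import pandas as pd
import pandas_bokeh

pd.set_option('plotting.backend', 'pandas_bokeh')
df = pd.DataFrame({"a": [1, 2, 3], "b": [3, 2, 1]})

df.plot(kind="line")        # routed through the Pandas-Bokeh backend
df.plot_bokeh(kind="line")  # the original accessor still works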


Plot types

The supported plot types at the moment are shown in the sections below.

Also, check out the complementary chapter Outputs, Formatting & Layouts.


Lineplot

Basic Lineplot

This simple lineplot in Pandas-Bokeh already contains various interactive elements:

  • a pannable and zoomable (zoom in plotarea and zoom on axis) plot
  • by clicking on the legend elements, one can hide and show the individual lines
  • a Hovertool for the plotted lines

Consider the following simple example:

import numpy as np

np.random.seed(42)
df = pd.DataFrame({"Google": np.random.randn(1000)+0.2, 
                   "Apple": np.random.randn(1000)+0.17}, 
                   index=pd.date_range('1/1/2000', periods=1000))
df = df.cumsum()
df = df + 50
df.plot_bokeh(kind="line")       #equivalent to df.plot_bokeh.line()

ApplevsGoogle_1

Note that, similar to the regular pandas.DataFrame.plot method, there are also additional accessors to directly access the different plotting types:

  • df.plot_bokeh(kind="line", ...) → df.plot_bokeh.line(...)
  • df.plot_bokeh(kind="bar", ...) → df.plot_bokeh.bar(...)
  • df.plot_bokeh(kind="hist", ...) → df.plot_bokeh.hist(...)
  • ...

Advanced Lineplot

There are various optional parameters to tune the plots, for example:

kind: Which kind of plot should be produced. Currently supported are: "line", "point", "scatter", "bar" and "histogram". In the near future, many more will be implemented, such as horizontal barplots, boxplots, and pie charts.

x: Name of the column to use for the horizontal x-axis. If the x parameter is not specified, the index is used for the x-values of the plot. Alternatively, an array of values with the same number of elements as the DataFrame can be passed.

y: Name of column or list of names of columns to use for the vertical y-axis.

figsize: Choose width & height of the plot

title: Sets title of the plot

xlim/ylim: Set the visible range of the plot for the x- and y-axis (also works for a datetime x-axis)

xlabel/ylabel: Set x- and y-labels

logx/logy: Set log-scale on x-/y-axis

xticks/yticks: Explicitly set the ticks on the axes

color: Defines a single color for a plot.

colormap: Can be used to specify multiple colors to plot. Can be either a list of colors or the name of a Bokeh color palette

hovertool: If True a Hovertool is active, else if False no Hovertool is drawn.

hovertool_string: If specified, this string will be used for the hovertool (@{column} will be replaced by the value of the column for the element the mouse hovers over, see also Bokeh documentation and here)

toolbar_location: Specify the position of the toolbar location (None, "above", "below", "left" or "right"). Default: "right"

zooming: Enables/Disables zooming. Default: True

panning: Enables/Disables panning. Default: True

fontsize_label/fontsize_ticks/fontsize_title/fontsize_legend: Set fontsize of labels, ticks, title or legend (int or string of form "15pt")

rangetool: Enables a range tool scroller. Default: False

**kwargs: Optional keyword arguments of bokeh.plotting.figure.line

Try them out to get a feeling for the effects. Let us consider now:

df.plot_bokeh.line(
    figsize=(800, 450),
    y="Apple",
    title="Apple vs Google",
    xlabel="Date",
    ylabel="Stock price [$]",
    yticks=[0, 100, 200, 300, 400],
    ylim=(0, 400),
    toolbar_location=None,
    colormap=["red", "blue"],
    hovertool_string=r"""<img
                        src='https://upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Apple_logo_black.svg/170px-Apple_logo_black.svg.png' 
                        height="42" alt="@imgs" width="42"
                        style="float: left; margin: 0px 15px 15px 0px;"
                        border="2"></img> Apple 
                        
                        <h4> Stock Price: </h4> @{Apple}""",
    panning=False,
    zooming=False)

ApplevsGoogle_2

Lineplot with data points

For lineplots, as for many other plot kinds, there are some special keyword arguments that only work for this plot type. For lineplots, these are:

  • plot_data_points: Plot also the data points on the lines.
  • plot_data_points_size: Determines the size of the data points.
  • marker: Defines the point type (Default: "circle"). Possible values are: 'circle', 'square', 'triangle', 'asterisk', 'circle_x', 'square_x', 'inverted_triangle', 'x', 'circle_cross', 'square_cross', 'diamond', 'cross'
  • **kwargs: Optional keyword arguments of bokeh.plotting.figure.line

Let us use this information to create another version of the same plot:

df.plot_bokeh.line(
    figsize=(800, 450),
    title="Apple vs Google",
    xlabel="Date",
    ylabel="Stock price [$]",
    yticks=[0, 100, 200, 300, 400],
    ylim=(100, 200),
    xlim=("2001-01-01", "2001-02-01"),
    colormap=["red", "blue"],
    plot_data_points=True,
    plot_data_points_size=10,
    marker="asterisk")

ApplevsGoogle_3

Lineplot with rangetool

import numpy as np

ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))
df = df.cumsum()

df.plot_bokeh(rangetool=True)

rangetool

Pointplot

If you just wish to draw the data points for curves, the pointplot option is the right choice. It also accepts the kwargs of bokeh.plotting.figure.scatter like marker or size:

import numpy as np

x = np.arange(-3, 3, 0.1)
y2 = x**2
y3 = x**3
df = pd.DataFrame({"x": x, "Parabula": y2, "Cube": y3})
df.plot_bokeh.point(
    x="x",
    xticks=range(-3, 4),
    size=5,
    colormap=["#009933", "#ff3399"],
    title="Pointplot (Parabula vs. Cube)",
    marker="x")

Pointplot

Stepplot

With a similar API as for the line- & pointplots, one can generate a stepplot. Additional keyword arguments for this plot type are passed to bokeh.plotting.figure.step, e.g. mode ("before", "after", "center"); see the following example:

import numpy as np

x = np.arange(-3, 3, 1)
y2 = x**2
y3 = x**3
df = pd.DataFrame({"x": x, "Parabula": y2, "Cube": y3})
df.plot_bokeh.step(
    x="x",
    xticks=range(-1, 1),
    colormap=["#009933", "#ff3399"],
    title="Pointplot (Parabula vs. Cube)",
    figsize=(800,300),
    fontsize_title=30,
    fontsize_label=25,
    fontsize_ticks=15,
    fontsize_legend=5,
    )

df.plot_bokeh.step(
    x="x",
    xticks=range(-1, 1),
    colormap=["#009933", "#ff3399"],
    title="Pointplot (Parabula vs. Cube)",
    mode="after",
    figsize=(800,300)
    )

Stepplot

Note that Bokeh's step-plot API does not yet support hovertool functionality.

Scatterplot

A basic scatterplot can be created using the kind="scatter" option. For scatterplots, the x and y parameters have to be specified, and the following optional keyword arguments are allowed:

  • category: Determines the category column to use for coloring the scatter points.
  • **kwargs: Optional keyword arguments of bokeh.plotting.figure.scatter

Note that the pandas.DataFrame.plot_bokeh() method returns a Bokeh figure by default, which can be embedded in dashboard layouts with other figures and Bokeh objects (for more details about (sub)plot layouts and embedding the resulting Bokeh plots as HTML, click here).

In the example below, we use the built-in grid layout support of Pandas-Bokeh to display both the DataFrame (using a Bokeh DataTable) and the resulting scatterplot:

# Load Iris Dataset:
df = pd.read_csv(
    r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/iris/iris.csv"
)
df = df.sample(frac=1)

# Create Bokeh-Table with DataFrame:
from bokeh.models.widgets import DataTable, TableColumn
from bokeh.models import ColumnDataSource

data_table = DataTable(
    columns=[TableColumn(field=Ci, title=Ci) for Ci in df.columns],
    source=ColumnDataSource(df),
    height=300,
)

# Create Scatterplot:
p_scatter = df.plot_bokeh.scatter(
    x="petal length (cm)",
    y="sepal width (cm)",
    category="species",
    title="Iris DataSet Visualization",
    show_figure=False,
)

# Combine Table and Scatterplot via grid layout:
pandas_bokeh.plot_grid([[data_table, p_scatter]], plot_width=400, plot_height=350)


Scatterplot

A possible optional keyword parameter that can be passed to bokeh.plotting.figure.scatter is size. Below, we use the sepal length of the Iris data as a reference for the size:

#Change one value to clearly see the effect of the size keyword
df.loc[13, "sepal length (cm)"] = 15

#Make scatterplot:
p_scatter = df.plot_bokeh.scatter(
    x="petal length (cm)",
    y="sepal width (cm)",
    category="species",
    title="Iris DataSet Visualization with Size Keyword",
    size="sepal length (cm)")

Scatterplot2

In this example you can see that the additional dimension sepal length cannot be used to clearly differentiate between the virginica and versicolor species.

Barplot

The barplot API has no special keyword arguments, but accepts optional kwargs of bokeh.plotting.figure.vbar like alpha. By default, it uses the index for the bar categories (however, columns can also be used as the x-axis category via the x argument).

data = {
    'fruits':
    ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries'],
    '2015': [2, 1, 4, 3, 2, 4],
    '2016': [5, 3, 3, 2, 4, 6],
    '2017': [3, 2, 4, 4, 5, 3]
}
df = pd.DataFrame(data).set_index("fruits")

p_bar = df.plot_bokeh.bar(
    ylabel="Price per Unit [€]", 
    title="Fruit prices per Year", 
    alpha=0.6)

Barplot

Using the stacked keyword argument, you can also make stacked barplots:

p_stacked_bar = df.plot_bokeh.bar(
    ylabel="Price per Unit [€]",
    title="Fruit prices per Year",
    stacked=True,
    alpha=0.6)

Barplot2

Horizontal versions of the above barplots are also supported via the keyword kind="barh" or the accessor plot_bokeh.barh. You can still specify a column of the DataFrame as the bar category via the x argument if you do not wish to use the index.

#Reset index, such that "fruits" is now a column of the DataFrame:
df.reset_index(inplace=True)

#Create horizontal bar (via kind keyword):
p_hbar = df.plot_bokeh(
    kind="barh",
    x="fruits",
    xlabel="Price per Unit [€]",
    title="Fruit prices per Year",
    alpha=0.6,
    legend = "bottom_right",
    show_figure=False)

#Create stacked horizontal bar (via barh accessor):
p_stacked_hbar = df.plot_bokeh.barh(
    x="fruits",
    stacked=True,
    xlabel="Price per Unit [€]",
    title="Fruit prices per Year",
    alpha=0.6,
    legend = "bottom_right",
    show_figure=False)

#Plot all barplot examples in a grid:
pandas_bokeh.plot_grid([[p_bar, p_stacked_bar],
                        [p_hbar, p_stacked_hbar]], 
                       plot_width=450)

Barplot3

Histogram

For drawing histograms (kind="hist"), Pandas-Bokeh has a lot of customization features. Optional keyword arguments for histogram plots are:

  • bins: Determines the bins to use for the histogram. If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths. If bins is a string, it defines the method used to calculate the optimal bin width, as defined by numpy.histogram_bin_edges.
  • histogram_type: Either "sidebyside", "topontop" or "stacked". Default: "topontop"
  • stacked: Boolean that overrides histogram_type to "stacked" if given. Default: False
  • **kwargs: Optional keyword arguments of bokeh.plotting.figure.quad

Below are examples of the different histogram types:

import numpy as np

df_hist = pd.DataFrame({
    'a': np.random.randn(1000) + 1,
    'b': np.random.randn(1000),
    'c': np.random.randn(1000) - 1
    },
    columns=['a', 'b', 'c'])

#Top-on-Top Histogram (Default):
df_hist.plot_bokeh.hist(
    bins=np.linspace(-5, 5, 41),
    vertical_xlabel=True,
    hovertool=False,
    title="Normal distributions (Top-on-Top)",
    line_color="black")

#Side-by-Side Histogram (multiple bars share bin side-by-side) also accessible via
#kind="hist":
df_hist.plot_bokeh(
    kind="hist",
    bins=np.linspace(-5, 5, 41),
    histogram_type="sidebyside",
    vertical_xlabel=True,
    hovertool=False,
    title="Normal distributions (Side-by-Side)",
    line_color="black")

#Stacked histogram:
df_hist.plot_bokeh.hist(
    bins=np.linspace(-5, 5, 41),
    histogram_type="stacked",
    vertical_xlabel=True,
    hovertool=False,
    title="Normal distributions (Stacked)",
    line_color="black")

Histogram
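
The examples above pass bins as a sequence of edges; per the description above, an int or a string method name should also work. A minimal sketch:

#bins as int, sequence and string (cf. numpy.histogram_bin_edges):
df_hist.plot_bokeh.hist(bins=15)                        #15 equal-width bins
df_hist.plot_bokeh.hist(bins=[-5, -2, -1, 0, 1, 2, 5])  #explicit, non-uniform bin edges
df_hist.plot_bokeh.hist(bins="auto")                    #automatic bin-width estimation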

Further, advanced keyword arguments for histograms are:

  • weights: A column of the DataFrame that is used as weight for the histogram aggregation (see also numpy.histogram)
  • normed: If True, histogram values are normed to 1 (sum of histogram values=1). It is also possible to pass an integer, e.g. normed=100 would result in a histogram with percentage y-axis (sum of histogram values=100). Default: False
  • cumulative: If True, a cumulative histogram is shown. Default: False
  • show_average: If True, the average of the histogram is also shown. Default: False

Their usage is shown in these examples:

p_hist = df_hist.plot_bokeh.hist(
    y=["a", "b"],
    bins=np.arange(-4, 6.5, 0.5),
    normed=100,
    vertical_xlabel=True,
    ylabel="Share[%]",
    title="Normal distributions (normed)",
    show_average=True,
    xlim=(-4, 6),
    ylim=(0, 30),
    show_figure=False)

p_hist_cum = df_hist.plot_bokeh.hist(
    y=["a", "b"],
    bins=np.arange(-4, 6.5, 0.5),
    normed=100,
    cumulative=True,
    vertical_xlabel=True,
    ylabel="Share[%]",
    title="Normal distributions (normed & cumulative)",
    show_figure=False)

pandas_bokeh.plot_grid([[p_hist, p_hist_cum]], plot_width=450, plot_height=300)

Histogram2
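
The weights argument is not used above; here is a minimal sketch, assuming it takes a column name as stated in the list (the weight column is made up purely for illustration):

#Add a made-up, non-negative weight column for illustration:
df_hist["weight"] = np.abs(df_hist["c"])

df_hist.plot_bokeh.hist(
    y="a",
    bins=np.arange(-4, 6.5, 0.5),
    weights="weight",
    title="Weighted histogram (illustrative weights)")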

Areaplot

Areaplot (kind="area") can be either drawn on top of each other or stacked. The important parameters are:

stacked: If True, the areaplots are stacked. If False, plots are drawn on top of each other. Default: False

kwargs**: Optional keyword arguments of bokeh.plotting.figure.patch


Let us consider the energy consumption split by source, which can be downloaded as a DataFrame via:

df_energy = pd.read_csv(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/energy/energy.csv", 
parse_dates=["Year"])
df_energy.head()

| Year       | Oil    | Gas    | Coal   | Nuclear Energy | Hydroelectricity | Other Renewable |
|------------|--------|--------|--------|----------------|------------------|-----------------|
| 1970-01-01 | 2291.5 | 826.7  | 1467.3 | 17.7           | 265.8            | 5.8             |
| 1971-01-01 | 2427.7 | 884.8  | 1459.2 | 24.9           | 276.4            | 6.3             |
| 1972-01-01 | 2613.9 | 933.7  | 1475.7 | 34.1           | 288.9            | 6.8             |
| 1973-01-01 | 2818.1 | 978.0  | 1519.6 | 45.9           | 292.5            | 7.3             |
| 1974-01-01 | 2777.3 | 1001.9 | 1520.9 | 59.6           | 321.1            | 7.7             |


Creating the Areaplot can be achieved via:

df_energy.plot_bokeh.area(
    x="Year",
    stacked=True,
    legend="top_left",
    colormap=["brown", "orange", "black", "grey", "blue", "green"],
    title="Worldwide energy consumption split by energy source",
    ylabel="Million tonnes oil equivalent",
    ylim=(0, 16000))

areaplot

Note that the consumption of fossil energy sources is still increasing and renewable energy sources are still small in comparison 😢!!! However, when we normalize the plot using the normed keyword, there is a clear trend towards renewable energies in the last decade:

df_energy.plot_bokeh.area(
    x="Year",
    stacked=True,
    normed=100,
    legend="bottom_left",
    colormap=["brown", "orange", "black", "grey", "blue", "green"],
    title="Worldwide energy consumption split by energy source",
    ylabel="Million tonnes oil equivalent")

areaplot2

Pieplot

For Pieplots, let us consider a dataset showing the results of all Bundestags elections in Germany since 2002:

df_pie = pd.read_csv(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/Bundestagswahl/Bundestagswahl.csv")
df_pie

| Partei    | 2002 | 2005 | 2009 | 2013 | 2017 |
|-----------|------|------|------|------|------|
| CDU/CSU   | 38.5 | 35.2 | 33.8 | 41.5 | 32.9 |
| SPD       | 38.5 | 34.2 | 23.0 | 25.7 | 20.5 |
| FDP       | 7.4  | 9.8  | 14.6 | 4.8  | 10.7 |
| Grünen    | 8.6  | 8.1  | 10.7 | 8.4  | 8.9  |
| Linke/PDS | 4.0  | 8.7  | 11.9 | 8.6  | 9.2  |
| AfD       | 0.0  | 0.0  | 0.0  | 0.0  | 12.6 |
| Sonstige  | 3.0  | 4.0  | 6.0  | 11.0 | 5.0  |

We can create a Pieplot of the last election in 2017 by specifying the "Partei" (German for party) column as the x column and the "2017" column as the y column for the values:

df_pie.plot_bokeh.pie(
    x="Partei",
    y="2017",
    colormap=["blue", "red", "yellow", "green", "purple", "orange", "grey"],
    title="Results of German Bundestag Election 2017",
    )

pieplot

When you pass several columns to the y parameter (if the y parameter is not provided, all columns are plotted), multiple nested pieplots will be shown in one plot:

df_pie.plot_bokeh.pie(
    x="Partei",
    colormap=["blue", "red", "yellow", "green", "purple", "orange", "grey"],
    title="Results of German Bundestag Elections [2002-2017]",
    line_color="grey")

pieplot2

Mapplot

The mapplot method of Pandas-Bokeh allows for plotting geographic points stored in a Pandas DataFrame on an interactive map. For more advanced geoplots of line and polygon shapes, have a look at the Geoplots examples for the GeoPandas API of Pandas-Bokeh.

For mapplots, only (latitude, longitude) pairs in geographic projection (WGS84) can be plotted on a map. The basic API has the following two base parameters:

  • x: name of the longitude column of the DataFrame
  • y: name of the latitude column of the DataFrame

The other optional keyword arguments are discussed in the section about the GeoPandas API, e.g. category for coloring the points.

Below is an example of plotting all cities with more than 1,000,000 inhabitants:

df_mapplot = pd.read_csv(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/populated%20places/populated_places.csv")
df_mapplot.head()

| name       | pop_max | latitude  | longitude   | size     |
|------------|---------|-----------|-------------|----------|
| Mesa       | 1085394 | 33.423915 | -111.736084 | 1.085394 |
| Sharjah    | 1103027 | 25.371383 | 55.406478   | 1.103027 |
| Changwon   | 1081499 | 35.219102 | 128.583562  | 1.081499 |
| Sheffield  | 1292900 | 53.366677 | -1.499997   | 1.292900 |
| Abbottabad | 1183647 | 34.149503 | 73.199501   | 1.183647 |

df_mapplot["size"] = df_mapplot["pop_max"] / 1000000
df_mapplot.plot_bokeh.map(
    x="longitude",
    y="latitude",
    hovertool_string="""<h2> @{name} </h2> 
    
                        <h3> Population: @{pop_max} </h3>""",
    tile_provider="STAMEN_TERRAIN_RETINA",
    size="size", 
    figsize=(900, 600),
    title="World cities with more than 1.000.000 inhabitants")


Mapplot

Geoplots

Pandas-Bokeh also allows for interactive plotting of maps using GeoPandas by providing a geopandas.GeoDataFrame.plot_bokeh() method. It allows plotting the following geodata types on a map:

  • Points/MultiPoints
  • Lines/MultiLines
  • Polygons/MultiPolygons

Note: It is not possible to mix object types, i.e. a GeoDataFrame containing both Points and Lines is not allowed. A minimal workaround sketch is shown below.
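
Assuming gdf is an existing GeoDataFrame with mixed geometry types, one can split it by geometry type and plot each part separately:

#Split a mixed GeoDataFrame by geometry type:
points = gdf[gdf.geometry.geom_type.isin(["Point", "MultiPoint"])]
lines = gdf[gdf.geometry.geom_type.isin(["LineString", "MultiLineString"])]

points.plot_bokeh()
lines.plot_bokeh()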

Let us start with a simple example using the "World Borders Dataset". Let us first import all necessary libraries and read the shapefile:

import geopandas as gpd
import pandas as pd
import pandas_bokeh
pandas_bokeh.output_notebook()

#Read in GeoJSON from URL:
df_states = gpd.read_file(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/states/states.geojson")
df_states.head()

| STATE_NAME   | REGION | POPESTIMATE2010 | POPESTIMATE2011 | POPESTIMATE2012 | POPESTIMATE2013 | POPESTIMATE2014 | POPESTIMATE2015 | POPESTIMATE2016 | POPESTIMATE2017 | geometry |
|--------------|--------|---------|---------|---------|---------|---------|---------|---------|---------|----------|
| Hawaii       | 4      | 1363817 | 1378323 | 1392772 | 1408038 | 1417710 | 1426320 | 1428683 | 1427538 | (POLYGON ((-160.0738033454681 22.0041773479577... |
| Washington   | 4      | 6741386 | 6819155 | 6890899 | 6963410 | 7046931 | 7152818 | 7280934 | 7405743 | (POLYGON ((-122.4020153103835 48.2252163723779... |
| Montana      | 4      | 990507  | 996866  | 1003522 | 1011921 | 1019931 | 1028317 | 1038656 | 1050493 | POLYGON ((-111.4754253002074 44.70216236909688... |
| Maine        | 1      | 1327568 | 1327968 | 1328101 | 1327975 | 1328903 | 1327787 | 1330232 | 1335907 | (POLYGON ((-69.77727626137293 44.0741483685119... |
| North Dakota | 2      | 674518  | 684830  | 701380  | 722908  | 738658  | 754859  | 755548  | 755393  | POLYGON ((-98.73043728833767 45.93827137024809... |

Plotting the data on a map is as simple as calling:

df_states.plot_bokeh(simplify_shapes=10000)

US_States_1

We also passed the optional parameter simplify_shapes (in meters) to improve plotting performance (for a reference, see shapely.object.simplify). The above geolayer thus has an accuracy of about 10 km.

Many keyword arguments like xlabel, ylabel, xlim, ylim, title, colormap, hovertool, zooming, panning, ... for customizing the plot are also available for the geoplotting API and can be used as in the examples shown above. There are, however, also many other options, especially for plotting geodata (a small sketch combining a few of them follows the list):

  • geometry_column: Specify the column that stores the geometry information (default: "geometry")
  • hovertool_columns: Specify column names, for which values should be shown in the hovertool
  • hovertool_string: If specified, this string will be used for the hovertool (@{column} will be replaced by the value of the column for the element the mouse hovers over, see also the Bokeh documentation)
  • colormap_uselog: If set to True, the colormapper uses a logscale. Default: False
  • colormap_range: Specify the value range of the colormapper via a (min, max) tuple
  • tile_provider: Define a built-in tile provider for background maps. Possible values: None, 'CARTODBPOSITRON', 'CARTODBPOSITRON_RETINA', 'STAMEN_TERRAIN', 'STAMEN_TERRAIN_RETINA', 'STAMEN_TONER', 'STAMEN_TONER_BACKGROUND', 'STAMEN_TONER_LABELS'. Default: CARTODBPOSITRON_RETINA
  • tile_provider_url: An arbitrary tile_provider_url of the form '/{Z}/{X}/{Y}*.png' can be passed to be used as background map.
  • tile_attribution: String (HTML also accepted) for showing attribution for the tile source in the lower right corner
  • tile_alpha: Sets the alpha value of the background tile between [0, 1]. Default: 1
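
A minimal sketch using the df_states GeoDataFrame from above (the parameter values are illustrative):

df_states.plot_bokeh(
    simplify_shapes=5000,
    category="POPESTIMATE2017",
    colormap="Viridis",
    colormap_range=(500000, 10000000),  #clip the colormapper to this value range
    tile_provider="CARTODBPOSITRON",
    tile_alpha=0.5)                     #semi-transparent background tiles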

One of the most common uses of map plots is the choropleth map, where the color of each object is determined by a property of the object itself. There are three ways of drawing choropleth maps using Pandas-Bokeh, which are described below.

Categories

This is the simplest way. Just provide the category keyword for the selection of the property column:

  • category: Specifies the column of the GeoDataFrame that should be used to draw a choropleth map
  • show_colorbar: Whether or not to show a colorbar for categorical plots. Default: True

Let us now draw the regions as a choropleth plot using the category keyword (at the moment, only numerical columns are supported for choropleth plots):

df_states.plot_bokeh(
    figsize=(900, 600),
    simplify_shapes=5000,
    category="REGION",
    show_colorbar=False,
    colormap=["blue", "yellow", "green", "red"],
    hovertool_columns=["STATE_NAME", "REGION"],
    tile_provider="STAMEN_TERRAIN_RETINA")

When hovering over the states, the state-name and the region are shown as specified in the hovertool_columns argument.

US_States_2


Dropdown

By passing a list of column names of the GeoDataFrame as the dropdown keyword argument, a dropdown menu is shown above the map. The user can use this dropdown menu to select the choropleth layer:

df_states["STATE_NAME_SMALL"] = df_states["STATE_NAME"].str.lower()

df_states.plot_bokeh(
    figsize=(900, 600),
    simplify_shapes=5000,
    dropdown=["POPESTIMATE2010", "POPESTIMATE2017"],
    colormap="Viridis",
    hovertool_string="""
                        <img
                        src="https://www.states101.com/img/flags/gif/small/@STATE_NAME_SMALL.gif" 
                        height="42" alt="@imgs" width="42"
                        style="float: left; margin: 0px 15px 15px 0px;"
                        border="2"></img>
                
                        <h2>  @STATE_NAME </h2>
                        <h3> 2010: @POPESTIMATE2010 </h3>
                        <h3> 2017: @POPESTIMATE2017 </h3>""",
    tile_provider_url=r"http://c.tile.stamen.com/watercolor/{Z}/{X}/{Y}.jpg",
    tile_attribution='Map tiles by <a href="http://stamen.com">Stamen Design</a>, under <a href="http://creativecommons.org/licenses/by/3.0">CC BY 3.0</a>. Data by <a href="http://openstreetmap.org">OpenStreetMap</a>, under <a href="http://www.openstreetmap.org/copyright">ODbL</a>.'
    )

US_States_3

Using hovertool_string, one can pass a string that can contain arbitrary HTML elements (including divs, images, ...) that is shown when hovering over the geographies (@{column} will be replaced by the value of the column for the element the mouse hovers over, see also Bokeh documentation).

Here, we also used an OSM tile server with watercolor style via tile_provider_url and added the attribution via tile_attribution.

Sliders

Another option for interactive choropleth maps is the slider implementation of Pandas-Bokeh. The possible keyword arguments are here:

  • slider: By passing a list of column names of the GeoDataFrame, a slider is shown above the map. The user can use this slider to select the choropleth layer.
  • slider_range: Pass a range (or numpy.arange) of numbers to relate the slider values to the slider columns. By passing range(0, 10), the slider will have the values [0, 1, 2, ..., 9]; when passing numpy.arange(3, 5, 0.5), the slider will have the values [3, 3.5, 4, 4.5]. Default: range(0, len(slider))
  • slider_name: Specifies the title of the slider. Default is an empty string.

This can be used to display the change in population relative to the year 2010:


#Calculate change of population relative to 2010:
for i in range(8):
    df_states["Delta_Population_201%d"%i] = ((df_states["POPESTIMATE201%d"%i] / df_states["POPESTIMATE2010"]) -1 ) * 100

#Specify slider columns:
slider_columns = ["Delta_Population_201%d"%i for i in range(8)]

#Specify slider-range (Maps "Delta_Population_2010" -> 2010, 
#                           "Delta_Population_2011" -> 2011, ...):
slider_range = range(2010, 2018)

#Make slider plot:
df_states.plot_bokeh(
    figsize=(900, 600),
    simplify_shapes=5000,
    slider=slider_columns,
    slider_range=slider_range,
    slider_name="Year", 
    colormap="Inferno",
    hovertool_columns=["STATE_NAME"] + slider_columns,
    title="Change of Population [%]")

US_States_4


Plot multiple geolayers

If you wish to display multiple geolayers, you can pass the Bokeh figure of a Pandas-Bokeh plot via the figure keyword to the next plot_bokeh() call:

import geopandas as gpd
import pandas_bokeh
pandas_bokeh.output_notebook()

# Read in GeoJSONs from URL:
df_states = gpd.read_file(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/states/states.geojson")
df_cities = gpd.read_file(
    r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/populated%20places/ne_10m_populated_places_simple_bigcities.geojson"
)
df_cities["size"] = df_cities.pop_max / 400000

#Plot shapes of US states (pass figure options to this initial plot):
figure = df_states.plot_bokeh(
    figsize=(800, 450),
    simplify_shapes=10000,
    show_figure=False,
    xlim=[-170, -80],
    ylim=[10, 70],
    category="REGION",
    colormap="Dark2",
    legend="States",
    show_colorbar=False,
)

#Plot cities as points on top of the US states layer by passing the figure:
df_cities.plot_bokeh(
    figure=figure,         # <== pass figure here!
    category="pop_max",
    colormap="Viridis",
    colormap_uselog=True,
    size="size",
    hovertool_string="""<h1>@name</h1>
                        <h3>Population: @pop_max </h3>""",
    marker="inverted_triangle",
    legend="Cities",
)

Multiple Geolayers


Point & Line plots:

Below, you can see an example that uses Pandas-Bokeh to plot point data on a map. The plot shows all cities with a population larger than 1,000,000. For point plots, you can select the marker as a keyword argument, since it is passed to bokeh.plotting.figure.scatter; the available marker types are listed in the lineplot section above:

gdf = gpd.read_file(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/populated%20places/ne_10m_populated_places_simple_bigcities.geojson")
gdf["size"] = gdf.pop_max / 400000

gdf.plot_bokeh(
    category="pop_max",
    colormap="Viridis",
    colormap_uselog=True,
    size="size",
    hovertool_string="""<h1>@name</h1>
                        <h3>Population: @pop_max </h3>""",
    xlim=[-15, 35],
    ylim=[30,60],
    marker="inverted_triangle");

Pointmap

In a similar way, GeoDataFrames with (multi)line shapes can also be drawn using Pandas-Bokeh.

Colorbar formatting:

If you want to display the numerical labels on your colorbar in a format other than the scientific one, you can pass one of the Bokeh number string formats, or an instance of one of the bokeh.models.formatters, to the colorbar_tick_format argument in the geoplot.

An example of using the string format argument:

df_states = gpd.read_file(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/states/states.geojson")

df_states["STATE_NAME_SMALL"] = df_states["STATE_NAME"].str.lower()

# pass in a string format to colorbar_tick_format to display the ticks as 10m rather than 1e7
df_states.plot_bokeh(
    figsize=(900, 600),
    category="POPESTIMATE2017",
    simplify_shapes=5000,    
    colormap="Inferno",
    colormap_uselog=True,
    colorbar_tick_format="0.0a")

colorbar_tick_format with string argument

An example of using the bokeh PrintfTickFormatter:

from bokeh.models import PrintfTickFormatter

df_states = gpd.read_file(r"https://raw.githubusercontent.com/PatrikHlobil/Pandas-Bokeh/master/docs/Testdata/states/states.geojson")

df_states["STATE_NAME_SMALL"] = df_states["STATE_NAME"].str.lower()

for i in range(8):
    df_states["Delta_Population_201%d"%i] = ((df_states["POPESTIMATE201%d"%i] / df_states["POPESTIMATE2010"]) -1 ) * 100

# pass a PrintfTickFormatter instance to colorbar_tick_format to display the ticks with 2 decimal places
df_states.plot_bokeh(
    figsize=(900, 600),
    category="Delta_Population_2017",
    simplify_shapes=5000,    
    colormap="Inferno",
    colorbar_tick_format=PrintfTickFormatter(format="%4.2f"))

colorbar_tick_format with bokeh.models.formatter_instance


Outputs, Formatting & Layouts

Output options

The pandas.DataFrame.plot_bokeh API has the following additional keyword arguments:

  • show_figure: If True, the resulting figure is shown (either in the notebook or exported and shown as an HTML file, see Basics). If False, None is returned. Default: True
  • return_html: If True, the method call returns an HTML string that contains all Bokeh CSS&JS resources and the figure embedded in a div. This HTML representation of the plot can be used for embedding the plot in an HTML document. Default: False

If you have a Bokeh figure or layout, you can also use the pandas_bokeh.embedded_html function to generate an embeddable HTML representation of the plot. This can be included in any valid HTML document (note that this is not possible directly with the HTML generated by the pandas_bokeh.output_file output option, because it includes an HTML header).
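
A minimal sketch of embedded_html, assuming fig is an existing Bokeh figure (e.g. the return value of a plot_bokeh call with show_figure=False):

import pandas_bokeh

#fig: a Bokeh figure, e.g. returned by df.plot_bokeh(..., show_figure=False)
html_embed = pandas_bokeh.embedded_html(fig)

#The returned string can be embedded into any valid HTML document:
with open("embedded_plot.html", "w") as f:
    f.write("<h1>My embedded plot</h1>" + html_embed)

For a more complete example using the return_html option, let us consider the following: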

#Import Pandas and Pandas-Bokeh (if you do not specify an output option, the standard is
#output_file):
import pandas as pd
import pandas_bokeh

#Create DataFrame to Plot:
import numpy as np
x = np.arange(-10, 10, 0.1)
sin = np.sin(x)
cos = np.cos(x)
tan = np.tan(x)
df = pd.DataFrame({"x": x, "sin(x)": sin, "cos(x)": cos, "tan(x)": tan})

#Make Bokeh plot from DataFrame using Pandas-Bokeh. Do not show the plot, but export
#it to an embeddable HTML string:
html_plot = df.plot_bokeh(
    kind="line",
    x="x",
    y=["sin(x)", "cos(x)", "tan(x)"],
    xticks=range(-20, 20),
    title="Trigonometric functions",
    show_figure=False,
    return_html=True,
    ylim=(-1.5, 1.5))

#Write some HTML and embed the HTML plot below it. For production use, please use
#Templates and the awesome Jinja library.
html = r"""
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}});
</script>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>

<h1> Trigonometric functions </h1>

<p> The basic trigonometric functions are:</p>

<p>$ sin(x) $</p>
<p>$ cos(x) $</p>
<p>$ tan(x) = \frac{sin(x)}{cos(x)}$</p>

<p>Below is a plot that shows them</p>

""" + html_plot

#Export the HTML string to an external HTML file and show it:
with open("test.html" , "w") as f:
    f.write(html)
    
import webbrowser
webbrowser.open("test.html")

This code will open up a web browser and show the following page. As you can see, the interactive Bokeh plot is embedded nicely into the HTML layout. The return_html option is ideal for use in a templating engine like Jinja.

Embedded HTML

Auto Scaling Plots

For single plots that have a large number of x-axis values, or for larger monitors, you can auto-scale the figure to the width of the entire Jupyter cell by setting the sizing_mode parameter.

df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot_bokeh(kind="bar", figsize=(500, 200), sizing_mode="scale_width")

Scaled Plot

The figsize parameter can be used to change the height and width as well as act as a scaling multiplier against the axis that is not being scaled.
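
For instance, under sizing_mode="scale_width" the rendered width follows the cell, while the figsize ratio fixes the aspect; a minimal sketch (the values are illustrative):

#Same width scaling, but the second plot renders twice as tall:
df.plot_bokeh(kind="bar", figsize=(500, 200), sizing_mode="scale_width")
df.plot_bokeh(kind="bar", figsize=(500, 400), sizing_mode="scale_width")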


Number formats

To change the format of numbers in the hovertool, use the number_format keyword argument. For documentation about the format to pass, have a look at the Bokeh documentation. Let us consider some examples for the number 3.141592653589793:

| Format | Output |
|--------|--------|
| 0      | 3      |
| 0.000  | 3.141  |
| 0.00 $ | 3.14 $ |

This number format will be applied to all numeric columns of the hovertool. If you want to make a very custom or complicated hovertool, you should probably use the hovertool_string keyword argument instead (see the example above). Below, we use the number_format parameter to format the stock price with 2 decimal digits and an additional $ sign.

import numpy as np

#Lineplot:
np.random.seed(42)
df = pd.DataFrame({
    "Google": np.random.randn(1000) + 0.2,
    "Apple": np.random.randn(1000) + 0.17
},
                  index=pd.date_range('1/1/2000', periods=1000))
df = df.cumsum()
df = df + 50
df.plot_bokeh(
    kind="line",
    title="Apple vs Google",
    xlabel="Date",
    ylabel="Stock price [$]",
    yticks=[0, 100, 200, 300, 400],
    ylim=(0, 400),
    colormap=["red", "blue"],
    number_format="1.00 $")

Number format

Suppress scientific notation for axes

If you want to suppress the scientific notation for axes, you can use the disable_scientific_axes parameter, which accepts one of "x", "y", "xy":

df = pd.DataFrame({"Animal": ["Mouse", "Rabbit", "Dog", "Tiger", "Elefant", "Wale"],
                   "Weight [g]": [19, 3000, 40000, 200000, 6000000, 50000000]})
p_scientific = df.plot_bokeh(x="Animal", y="Weight [g]", show_figure=False)
p_non_scientific = df.plot_bokeh(x="Animal", y="Weight [g]", disable_scientific_axes="y", show_figure=False,)
pandas_bokeh.plot_grid([[p_scientific, p_non_scientific]], plot_width = 450)

Number format


Dashboard Layouts

As shown in the Scatterplot example, combining plots with other plots or HTML elements is straightforward in Pandas-Bokeh due to the layout capabilities of Bokeh. The easiest way to generate a dashboard layout is using the pandas_bokeh.plot_grid method (which is an extension of bokeh.layouts.gridplot):

import pandas as pd
import numpy as np
import pandas_bokeh
pandas_bokeh.output_notebook()

#Barplot:
data = {
    'fruits':
    ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries'],
    '2015': [2, 1, 4, 3, 2, 4],
    '2016': [5, 3, 3, 2, 4, 6],
    '2017': [3, 2, 4, 4, 5, 3]
}
df = pd.DataFrame(data).set_index("fruits")
p_bar = df.plot_bokeh(
    kind="bar",
    ylabel="Price per Unit [€]",
    title="Fruit prices per Year",
    show_figure=False)

#Lineplot:
np.random.seed(42)
df = pd.DataFrame({
    "Google": np.random.randn(1000) + 0.2,
    "Apple": np.random.randn(1000) + 0.17
},
                  index=pd.date_range('1/1/2000', periods=1000))
df = df.cumsum()
df = df + 50
p_line = df.plot_bokeh(
    kind="line",
    title="Apple vs Google",
    xlabel="Date",
    ylabel="Stock price [$]",
    yticks=[0, 100, 200, 300, 400],
    ylim=(0, 400),
    colormap=["red", "blue"],
    show_figure=False)

#Scatterplot:
from sklearn.datasets import load_iris
iris = load_iris()
df = pd.DataFrame(iris["data"])
df.columns = iris["feature_names"]
df["species"] = iris["target"]
df["species"] = df["species"].map(dict(zip(range(3), iris["target_names"])))
p_scatter = df.plot_bokeh(
    kind="scatter",
    x="petal length (cm)",
    y="sepal width (cm)",
    category="species",
    title="Iris DataSet Visualization",
    show_figure=False)

#Histogram:
df_hist = pd.DataFrame({
    'a': np.random.randn(1000) + 1,
    'b': np.random.randn(1000),
    'c': np.random.randn(1000) - 1
},
                       columns=['a', 'b', 'c'])

p_hist = df_hist.plot_bokeh(
    kind="hist",
    bins=np.arange(-6, 6.5, 0.5),
    vertical_xlabel=True,
    normed=100,
    hovertool=False,
    title="Normal distributions",
    show_figure=False)

#Make Dashboard with Grid Layout:
pandas_bokeh.plot_grid([[p_line, p_bar], 
                        [p_scatter, p_hist]], plot_width=450)

Dashboard Layout

Using a combination of row and column elements (see also Bokeh Layouts) allows for a very easy general arrangement of elements. An alternative layout to the one above is:

p_line.plot_width = 900
p_hist.plot_width = 900

layout = pandas_bokeh.column(p_line,
                pandas_bokeh.row(p_scatter, p_bar),
                p_hist)

pandas_bokeh.show(layout)

Alternative Dashboard Layout

Release Notes

Release Notes can be found here.

Contributing to Pandas-Bokeh

If you wish to contribute to the development of Pandas-Bokeh you can follow the instructions on the CONTRIBUTING.md.


Author: PatrikHlobil
Source Code: https://github.com/PatrikHlobil/Pandas-Bokeh 
License: MIT License
