In this R Shiny article, we will learn how to create interactive Google Maps with R Shiny. Visualizing spatial data in R is often more challenging than it initially seems. There are a ton of moving parts: map center point, zoom, map type, marker size, and so on. And then there's cartographic design, and managing aesthetic appeal alongside the functional elements. It goes without saying, but you have to get all of them right to end up with a cohesive map.
That’s where this article comes in. We’ll show you how to build interactive Google Maps with R, and how to include them in R Shiny dashboards. With Google Maps in R, you can give your users a familiar map and retain customizability by placing it within a Shiny dashboard.
To follow along, you’ll have to set up a Google Cloud account. It’s fairly simple, but we’ll show you how.
As the name suggests, Google Maps are developed by Google and offered as an API through their Cloud Platform. You’ll have to register an account and set up billing. Yes, you do need to put in credit card information, but they won’t charge you a dime without your knowledge.
Once you have an account and a project set up, you can enable the APIs required for communication between R and Google. These are:
To enable them, click on the menu icon, go under APIs & Services and click on Library. You can search for the APIs from there and enable all three. Once done, you’ll see them listed on the main dashboard page:
Image 1 – Enabled APIs on the Google Cloud Platform
Yours should show zeros under Requests if you’re setting up a new Google Cloud account. The final step is to create a new API key. Go under Credentials and click on + Create credentials to get a new API key:
Image 2 – Creating Maps API key
Once created, copy the string code somewhere safe. You’ll need it in the following section, where you’ll create a couple of interactive Google Maps with R.
You’ll need the googleway library to follow along, so make sure to have it installed. Dataset-wise, you’ll use the US Airports dataset from Kaggle:
Image 3 – List of US Airports dataset
Copy the following code snippet to load in the library and the data — don’t forget to change the working directory and the API key:
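A minimal sketch of such a snippet, assuming the airports.csv file and the same placeholder directory and key used in the Shiny section later in the article:

```r
setwd("<your-dataset-directory>")
library(googleway)

api_key <- "<your-api-key>"

# Load the US Airports dataset downloaded from Kaggle
data <- read.csv("airports.csv")
head(data)
```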
Here are the first six dataset rows:
Image 4 – Head of the US Airports dataset
The dataset has everything needed for map visualization. Latitude and longitude are generally everything you need, but you’ll see how the additional columns can make the map richer.
Let’s start with the basics. We’ll add markers to every latitude and longitude combination from the dataset. You can use the google_map() function from the googleway package to specify the API key and the dataset, and then call the add_markers() function to add the points:
map <- google_map(
key = api_key,
data = data
)
map %>%
add_markers(lat = "LATITUDE", lon = "LONGITUDE")
Image 5 – A basic map of US airport locations
There are many built-in options you don’t have to worry about. You can zoom in, zoom out, change the map type, and even use Google Street View! You can also tweak how the markers look. We’ll explore how next.
The googleway library is quite restrictive with marker colors. You can go with red (default), blue, green, or lavender. To make matters worse, the package requires an additional column in the dataset for the color value. That’s useful if you’re using conditional logic to color the markers, but complete overkill if you aren’t.
data$COLOR <- "green"
map <- google_map(
key = api_key,
data = data
)
map %>%
add_markers(lat = "LATITUDE", lon = "LONGITUDE", colour = "COLOR")
Image 6 – Changing the marker color
Long story short, you’re quite limited in color selection if you’re using googleway to build interactive Google Maps with R.
Adding a mouse-over effect to your interactive Google Maps with R is strongly advised if you want to make maps users will love. The mouse-over effect determines what happens as the user moves the mouse cursor over a marker.
The code snippet below shows you how to add an airport name, stored in the AIRPORT column:
map <- google_map(
key = api_key,
data = data
)
map %>%
add_markers(lat = "LATITUDE", lon = "LONGITUDE", mouse_over = "AIRPORT")
Image 7 – Adding mouse over text
The mouse-over effect can become a nightmare if your map has many markers, or if there’s almost no spacing between them. If that’s the case, you might want to consider info windows instead.
Unlike mouse overs, info windows will activate only if the user clicks on a marker in your interactive map. Keep in mind — the info window also has to be closed manually by the user.
To spice things up, we’ll add an additional column that contains the airport name, city, and state. You can add and customize yours easily with the paste0() function.
data$INFO <- paste0(data$AIRPORT, " | ", data$CITY, ", ", data$STATE)
map <- google_map(
key = api_key,
data = data
)
map %>%
add_markers(lat = "LATITUDE", lon = "LONGITUDE", info_window = "INFO")
Image 8 – Adding info window on marker click
It’s hard to tell if info windows are better than mouse overs for this dataset, as markers aren’t packed extremely tightly. Go with the one that feels more natural.
You now know the basics of interactive Google Maps with R and googleway. Next, you’ll see how to embed them in an R Shiny dashboard.
You’ll now see how easy it is to add interactive Google Maps to R Shiny dashboards. The one you’ll see below allows the user to specify a list of states from which the airports are displayed.
Keep in mind:
- The UI uses the google_mapOutput() function as a placeholder for the interactive Google map.
- The server uses the renderGoogle_map() function to display the map. The data is prepared earlier as a reactive component to match the user-selected states and display the airport, city, and state info on mouse over.
Here’s the entire code:
setwd("<your-dataset-directory>")
library(shiny)
library(dplyr)
library(googleway)
api_key <- "<your-api-key>"
airports <- read.csv("airports.csv")
ui <- fluidPage(
tags$h1("US Airports"),
fluidRow(
column(
width = 3,
selectInput(inputId = "inputState", label = "Select state:", multiple = TRUE, choices = sort(unique(airports$STATE)), selected = "NY")
),
column(
width = 9,
google_mapOutput(outputId = "map")
)
)
)
server <- function(input, output) {
data <- reactive({
airports %>%
filter(STATE %in% input$inputState) %>%
mutate(INFO = paste0(AIRPORT, " | ", CITY, ", ", STATE))
})
output$map <- renderGoogle_map({
google_map(data = data(), key = api_key) %>%
add_markers(lat = "LATITUDE", lon = "LONGITUDE", mouse_over = "INFO")
})
}
shinyApp(ui = ui, server = server)
Image 9 – US Airports R Shiny dashboard
The googleway library configures the location and the zoom level automatically for you. The map was zoomed to New York by default and changed instantly when we added more states. Overall, we have a decent-looking map for 30-something lines of code.
Original article sourced at: https://appsilon.com
In this article, we will learn how to connect R Shiny to Postgres – the definitive guide. Managing database connections can be messy at times. It’s always easier to read and write local CSV files. That doesn’t mean it’s the right thing to do, as most production environments have data stored in one or multiple databases. As a data professional, you must know how to connect to different databases through different programming languages.
Today you’ll learn how to connect R and R Shiny to the Postgres database – one of the most well-known open-source databases. There’s no better way to learn than through a hands-on example, so you’ll also create a fully-working interactive dashboard with R Shiny.
We assume you already have the Postgres database installed, as well as a GUI management tool like pgAdmin. Installation is OS-specific, so we won’t go through that today. You can Google the installation steps – it shouldn’t take more than a couple of minutes.
As our data source, we’ll use the Quakes dataset from Kaggle. Download it as a CSV file – we’ll load it into the Postgres database shortly:
Image 1 – Quakes dataset from Kaggle
We’ve chosen this dataset deliberately because it has only four columns. This way, the time spent importing it into the Postgres database is minimal.
To start, create a new table using the following SQL query. We’ve named ours earthquakes:
CREATE TABLE earthquakes(
focal_depth SMALLINT,
latitude REAL,
longitude REAL,
richter REAL
)
Once created, right-click on the table name and select the Import/Export option:
Image 2 – Importing CSV file into Postgres database (1)
A new modal window will open. There are a couple of things to do here:
Here’s what the window should look like:
Image 3 – Importing CSV file into Postgres database (2)
We’re not quite done yet. Switch to the Columns tab to ensure all four are adequately listed:
Image 4 – Importing CSV file into Postgres database (3)
If everything looks as in the image above, you can click on the OK button. You should see a message telling you the dataset was successfully imported after a couple of seconds:
Image 5 – Importing CSV file into Postgres database (4)
Execute the following SQL command from the query editor to double-check:
SELECT * FROM earthquakes;
You should see the table populated with data:
Image 6 – Checking if import to Postgres database was successful
To conclude – we now have the dataset loaded into the Postgres database. But how can we connect to it from R and R Shiny? That’s what we’ll answer next.
You’ll need two packages to get started, so install them before doing anything else:
install.packages("RPostgreSQL")
install.packages("RPostgres")
You also need to know a couple of things to connect R to the Postgres database:
- Database name – postgres by default, but check by listing the databases in pgAdmin.
- Host – localhost, since we’re using the locally-installed database.
- Port – 5432 in our case; check yours by right-clicking the database server and checking the value under Properties – Connection – Port.
- Username and password – the credentials you set when installing Postgres.
Once you know these, connecting to the Postgres database is as simple as a function call:
library(DBI)
db <- "postgres"
db_host <- "localhost"
db_port <- "5432"
db_user <- "<your_user>"
db_pass <- "<your_password>"
conn <- dbConnect(
RPostgres::Postgres(),
dbname = db,
host = db_host,
port = db_port,
user = db_user,
password = db_pass
)
You won’t see any output, so how can you know the connection was established? Simple, just list the available tables:
dbListTables(conn)
Image 7 – Listing available tables in the Postgres database through R
The earthquakes table is the only one visible, so we’re on the right track. If you want to fetch data from it, use the dbGetQuery() function as shown below:
dbGetQuery(conn, "SELECT * FROM earthquakes LIMIT 5")
Image 8 – Fetching data from the Postgres database through R
And there you have it – concrete proof we have successfully established a connection to the Postgres database. Let’s see how to integrate it with R Shiny next.
We’ll create a simple R Shiny dashboard that renders a map of earthquakes of the specified magnitude. The dashboard will have a slider that controls the minimum magnitude required for the earthquake to be displayed on the map.
When the slider value changes, R Shiny connects to the Postgres database and fetches earthquakes with the currently specified magnitude and above.
The dashboard uses the Leaflet package for rendering the map and shows a custom icon as a marker.
library(DBI)
library(shiny)
library(glue)
library(leaflet)
# Load in the custom icon
icon_url <- "/Users/dradecic/Desktop/quake_icon.png"
quake_icon <- makeIcon(
iconUrl = icon_url,
iconWidth = 24, iconHeight = 24
)
ui <- fluidPage(
tags$h1("Earthquakes"),
# Slider to control the minimum magnitude
sliderInput(inputId = "magSlider", label = "Minimum magnitude:", min = 0, max = 10, value = 0, step = 0.1),
# Map output
leafletOutput(outputId = "map")
)
server <- function(input, output) {
data <- reactive({
# Connect to the DB
conn <- dbConnect(
RPostgres::Postgres(),
dbname = "postgres",
host = "localhost",
port = "5432",
user = "<your_username>",
password = "<your_password>"
)
# Get the data
quakes <- dbGetQuery(conn, glue("SELECT * FROM earthquakes WHERE richter >= {input$magSlider}"))
# Disconnect from the DB
dbDisconnect(conn)
# Convert to data.frame
data.frame(quakes)
})
# Render map
output$map <- renderLeaflet({
leaflet(data = data()) %>%
addTiles() %>%
addMarkers(~longitude, ~latitude, label = ~richter, icon = quake_icon) %>%
addProviderTiles(providers$Esri.WorldStreetMap)
})
}
shinyApp(ui = ui, server = server)
Image 9 – R Shiny dashboard based on a Postgres database
You have to admit – developing PoC dashboards like the one above requires almost no effort. R Shiny does all the heavy lifting for you, and probably the most challenging part is maintaining the database connection.
You could extract the entire table outside the server function, and that would make the dashboard faster. We’ve decided to establish a new connection every time the input changes to show you how to open and close database connections, and how to use the glue package for better string interpolation.
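One design note: glue interpolates the slider value directly into the SQL string. DBI also supports parameterized queries, which keep values out of the SQL text entirely and avoid injection risks. A sketch, assuming the same connection and input as above:

```r
# $1 is a Postgres placeholder; DBI sends the value separately
quakes <- dbGetQuery(
  conn,
  "SELECT * FROM earthquakes WHERE richter >= $1",
  params = list(input$magSlider)
)
```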
In this R Shiny article, we will learn how to use dashboards in R Shiny. Data can be transformative when loaded through business intelligence software for strategic decision-making. The insights generated empower businesses to improve their processes, initiatives, and innovations. But can we improve the way business decision-makers access these insights? Enter the world of dashboards in R Shiny and see how RStudio and open source are revolutionizing business intelligence.
Dashboards are tools that provide up-to-date information, using visuals to communicate the stories behind the data. They guide decision-makers through the relationships of complex, big data. They present visuals in a practical order enabling quicker understanding and appreciation of data to the business.
Shiny is an open source R package from the team at RStudio, PBC. RStudio built Shiny to provide an elegant and easy-to-use web framework for developing web applications in R. Shiny enables R users to create incredible apps, interactive maps, and dashboards. And you don’t need advanced web dev skills to build it!
“Shiny combines the computational power of R with the interactivity of the modern web.” – RStudio
Share the story behind your data with quick builds and a variety of hosting and deployment options from RStudio. Deploy to the cloud to services like AWS with RStudio Workbench, use open source options for private projects, or use RStudio Connect to upgrade your Shiny Dashboard for enterprise applications.
Shiny dashboards let you access a complete web application framework within the R environment. Easily turn your work in R, analyses and visualizations, machine learning models, and more into web applications that deliver value to businesses. As a complete application, end-users don’t need an understanding of R to use it. Deliver a complete, easy-to-use, and interactive product that improves the way you do business.
Appsilon builds R Shiny applications for Fortune 500 companies. R and Shiny aren’t the best fit for everyone, but for those looking to solve tough problems with big data, consider adding R Shiny to your tech stack. Explore why companies are switching to R Shiny and why you should too.
Shiny’s web framework enables easy customization of the dashboard using custom HTML, CSS, SCSS, Javascript, and so on. This level of customization lets you create a unique, branded dashboard that’s not possible with other BI software suites. Add colors, logos, fonts, and more that better represent your business.
Shiny is open source and cost-friendly compared to its counterparts like Power BI and Tableau. You can explore a comprehensive look comparing Power BI to Shiny and Tableau to Shiny on the Appsilon blog.
Don’t want to part with Tableau? Don’t worry – you can now create Tableau Dashboard extensions with R Shiny.
RStudio provides easy-to-use deployment options that range from completely free to low-cost options with additional benefits including security and authentication, scaling, and performance reviews. The ease of deployment as a standalone app to analyze, summarize, and visualize your data story without breaking your budget is hard to beat.
Shiny as a development platform for dashboards gives you access to a wealth of R packages for data science like the Tidyverse. You can access advanced graphical features for data and model representation. Embed these visuals on Shiny dashboards and add interactivity and responsiveness. You can achieve this through an interface enabled in R to connect with plotting libraries based on JavaScript.
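As one concrete illustration of that interface (plotly is our choice of JavaScript-backed plotting library here, not one named by the article):

```r
library(shiny)
library(plotly)

ui <- fluidPage(
  plotlyOutput(outputId = "scatter")
)

server <- function(input, output) {
  # renderPlotly embeds an interactive JavaScript chart in the dashboard
  output$scatter <- renderPlotly({
    plot_ly(mtcars, x = ~wt, y = ~mpg, type = "scatter", mode = "markers")
  })
}

shinyApp(ui = ui, server = server)
```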
Looking at the architecture of the Shiny dashboard, the use of functions, modules, and packages, and rapid prototyping help in the management and organization of code. You can create smaller, manageable, and testable components of the dashboards and simple source code controls.
Appsilon has built Shiny dashboards for Fortune 500 companies, NGOs, non-profits, and governments. Over the years they’ve built a showcase of professional and unique Shiny dashboards. RStudio also houses contributions from the Shiny app developer community on their Shiny Gallery. I’ve selected a few examples below:
Check out these 5 Shiny dashboards developed for enterprises and decision-makers.
Christian Luz developed the Shiny app below in “the context of a 1339-bed academic tertiary referral hospital to handle data of more than 180,000 admissions.” The app contains 17 unique criteria from which users can filter patient groups. Users can investigate the use of antimicrobials, microbiological diagnostics, and antimicrobial resistance. Easily stratify and group the results of the investigation “to compare defined patient groups based on individual patient features.”
The Shiny dashboard below shows the New Zealand Trade Intelligence Dashboard. The dashboard shows up-to-date annual information on trade by commodities, services, and trading partners. Wei Zhang used intuitive, interactive graphs and tables to provide powerful functionality. Users can generate their own reports for different commodities and market groups. Wei Zhang developed and designed the dashboard for both PCs and mobile devices.
A Shiny application project consists of a directory containing an R script saved as app.R. The script contains code describing two components – the user interface object and the server function – both of which are passed as arguments to the shinyApp() function, resulting in a Shiny app object: either a web app or a dashboard.
Note that these two components – the UI/server pair – can also be split into two separate files, namely ui.R and server.R. The project folder may also contain additional files relating to data, scripts, styles, or other resources required for deployment.
The ui.R user interface file contains code that defines how the dashboard looks, including the input and output controls. The server.R file defines how the dashboard works, sending outputs back and reacting to input values. Shiny uses reactive programming to automatically update outputs when inputs change while the dashboard is running.
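The structure described above can be sketched as a minimal app.R (a toy example, not one of the templates discussed below):

```r
library(shiny)

# UI object: defines how the dashboard looks
ui <- fluidPage(
  sliderInput(inputId = "n", label = "Observations:", min = 10, max = 100, value = 50),
  plotOutput(outputId = "hist")
)

# Server function: defines how the dashboard works
server <- function(input, output) {
  # Reactive programming: the plot re-renders whenever input$n changes
  output$hist <- renderPlot(hist(rnorm(input$n)))
}

shinyApp(ui = ui, server = server)
```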
Appsilon recently released a bundle of Shiny dashboard templates that are free to use and customizable. The best and most fun part is that it’s easy to customize and deploy within minutes, no matter your development environment.
Appsilon selected the Shiny dashboards based on elements that are customizable and relevant to many projects. The goal is to encourage R Shiny adoption for dashboards and other applications by showcasing the ease of use and customization. The bundle contains four applications that we can select from based on our project.
First, we download the templates as a zip file from Appsilon Shiny Dashboard Templates. Fill out the form on the page to download them to your local computer. Extract the zip file and save it in your preferred directory.
For this demo, we’ll use the Destination Overview. Let’s open each of the projects in RStudio. You might get a response like the one below:
It means we don’t have all dependencies used during the development of the templates. To fix this, we need to restore the R environment from the `renv` folder by running the command below from the R console:
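The command in question is renv’s standard restore call:

```r
renv::restore()
```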
Executing this code takes a little while to complete. There might be several packages to download and install to get the environment right. Once we have successfully completed this step, we can run the app by opening either ui.R or server.R and clicking the Run App button. The resulting dashboard is what you see below:
Finally, we can deploy the resulting R Shiny Dashboard freely on shinyapps.io. If you don’t have one already, you’ll need to register an account using your preferred email, Google, or GitHub credentials.
You’ll need to install the `rsconnect` package in RStudio and using the console, add your shinyapps.io credentials (name, token, and secret key) for authentication through the console. Use the code below to complete this process:
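The standard rsconnect setup looks roughly like this, with your own shinyapps.io credentials in place of the placeholders:

```r
install.packages("rsconnect")

# Credentials are shown on your shinyapps.io account page
rsconnect::setAccountInfo(
  name = "<account-name>",
  token = "<token>",
  secret = "<secret>"
)

# Deploy from the project directory
rsconnect::deployApp()
```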
In summary, dashboards play a key role in business intelligence and analytics. Dashboards are a window to understand and track business metrics through different sources, sizes, and types of data. Dashboards simplify the decision-making process and collaboration among users. Data scientists don’t need to spend time explaining the results and consumers quickly understand the data story. And the availability of Shiny dashboards on the web or mobile platforms gives users more mobility and accessibility.
Shiny can also be neatly integrated into the development pipeline of data professionals using R. Shiny dashboards are versatile, interactive, and easily customized to meet the specific needs of each customer. This is in part due to its web framework that allows the use of web technologies like HTML, CSS, SCSS, JavaScript, and so on. Using Shiny enables the use of modularised codes and functions, rapid prototyping, and easy means of managing the dashboards through smaller components.
In this TailwindCSS tutorial, we will learn about How to Implement TailwindCSS in Shiny. At Appsilon we believe that a good-looking, well-designed interface is the key to making your Shiny Dashboard a success. TailwindCSS is a utility-first CSS framework created by one of the authors of Refactoring UI. As I really enjoyed the book, I decided to give it a shot and try using TailwindCSS in Shiny. The result of that is the Bee Colony Losses dashboard.
Today you’ll learn how to quickly configure TailwindCSS and how it can help you in building better-looking user interfaces in your Shiny dashboards.
Tailwind provides a Play CDN script, which allows you to use Tailwind right in your browser – that means no build steps, and no need to install any additional tools such as nodejs. To use it in Shiny, you just need to include the script in your UI definition:
library(shiny)
ui <- div(
tags$script(src = "https://cdn.tailwindcss.com")
)
server <- function(input, output, session) {
}
shinyApp(ui, server)
That’s it! Now we can start using TailwindCSS classes to prototype the UI of our dashboard. Keep in mind that the Play CDN script is not recommended for production usage, but in the case of prototyping, it is a good way to quickly get started.
TailwindCSS differs from other CSS frameworks that you might know e.g. Bootstrap or Fomantic UI. You do not have specific classes for UI components such as buttons or forms. What TailwindCSS offers is a set of utility classes that you combine to create your own designs. This means that two applications built with TailwindCSS are probably not going to look similar (see their Build Anything section on their website). Ultimately, it allows you to avoid having a typical Bootstrap look and stand out from other apps in terms of design.
Let’s assume that you want to make one of your UI components have rounded corners. Instead of defining your own class specifying the border-radius property, you can just apply the rounded utility class from TailwindCSS. Let’s see it in action with Shiny!
library(shiny)
ui <- div(
tags$script(src = "https://cdn.tailwindcss.com"),
div(
"I am a rounded box!",
class = "rounded bg-gray-300 w-64 p-2.5 m-2.5"
)
)
server <- function(input, output, session) {
}
shinyApp(ui, server)
As you can see by applying a couple of TailwindCSS classes, we were able to quickly define a component with rounded borders, a background color, a fixed width as well as padding and margins.
I know this approach to styling UI might seem weird, but it has its advantages. The utility classes provide you with good defaults – for example, the box shadow utility offsets your shadows by default, which makes shadows look more natural (see article).
Moreover, TailwindCSS comes with predefined scales, so if you want to add a larger shadow instead of using the shadow-md class, you can use larger equivalents e.g. shadow-lg, shadow-2xl. This narrows down your choices and prevents you from agonizing whether a shadow looks better with a 1px or 2px offset and as a result allows you to design faster. By using a predefined scale, you make your UI more consistent and follow the Law of Similarity.
The downside of using TailwindCSS in Shiny is that you might have trouble using base Shiny components such as selectInput and textInput. You might need to reimplement some of the JavaScript-based logic to set up the communication between your custom styled inputs and Shiny.
You only need one line of code to start playing around with TailwindCSS in your Shiny applications. TailwindCSS provides predefined systems, scales, and good defaults that allow you to quickly design a good-looking and consistent UI.
In this R Shiny tutorial, we'll dive into R Shiny Caching: the top 3 ways to cache interactive elements in R Shiny. Are your Shiny dashboards getting slow? Maybe it’s time to explore some R Shiny caching options. We’ve been there – a client wants dozens of charts on a single dashboard page, and wants it running smoothly. It’s easier said than done, especially if there’s a lot of data to show.
Are you a newcomer to R and R Shiny? Here’s how you can make a career out of R Shiny development.
Today we’ll explore the top 3 methods of R Shiny caching to increase the speed and responsiveness of your dashboards.
The renderCachedPlot() function renders a reactive plot with plot images cached to disk. The function has many arguments, but there are two you must know:
- expr – an expression that generates a plot, but doesn’t take reactive dependencies the way the renderPlot() function does. It is re-executed only when the cache key changes.
- cacheKeyExpr – an expression that upon evaluation returns an object which will be serialized and hashed using the digest() function to generate a string that will be used as a cache key. If the cache key is the same as the previous time, Shiny assumes the plot is the same and can be retrieved from the cache.
When it comes to cache scoping, there are again multiple options. You can share the cache across multiple sessions (cache = "app"), which is the default behavior, or you can limit caching to a single session (cache = "session"). Either way, the cache will be 10 MB in size and will be stored in memory, using a memoryCache object.
To change any of these settings, you can call shinyOptions() at the top of the file, as you’ll see shortly.
Looking to speed up your R Shiny app? Jump in the fast lane with our definitive guide to speeding up R Shiny.
Let’s see how to implement R Shiny caching with the renderCachedPlot() function. The code snippet below shows you how to make an entire dataset a part of the cache key. For reference, the example was taken from the official documentation page.
library(shiny)
shinyOptions(cache = cachem::cache_disk("./myapp-cache"))
mydata <- reactiveVal(data.frame(x = rnorm(400), y = rnorm(400)))
ui <- fluidPage(
sidebarLayout(
sidebarPanel(
sliderInput("n", "Number of points", 50, 400, 100, step = 50),
actionButton("newdata", "New data")
),
mainPanel(
plotOutput("plot")
)
)
)
server <- function(input, output, session) {
observeEvent(input$newdata, {
mydata(data.frame(x = rnorm(400), y = rnorm(400)))
})
output$plot <- renderCachedPlot({
Sys.sleep(2)
d <- mydata()
seqn <- seq_len(input$n)
plot(d$x[seqn], d$y[seqn], xlim = range(d$x), ylim = range(d$y))
},
cacheKeyExpr = {
list(input$n, mydata())
}
)
}
shinyApp(ui = ui, server = server)
Once you launch the app, you’ll see the following:
Image 1 – Basic R Shiny app that uses renderCachedPlot
At the surface level, everything looks normal. But behind the scenes, R saves the cache files to the myapp-cache directory:
Image 2 – Contents of the myapp-cache directory
And that’s how you can cache the contents of the plot. Let’s explore another R Shiny caching option.
The bindCache() function adds caching to reactive() expressions and render functions. It requires one or more expressions that are used to generate a cache key, which is used to determine whether a computation has occurred before and can be retrieved from the cache.
By default, bindCache() shares a cache with all user sessions connected to the application. It’s also possible to scope the cache to a session, just as we’ve seen in the previous section. The whole thing, at least setup-wise, works pretty much the same as the first option explored today.
Need to improve your Shiny dashboards? Explore Appsilon’s open-source packages to level up your performance and design.
We’ll demonstrate how bindCache() works by examining an example from rdrr.io. In the simplest terms, the example allows the user to specify two numbers with sliders and perform multiplication with a button.
The result is then displayed below. There’s nothing computationally expensive going on, but R sleeps for two seconds after the action button is clicked.
This is where R Shiny caching comes in. It caches the results for two given numbers, so each time you multiply the same numbers the calculation is done immediately:
library(shiny)
library(magrittr)
shinyOptions(cache = cachem::cache_disk("./bind-cache"))
ui <- fluidPage(
sliderInput("x", "x", 1, 10, 5),
sliderInput("y", "y", 1, 10, 5),
actionButton("go", "Go"),
div("x * y: "),
verbatimTextOutput("txt")
)
server <- function(input, output, session) {
r <- reactive({
message("Doing expensive computation...")
Sys.sleep(2)
input$x * input$y
}) %>%
bindCache(input$x, input$y) %>%
bindEvent(input$go)
output$txt <- renderText(r())
}
shinyApp(ui = ui, server = server)
Here’s what the corresponding Shiny app looks like:
Image 3 – R Shiny app that uses bindCache()
As before, cached results are saved to a folder on disk – but this time, to a folder named bind-cache. Here are the contents after a couple of calculations:
Image 4 – Contents of the bind-cache directory
Easy, right? Let’s see how our last caching option for R Shiny works.
The memoise R package is used for so-called memoisation of functions. In plain English, it caches the results of a function so that when you call it again with the same arguments, it returns the previously computed value.
To get started, you’ll first have to install this R package:
install.packages("memoise")
The approach to caching is a bit different than before. You’ll first want to specify where the cached files will be saved. We’ve used the cache_filesystem() function for the task, but there are others available. Then, you’ll want to write your R functions, followed by a call to memoise() with two arguments – your R function and the location where cached files should be stored.
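Before wiring it into Shiny, the pattern is easier to see in isolation. Here’s a minimal sketch of that workflow; the function name slow_square and the demo-cache folder are just illustrative choices, not anything from the app below:

```r
library(memoise)

# An intentionally slow function - stands in for an expensive computation
slow_square <- function(x) {
  Sys.sleep(1)
  x^2
}

# Cache results on disk, keyed by the function's arguments
cd <- cache_filesystem("./demo-cache")
fast_square <- memoise(slow_square, cache = cd)

fast_square(4)  # first call: takes about a second
fast_square(4)  # same argument again: answered from the cache, near-instant
</antml```

Note that calling fast_square(5) would still be slow the first time – caching is per unique combination of arguments.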
From there, the story is pretty much identical to before. We’ve slightly modified the R Shiny app from the previous section – it’s now a dashboard with a sidebar layout and a bit more verbose declaration of elements and their parameters:
library(shiny)
library(memoise)

cd <- cache_filesystem("./r-memoise")

multiply_nums <- function(x, y) {
  Sys.sleep(2)
  return(x * y)
}

mmn <- memoise(multiply_nums, cache = cd)

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      sliderInput(inputId = "x", label = "X", min = 1, max = 10, value = 5),
      sliderInput(inputId = "y", label = "Y", min = 1, max = 10, value = 5),
      actionButton(inputId = "calculate", label = "Multiply")
    ),
    mainPanel(
      div("X * Y = "),
      verbatimTextOutput("txt")
    ),
    fluid = TRUE
  )
)

server <- function(input, output, session) {
  r <- eventReactive(input$calculate, {
    mmn(x = input$x, y = input$y)
  })

  output$txt <- renderText(r())
}

shinyApp(ui = ui, server = server)
Once launched, the R Shiny app looks as follows:
Image 5 – R Shiny app that uses the memoise package
Give it a go and multiply a couple of numbers. As before, if you repeat a calculation – that is, call the mmn() function (which wraps multiply_nums()) with previously seen arguments – the result is fetched from the cache.
As expected, cached results are saved to the folder:
Image 6 – Contents of the r-memoise directory
And that’s how you can use the memoise R package to cache the results of R functions, all wrapped in R Shiny. Let’s wrap things up next.
Today you’ve seen three basic examples of how R Shiny caching works. Truth be told, we’ve only scratched the surface, but it’s just enough to get you started. Diving deeper would require a dedicated article for each caching option, which is something we might do in the future.
Original article sourced at: https://appsilon.com
In this article, we will learn what R shinyHeatmap is and how to get started with shinyHeatmap in R Shiny apps. User session monitoring through heatmaps is huge. It allows you to see what works and what doesn’t for your R Shiny app, and generally how users interact with it. Also, it helps you and your organization build a user adoption strategy with user behavior analytics. So, how can you get started for free? The answer is simple – with the R shinyHeatmap package.
We’ve previously explored the R Shiny Hotjar option for monitoring user behavior. But that option leaves empty heatmaps for some. Why? Bugs that are notoriously difficult to track and resolve. Also, Hotjar is a freemium service, which means you’ll have to pay as soon as you exceed 35 sessions per day (as of July 2022). R shinyHeatmap is different – it’s easier to get started with and is completely free of charge.
What is R shinyHeatmap?
The shinyHeatmap R package aims to provide a free and local alternative to more advanced user session monitoring platforms, such as Hotjar. It provides just enough features to let you know how users use your Shiny dashboards.
As of writing this post, shinyHeatmap isn’t available on CRAN. To install it, you’ll first have to install the devtools package through the R console:
install.packages("devtools")
Once installed, pull and install the shinyHeatmap package from GitHub:
devtools::install_github("RinteRface/shinyHeatmap")
That’s actually all you need to get started. We’ll cover the hands-on part in a minute, but first, let’s discuss when shinyHeatmap should be the tool of your choice, and when you should consider more advanced alternatives.
If you want a tool that looks good on paper and cost isn’t a concern – look no further than Hotjar. However, Hotjar can be more difficult to set up. You must have your R Shiny app deployed, which isn’t handy if you’re just starting out.
New to R Shiny app deployment? Here are top 3 methods you must know.
Further, you have to register an account and embed a tracking code in your app. After doing so, you have to redeploy the app. All in all, it’s not too complicated, but there are some bugs.
Hotjar sessions take some time to appear in your dashboard – if they appear at all. We at Appsilon and many others have found Hotjar to be buggy depending on the framework you’re using. Many users have reported that sessions aren’t displayed in the dashboards, and that’s a deal-breaking issue. For example, if you’re exploring options for Shiny for Python – some web frameworks are altogether not compatible with Hotjar like Electron.
Also, we have to discuss pricing. If you’re an indie developer, paying $31 a month when billed annually is just too expensive. It’s a negligible cost for a full-scale organization, but still, that’s the most basic premium plan allowing you to record 100 sessions per day.
Moral of the story: Hotjar is amazing if you can make it work and if you can afford it – if being the crucial part.
The R shinyHeatmap package is different. It only requires a www folder for saving logs, and what it collects is barebones. The logs are collected in JSON format, where each interaction is a JSON object containing the X and Y coordinates of an event. Because of this, you can rest assured that there won’t be any data privacy concerns. No actual user data is collected, only the coordinates of users’ clicks, for the purpose of aggregation and visual interpretation.
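To make that format concrete, here’s a small sketch of reading such a log with jsonlite and binning the coordinates. The field names and sample values are assumptions for illustration, not the package’s documented schema:

```r
library(jsonlite)

# Hypothetical log content: one JSON object per recorded interaction
log_json <- '[
  {"x": 120, "y": 340},
  {"x": 122, "y": 338},
  {"x": 480, "y": 90}
]'

clicks <- fromJSON(log_json)

# Bin clicks into 100x100-pixel cells - the kind of aggregation
# a heatmap overlay is built from
clicks$cell_x <- floor(clicks$x / 100)
clicks$cell_y <- floor(clicks$y / 100)
counts <- aggregate(list(n = clicks$x), by = clicks[, c("cell_x", "cell_y")], FUN = length)
counts  # the two nearby clicks fall into the same cell
</antml```

Nothing in the log identifies the user – it’s coordinates only, which is exactly why the privacy story is so simple.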
This package is free and open-source. There are no fancy features such as events and identify APIs that come with a premium Hotjar plan, but that’s okay for most users.
If you’re only interested in heatmaps, shinyHeatmap is the way to go.
Long story short: choose shinyHeatmap if you want barebones logs and dependable heatmap visualizations, free of charge.
Next, let’s see how you can get started configuring the shinyHeatmap package.
You already have shinyHeatmap installed, so now let’s begin with the fun part. For the Shiny dashboard of choice, we’ll reuse the clustering app from our Tools for Monitoring User Adoption article.
Here’s the source code:
library(shiny)

ui <- fluidPage(
  headerPanel("Iris k-means clustering"),
  sidebarLayout(
    sidebarPanel(
      selectInput(
        inputId = "xcol",
        label = "X Variable",
        choices = names(iris)
      ),
      selectInput(
        inputId = "ycol",
        label = "Y Variable",
        choices = names(iris),
        selected = names(iris)[[2]]
      ),
      numericInput(
        inputId = "clusters",
        label = "Cluster count",
        value = 3,
        min = 1,
        max = 9
      )
    ),
    mainPanel(
      plotOutput("plot1")
    )
  )
)

server <- function(input, output, session) {
  selectedData <- reactive({
    iris[, c(input$xcol, input$ycol)]
  })

  clusters <- reactive({
    kmeans(selectedData(), input$clusters)
  })

  output$plot1 <- renderPlot({
    palette(c(
      "#E41A1C", "#377EB8", "#4DAF4A", "#984EA3",
      "#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999"
    ))
    par(mar = c(5.1, 4.1, 0, 1))
    plot(selectedData(),
      col = clusters()$cluster,
      pch = 20, cex = 3
    )
    points(clusters()$centers, pch = 4, cex = 4, lwd = 4)
  })
}

shinyApp(ui = ui, server = server)
And here’s what the app looks like:
Image 1 – Clustering R Shiny application
Including shinyHeatmap is a two-step process:
1. In ui() – wrap fluidPage() with a call to with_heatmap().
2. In server() – add a call to record_heatmap() at the top.
If you want to copy and paste, here’s the code:
library(shiny)
library(shinyHeatmap)

ui <- with_heatmap(
  fluidPage(
    headerPanel("Iris k-means clustering"),
    sidebarLayout(
      sidebarPanel(
        selectInput(
          inputId = "xcol",
          label = "X Variable",
          choices = names(iris)
        ),
        selectInput(
          inputId = "ycol",
          label = "Y Variable",
          choices = names(iris),
          selected = names(iris)[[2]]
        ),
        numericInput(
          inputId = "clusters",
          label = "Cluster count",
          value = 3,
          min = 1,
          max = 9
        )
      ),
      mainPanel(
        plotOutput("plot1")
      )
    )
  )
)

server <- function(input, output, session) {
  record_heatmap()

  selectedData <- reactive({
    iris[, c(input$xcol, input$ycol)]
  })

  clusters <- reactive({
    kmeans(selectedData(), input$clusters)
  })

  output$plot1 <- renderPlot({
    palette(c(
      "#E41A1C", "#377EB8", "#4DAF4A", "#984EA3",
      "#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999"
    ))
    par(mar = c(5.1, 4.1, 0, 1))
    plot(selectedData(),
      col = clusters()$cluster,
      pch = 20, cex = 3
    )
    points(clusters()$centers, pch = 4, cex = 4, lwd = 4)
  })
}

shinyApp(ui = ui, server = server)
Once launched, the Shiny dashboard doesn’t look any different from before. We’ve played around with the inputs to make the image somewhat different:
Image 2 – Clustering dashboard after adding shinyHeatmap calls
Click around the dashboard a couple of times. Display different variables on the X and Y axes, and tweak the number of clusters. Everything you do will get saved to the www folder. To be more precise, the events are split by minute, with each minute represented by a single JSON file:
Image 3 – Contents of the www directory
Once opened, a single JSON file looks like this:
Image 4 – Contents of a single JSON log file
But how can you use these logs to visualize the usage heatmap? That’s what we’ll discuss in the following section.
To render a heatmap over the dashboard, you’ll have to replace record_heatmap() with download_heatmap() in the server() function. We can tweak the output by changing the parameters, but more on that later.
Keep in mind: because you’ve removed the call to record_heatmap(), new events aren’t recorded.
Anyhow, here’s the code for the updated server() function:
server <- function(input, output, session) {
  download_heatmap()

  selectedData <- reactive({
    iris[, c(input$xcol, input$ycol)]
  })

  clusters <- reactive({
    kmeans(selectedData(), input$clusters)
  })

  output$plot1 <- renderPlot({
    palette(c(
      "#E41A1C", "#377EB8", "#4DAF4A", "#984EA3",
      "#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999"
    ))
    par(mar = c(5.1, 4.1, 0, 1))
    plot(selectedData(),
      col = clusters()$cluster,
      pch = 20, cex = 3
    )
    points(clusters()$centers, pch = 4, cex = 4, lwd = 4)
  })
}
Image 5 – Heatmap overlaying the R Shiny app
Neat, isn’t it? You can also click on the Heatmap button to show events in selected time intervals. By default, all logs are aggregated and shown, but you can show a heatmap for a time range only:
Image 6 – Playing around with the Heatmap UI
In case you don’t want the Heatmap button, you can set show_ui = FALSE in the call to download_heatmap(). By doing so, the heatmap image will be downloaded:
download_heatmap(show_ui = FALSE)
Image 7 – Downloading the heatmap as an image
The image includes events from all logs, and here’s what it looks like on our end:
Image 8 – Downloaded heatmap image
Is that it? Well, no. You can also modify how the heatmap looks by changing a couple of function parameters.
The download_heatmap() function accepts an options parameter. It is a list in which you can tweak the size, opacity, blur, and color of your heatmaps.
Here’s an example – we’ll slightly increase the size and change the color gradient altogether:
download_heatmap(
  options = list(
    radius = 20,
    maxOpacity = 0.8,
    minOpacity = 0,
    blur = 0.75,
    gradient = list(
      ".5" = "green",
      ".8" = "red",
      ".95" = "black"
    )
  )
)
Here’s what the heatmap looks like:
Image 9 – Heatmap with updated visuals
And that’s how you can install, configure, use, and customize the shinyHeatmap package in R Shiny. Let’s make a short recap before you get started with shinyHeatmap for your user testing.
Original article sourced at: https://appsilon.com
In this R Shiny post, we will learn about the Observe function in R Shiny and how to implement a reactive observer. It’s easy to get down the basics of R and Shiny, but the learning curve becomes significantly steeper after a point. Reactive values? Reactive events? Reactive observers? Let’s face it – it’s easy to get lost. We’re here to help.
Today you’ll learn all about the Observe function in R Shiny – what it is, what it does, and how to use it in practice with two hands-on examples. We’ll kick things off with a bit of theory, just so you can understand why reactive observers are useful.
The expression passed into the observe() function is triggered every time one of the inputs changes. If you remember only a single sentence from this article, that should be the one.
So, what makes the observe() function different from regular reactive expressions? Well, observe() yields no output and can’t be used as an input to other reactive expressions. Reactive observers are only useful for their “side effects”, such as I/O, triggering a pop-up, and so on.
Is your R Shiny app ready for deployment? Here are 3 ways to share R Shiny apps.
As mentioned before, observers re-execute as soon as their dependencies change, making them use a concept known as eager evaluation. On the other end, reactive expressions are lazy-evaluated, meaning they have to be called by someone else to re-execute.
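The eager-versus-lazy distinction can be sketched in plain R. This is only an analogy with made-up names – Shiny’s real reactive graph tracks dependencies and triggers automatically:

```r
event_log <- character(0)

# "Reactive expression" analogue: lazy - defining it does no work yet
expensive <- function() {
  event_log <<- c(event_log, "reactive ran")
  2 + 2
}

# "Observer" analogue: eager - its side effect fires the moment it's triggered
notify <- function(value) {
  event_log <<- c(event_log, paste("observer saw", value))
}

notify(10)    # runs immediately
expensive()   # the lazy expression runs only when its value is requested

event_log     # "observer saw 10" was logged before "reactive ran"
</antml```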
You can check all the parameters the observe() function accepts in the official documentation, but we won’t go over that here. Instead, we’ll dive into two hands-on examples.
To demonstrate how the Observe function in R Shiny works, we’ll do something you’d never do in real life. But there’s a good reason why we’re doing it. It perfectly demonstrates what the observe() function does, and how a reactive observer works.
We already mentioned that the function is triggered every time one of the inputs changes. So, we can declare input and attach it to an observer to monitor what happens as we mess around with it.
That’s exactly what the code snippet below does. There are two UI elements – sliderInput and textOutput. Inside the server() function, we have attached a reactive observer to the sliderInput, and each time it changes, we update the text contents of the textOutput.
There are easier and more intuitive ways to implement the same behavior, but you get the gist – each time the input changes we trigger some code. It’s irrelevant which code we trigger, but in this case, the code just changes the text:
library(shiny)

ui <- fluidPage(
  sliderInput(inputId = "slider", label = "Select a number:", min = 1, max = 100, value = 10, step = 1),
  textOutput(outputId = "result")
)

server <- function(input, output) {
  observe({
    input$slider
    output$result <- renderText({
      input$slider
    })
  })
}

shinyApp(ui = ui, server = server)
Below you’ll find a GIF demonstrating how the R Shiny app works:
Image 1 – Basic example of an R Shiny reactive observer
That was easy, right? Next, we’ll go over a slightly more complicated example.
All dashboards have dropdown menus, or select inputs, as R Shiny calls them. The premise is simple – you select a value, and then charts/tables on a dashboard are re-drawn.
But how can you introduce dependencies between dropdown menus? In other words, how can you base what’s shown as options for the second dropdown menu based on the value selected from the first menu?
That’s where the Observe function in R Shiny comes in.
If you want to follow along, copy the following dataset and save it as data.csv:
"x","y"
"A","A1"
"A","A2"
"A","A3"
"A","A4"
"A","A5"
"B","B1"
"B","B2"
"B","B3"
"B","B4"
"B","B5"
"C","C1"
"C","C2"
"C","C3"
"C","C4"
"C","C5"
The example Shiny dashboard below will read the dataset and declare two selectInputs in the UI. The second depends on the first. For example, if the user selects “A” in the first dropdown menu, only options “A1” to “A5” should be shown in the second menu. The same goes for the other letters.
Need help from a Shiny consultant? See how RStudio (Posit) Certified Partners can help you and your team.
Inside the server() function, we’ll use the dplyr package to filter out the records we don’t need, and then use the updateSelectInput() function to “re-write” the choices for the second dropdown menu.
That’s it! Let’s see the code:
library(shiny)
library(dplyr)

dummy_data <- read.csv("data.csv")

ui <- fluidPage(
  selectInput(inputId = "col_x", label = "X:", choices = dummy_data$x, selected = dummy_data$x[1]),
  selectInput(inputId = "col_y", label = "Y:", choices = dummy_data$y, selected = dummy_data$y[1])
)

server <- function(input, output, session) {
  observe({
    y_vals <- dummy_data %>% filter(x == input$col_x) %>% select(y)

    updateSelectInput(
      session = session,
      inputId = "col_y",
      choices = y_vals,
      selected = head(y_vals, 1)
    )
  })
}

shinyApp(ui = ui, server = server)
Below is a GIF showing you how the app works:
Image 2 – Updating dropdown menu options with the Observe function
Works like a charm. Let’s make a short recap and go over the homework assignment.
Today you’ve learned the basics of the Observe function in R Shiny (reactive observer), both through theory and practical examples. The moral of the story is as follows – use the observe() function every time you want your Shiny app to react to a change of an input. Common examples are conditionally rendered UI elements, such as options of a dropdown menu.
Original article sourced at: https://appsilon.com
In this article, we will learn about PyShiny Demo: R Shiny Developer's Thoughts on Shiny for Python. Appsilon prides itself on building the best Shiny dashboards. So naturally, when RStudio (Posit) announced Shiny for Python (PyShiny) we got very excited. And of course, we had to try it out for ourselves with a Shiny for Python demo. We’ll cover our initial thoughts, but if you want the tl;dr: PyShiny is a great addition for Python fans and you can make a decent dashboard, but it still needs a lot of improvement to be on par with R Shiny. R Shiny developers can sleep peacefully for another 6-12 months.
Did you know RStudio is rebranding to Posit? See why we think that’s a good thing and what it means for the R community.
Python is a great programming language for both Data Science and Software Development. As Shiny (and RStudio) move to combine these two worlds, PyShiny sounds like a perfect match.
After the announcement, Appsilon had a team of three developers test PyShiny. Because we know the R/Shiny realm quite well, it seemed natural to challenge the new framework by trying to port one of our R Shiny apps built during an app sprint – {Respiratory Disease}.
We wanted to see what problems we would face along the way using the alpha version of Shiny for Python. You can explore the result of our experiment and read our commentary on our first impressions below.
Table of Contents:
First, I would like to highlight that Shiny for Python relies on the uvicorn web server as its backend. You might have heard of uvicorn before; it powers FastAPI, one of the most popular backend frameworks in Python and in web development in general. It is very robust and performant, so there is no doubt that PyShiny applications will be fast and scalable if built right.
As a Software Developer, I noticed a few improvements that we get “for free” from Python itself. Like type hints. It’s not a strongly typed approach like in TypeScript, but it still allows developers to catch some bugs at build-time, rather than run-time.
Not to mention that even advanced R Shiny developers need to open the Shiny reference page from time to time to check what methods and properties exist in the `session` object. In Shiny for Python, you can see that immediately in the IDE.
Speaking of IDE – Visual Studio Code is the recommended tool for writing Shiny for Python applications. Shoutout to Shiny Team for creating a special VSCode extension! This is a great choice since VSCode is a multilingual code editor. It makes it easy to write Python, JavaScript, CSS, etc in one project using a single code editor.
Quarto, Python, and VS Code? Level up your reporting with Quarto Reports in VS Code!
Another cool VSCode feature that is supported in Python and also in Shiny for Python, is the ability to see not only the definition of a user-defined function but also the references – places in the code, where this function is called.
Shiny is a great framework for developing web applications in R because of the ecosystem of related packages: development tools, charts, tables, UI components, etc. At this stage, Shiny for Python lacks such an ecosystem.
While Shiny developers have access to such great component libraries like shiny.semantic or shinyWidgets, in Python everything has to be written from scratch. Of course, we could implement almost anything using JavaScript to directly access the APIs of some popular visualization or component libraries, but this would dissolve the very spirit of Shiny – simplicity, and accessibility for everyone.
The Shiny for Python ecosystem might be lacking, but R is booming. Build better Shiny apps the Appsilon way with Rhino.
Kudos to the Shiny Team for their efforts to leverage the existing ecosystem of ipywidgets with the help of shinywidgets – so that Shiny for Python users/developers have something to work with. However, we tested ipyleaflet which was featured in one of the Shiny for Python demos, and we struggled with it. Its R counterpart (leaflet) feels more flexible, well-documented, and feature-rich.
Implementation of some familiar Shiny features is different in Shiny for Python. For example, shiny.ui.tags.head – it’s not the <head /> tag. This will still work if you need to reference a local JS or CSS file, but in some cases, it’s crucial to add something to the head tag (e.g. the PWA worker). Things could also break in unexpected ways: when created inside a UI module, a selectize input with multiple selection would not render or work properly.
At the time of writing, the issue above was already fixed in the development version on GitHub. But it’s an important reminder that Shiny for Python is still in its infancy.
Last but not least, something’s off with how PyShiny runs the application – starting the app makes the CPU immediately hot. This behavior was observed both on a Linux and a macOS machine.
Let us know in the comments below if you found the same issue.
During the rstudio::conf(2022) RStudio also announced ShinyLive [https://shiny.rstudio.com/py/docs/shinylive.html] – Python code compiled to WebAssembly (WASM) which runs entirely in the browser! It is a new and exciting thing that could become a killer (cool) feature of Shiny for Python.
Why is this exciting? First, WASM allows a serverless architecture (not serverless serverless, but the true absence of a server!). The entire app bundle is downloaded into the client, and everything happens in a browser. Another great feature of WASM applications is that once downloaded, they can be run completely offline. It’s not always straightforward to make a ShinyLive out of a PyShiny app, but we were able to do it!
This technology still has its limitations though. Packages used in such Python projects are limited. For example, we had problems with packages that are not written in pure Python. It’s also noteworthy that the bundle that needs to be downloaded is quite large, which negatively affects the start time of the application.
There is already a solution for R Shiny apps to be downloaded as a PWA. The good news is that we were able to make it work with a PyShiny app as well. It was not trivial though and required some extra effort, but the result is worth it.
We look forward to combining ShinyLive and PWA technologies, to get an offline client app for a mobile device. Think of Shiny Native for all you React fans out there.
The problems described above are not surprising; after all, Shiny for Python is still in the alpha stage. The package is being rapidly developed by the RStudio Team, and some issues are being fixed on the fly. And as more people test Shiny for Python, we’re bound to see quick progress.
Python Dash or R Shiny? See which you should choose for your use case.
Personally, it’s clear that these problems are only temporary. The ecosystem of packages will eventually grow around Shiny for Python – just as with the early days of Shiny for R. The Appsilon team is excited at the chance to contribute to its success. And we look forward to adding to the PyShiny ecosystem.
In case you didn’t notice, we’ve been using PyShiny and Shiny for Python interchangeably. So what is the official name of RStudio’s Shiny version for Python?
The official name is “Shiny for Python.” However, some folks are already using the unofficial, condensed version: Py Shiny or PyShiny.
What’s the official name of R Shiny? The official name for RStudio’s Shiny for R is “Shiny.” However, this too began to change, with the community (somewhat clairvoyantly) dubbing it R Shiny or R/Shiny.
Want to get started with Shiny for Python? Check out our tutorial introducing PyShiny.
Moving forward, these unofficial names might actually be easier to distinguish in conversation which language is being used for a given project. What do you think?
Original article sourced at: https://appsilon.com
In this article, we will learn about Matplotlib vs. ggplot: How to Use Both in R Shiny Apps. Data Science has (unnecessarily) divided the world into two halves – R users and Python users. Irrelevant of the group you belong to, there’s one thing you have to admit – each language individually has libraries far superior to anything available in the alternative. For example, R Shiny is much easier for beginners than anything Python offers. But what about basic data visualization? That’s where this Matplotlib vs. ggplot article comes in.
Today we’ll see how R and Python compare in basic data visualization. We’ll compare their standard plotting libraries – Matplotlib and ggplot to see which one is easier to use and which looks better at the end. We’ll also show you how to include Matplotlib charts in R Shiny dashboards, as that’s been a common pain point for Python users. What’s even better, the chart will react to user input.
There’s no denying that both Matplotlib and ggplot don’t look the best by default. There’s a lot you can change, of course, but we’ll get to that later. The aim of this section is to compare Matplotlib and ggplot in the realm of unstyled visualizations.
To keep things simple, we’ll only make a scatter plot of the well-known mtcars dataset, in which the X-axis shows miles per gallon and the Y-axis shows the corresponding horsepower.
Are you new to scatter plots? Here’s our complete guide to get you started.
There’s not a lot you have to do to produce this visualization in R ggplot:
library(ggplot2)
ggplot(data = mtcars, aes(x = mpg, y = hp)) +
geom_point()
Image 1 – Basic ggplot scatter plot
It’s a bit dull by default, but is Matplotlib better?
The mtcars dataset isn’t included in Python, so we have to download and parse the dataset from GitHub. After doing so, a simple call to ax.scatter() puts both variables on their respective axes:
import pandas as pd
import matplotlib.pyplot as plt
mtcars = pd.read_csv("https://gist.githubusercontent.com/ZeccaLehn/4e06d2575eb9589dbe8c365d61cb056c/raw/898a40b035f7c951579041aecbfb2149331fa9f6/mtcars.csv", index_col=[0])
fig, ax = plt.subplots(figsize=(13, 8))
ax.scatter(x=mtcars["mpg"], y=mtcars["hp"])
Image 2 – Basic matplotlib scatter plot
It would be unfair to call ggplot superior to Matplotlib purely because the dataset comes bundled with R. Python simply requires an extra step.
From the visual point of view, things are highly subjective. Matplotlib figures have a lower resolution by default, so the whole thing looks blurry. Other than that, declaring a winner is near impossible.
Do you prefer Matplotlib or ggplot2 default stylings? Let us know in the comment section below.
Let’s add some styles to see which one is easier to customize.
To keep things simple, we’ll modify only a couple of things:
qsec
variablecyl
variableIn R ggplot, that boils down to adding a couple of lines of code:
ggplot(data = mtcars, aes(x = mpg, y = hp)) +
geom_point(aes(size = qsec, color = factor(cyl))) +
scale_color_manual(values = c("#3C6E71", "#70AE6E", "#BEEE62")) +
theme_classic() +
theme(legend.position = "none") +
labs(title = "Miles per Gallon vs. Horse Power")
Image 3 – Customized ggplot scatter plot
The chart now actually looks usable, both for reporting and dashboarding purposes.
But how difficult is it to produce the same chart in Python? Let's take a look. For starters, we'll increase the DPI to get rid of the blurriness, and also remove the top and right spines around the figure.
Changing point size and color is a bit trickier to do in Matplotlib, but it’s just a matter of experience and preference. Also, Matplotlib doesn’t place labels on axes by default – consider this as a pro or a con. We’ll add them manually:
plt.rcParams["figure.dpi"] = 300
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.right"] = False
fig, ax = plt.subplots(figsize=(13, 8))
ax.scatter(
x=mtcars["mpg"],
y=mtcars["hp"],
s=[s**1.8 for s in mtcars["qsec"].to_numpy()],
c=["#3C6E71" if cyl == 4 else "#70AE6E" if cyl == 6 else "#BEEE62" for cyl in mtcars["cyl"].to_numpy()]
)
ax.set_title("Miles per Gallon vs. Horse Power", size=18, loc="left")
ax.set_xlabel("mpg", size=14)
ax.set_ylabel("hp", size=14)
Image 4 – Customized matplotlib scatter plot
The figures look almost identical, so what’s the verdict? Is it better to use Python’s Matplotlib or R’s ggplot2?
Objectively speaking, Python’s Matplotlib requires more code to do the same thing when compared to R’s ggplot2. Further, Python’s code is harder to read, due to bracket notation for variable access and inline conditional statements.
So, does ggplot2 take the win here? Well, no. If you’re a Python user it will take you less time to create a chart in Matplotlib than it would to learn a whole new language/library. The same goes the other way.
Up next, we’ll see how easy it is to include this chart in an interactive dashboard.
Shiny is an R package for creating dashboards around your data. It's built for the R programming language, and hence integrates nicely with most other R packages, ggplot2 included.
We’ll now create a simple R Shiny dashboard that allows you to select columns for the X and Y axis and then updates the figure automatically. If you have more than 30 minutes of R Shiny experience, the code snippet below shouldn’t be difficult to read:
library(shiny)
library(ggplot2)
ui <- fluidPage(
tags$h3("Scatter plot generator"),
selectInput(inputId = "x", label = "X Axis", choices = names(mtcars), selected = "mpg"),
selectInput(inputId = "y", label = "Y Axis", choices = names(mtcars), selected = "hp"),
plotOutput(outputId = "scatterPlot")
)
server <- function(input, output, session) {
data <- reactive({mtcars})
output$scatterPlot <- renderPlot({
ggplot(data = data(), aes_string(x = input$x, y = input$y)) +
geom_point(aes(size = qsec, color = factor(cyl))) +
scale_color_manual(values = c("#3C6E71", "#70AE6E", "#BEEE62")) +
theme_classic() +
theme(legend.position = "none")
})
}
shinyApp(ui = ui, server = server)
Image 5 – Shiny dashboard rendering a ggplot chart
Put simply, we’re rerendering the chart every time one of the inputs changes. All computations are done in R, and the update is almost instant. Makes sense, since mtcars
is a tiny dataset.
But how about rendering a Matplotlib chart in R Shiny? Let’s see if it’s even possible.
There are several ways to combine R and Python – reticulate being one of them. However, we won’t use that kind of bridging library today.
Instead, we’ll opt for a simpler solution – calling a Python script from R. The mentioned Python script will be responsible for saving a Matplotlib figure in JPG form. In Shiny, the image will be rendered with the renderImage()
reactive function.
Let’s write the script – generate_scatter_plot.py
. It leverages the argparse
module to accept arguments when executed from the command line. As you would expect, the script accepts column names for the X and Y axis as command line arguments. The rest of the script should feel familiar, as we explored it in the previous section:
import argparse
import pandas as pd
import matplotlib.pyplot as plt
# Tweak matplotlib defaults
plt.rcParams["figure.dpi"] = 300
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.right"] = False
# Get and parse the arguments from the command line
parser = argparse.ArgumentParser()
parser.add_argument("--x", help="X-axis column name", type=str, required=True)
parser.add_argument("--y", help="Y-axis column name", type=str, required=True)
args = parser.parse_args()
# Fetch the dataset
mtcars = pd.read_csv("https://gist.githubusercontent.com/ZeccaLehn/4e06d2575eb9589dbe8c365d61cb056c/raw/898a40b035f7c951579041aecbfb2149331fa9f6/mtcars.csv", index_col=[0])
# Create the plot
fig, ax = plt.subplots(figsize=(13, 7))
ax.scatter(
x=mtcars[args.x],
y=mtcars[args.y],
s=[s**1.8 for s in mtcars["qsec"].to_numpy()],
c=["#3C6E71" if cyl == 4 else "#70AE6E" if cyl == 6 else "#BEEE62" for cyl in mtcars["cyl"].to_numpy()]
)
# Save the figure
fig.savefig("scatterplot.jpg", bbox_inches="tight")
You can run the script from the command line for verification:
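The invocation looks something like `python generate_scatter_plot.py --x mpg --y hp` (assuming `python` points at an environment with pandas and Matplotlib installed). The argument parsing itself can also be sanity-checked in isolation by feeding parse_args() a sample argument list instead of the real command line:

```python
import argparse

# Same parser definition as in generate_scatter_plot.py
parser = argparse.ArgumentParser()
parser.add_argument("--x", help="X-axis column name", type=str, required=True)
parser.add_argument("--y", help="Y-axis column name", type=str, required=True)

# parse_args() normally reads sys.argv; here we pass the list explicitly
args = parser.parse_args(["--x", "mpg", "--y", "hp"])
print(args.x, args.y)  # mpg hp
```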
Image 6 – Running a Python script for chart generation
If all went well, it should have saved a scatterplot.jpg to disk:
Image 7 – Scatter plot generated by Python and matplotlib
Everything looks as it should, but what's the procedure in R Shiny? Here's a list of things we have to do:
- Replace plotOutput() with imageOutput(), since we're rendering an image, after all
- Call the generate_scatter_plot.py file and pass in the command line arguments gathered from the currently selected dropdown values
- Use the renderImage() reactive function to execute the shell command and load in the image
It sounds like a lot, but it doesn't require much more code than the previous R example. Just remember to specify a full path to the Python executable when constructing the shell command.
Here’s the entire code snippet:
library(shiny)
ui <- fluidPage(
tags$head(
tags$style(HTML("
#scatterPlot > img {
max-width: 800px;
}
"))
),
tags$h3("Scatter plot generator"),
selectInput(inputId = "x", label = "X Axis", choices = names(mtcars), selected = "mpg"),
selectInput(inputId = "y", label = "Y Axis", choices = names(mtcars), selected = "hp"),
imageOutput(outputId = "scatterPlot")
)
server <- function(input, output, session) {
# Construct a shell command to run Python script from the user input
shell_command <- reactive({
paste0("/Users/dradecic/miniforge3/bin/python generate_scatter_plot.py --x ", input$x, " --y ", input$y)
})
# Render the matplotlib plot as an image
output$scatterPlot <- renderImage({
# Run the shell command to generate image - saved as "scatterplot.jpg"
system(shell_command())
# Show the image
list(src = "scatterplot.jpg")
}, deleteFile = FALSE) # keep scatterplot.jpg on disk between renders
}
Image 8 – Shiny dashboard rendering a matplotlib chart
The dashboard takes some extra time to rerender the chart, which is expected. After all, R needs to call a Python script which then constructs and saves the chart to the disk. It’s an extra step, so the refresh isn’t as instant as with ggplot2.
To conclude, you can definitely use Python’s Matplotlib library in R Shiny dashboards. There are a couple of extra steps involved, but nothing you can’t manage. If you’re a heavy Python user and want to try R Shiny, this could be the fastest way to get started.
What do you think of Matplotlib in R Shiny? What do you generally prefer – Matplotlib or ggplot2? Please let us know in the comment section below. Also, don’t hesitate to reach out on Twitter if you use another approach to render Matplotlib charts in Shiny – @appsilon. We’d love to hear your comments.
Original article sourced at: https://appsilon.com
1669844640
In this tutorial, we will learn Supply Chain Management Strategy with R and Shiny. During the pandemic, the supply chain and its management strategy burst into the spotlight. Supply chain management (SCM) became a household topic as its disruptions began to directly impact people’s lives and the global economy.
But the truth is, good supply chain strategies are the ones that are aligned with business strategy. And in this complex environment, there is a trade-off between responsive versus efficient strategies.
A spreadsheet solution enables a lot of flexibility, but it comes at the expense of reproducibility and is error-prone. ERP is a very mature system with strict rules that make it reliable and efficient, but it leaves little room for newer designs.
This is an extreme example, but it’s a valid one:
Where IT products are not flexible enough, or lack the development speed the business needs, spreadsheet solutions end up where an IT product should be used. This is a common source of strategy misalignment, and one that causes visibility issues in the supply chain.
This is where R comes into play. It’s great for solutions that require a certain degree of development speed and flexible design but are also reproducible with the building blocks of IT maturity. Knowing the requirements that allow an R solution to shine also depends on a proper understanding of where you stand with IT requirements and the business environment.
In this post, we cover the general understanding of business strategy and supply chain as a foundation for strategic alignment, and explain why using R sustains the visibility and reproducibility required for SCM strategy.
We’ll also show use cases for R and Shiny in SCM and how to add additional value.
Porter has a great definition of strategy: “the creation of fit between activities, where each activity is consistent, mutually reinforcing, and the fit is done optimally for competitive advantage.”
This fit can have different perspectives such as:
Also, Shapiro and Heskett state that strategy has a set of dichotomies that create tension in each perspective; therefore, strategy decisions will always inherit a set of trade-offs.
This is an extensive field, and for this post, this is an essential concept. If you want to go further, I highly recommend Wharton’s free Strategic Management program.
The supply chain is a big topic and can be explained through a variety of viewpoints. Below is a condensed summary of the SCOR Model and other descriptions.
An additional summary:
A strategic fit occurs when the competitive strategy and supply chain strategy align goals. Its success is connected to several factors:
A company can fail because of a lack of strategic fit or because the overall design, supply chain processes, and resources do not provide enough to support the desired strategic outcome.
There are three basic steps to achieve this and overcome potential failure:
In summary, these steps ensure that there is an appropriate supply chain strategy for each product or service: functional products call for an efficient supply chain strategy, while innovative products fit a responsive supply chain design. This is called the zone of strategic fit.
It is important to note that products have a lifecycle, and the supply chain should account for that. Also, the real world is a dynamic system. That’s why alignment with the business strategy is important. It ensures the correct approach under each state of change.
Being able to understand what the drivers of change are and leverage them is what makes supply chain a field of both science and art. This means your supply chain strategy should leverage the theory, but also be fine-tuned for unique situations.
Therefore, using analytic solutions that can leverage both real data and theory into actionable insights provides huge value for supply chain strategists. Reproducibility is a key factor of success because it enhances visibility and enforces alignment between the scope of strategy and operations, as well as maintains sustainability.
For further studies on the supply chain, audit the free MITx program.
When we’re abstracting a model or testing a specific problem, using spreadsheets seems a great way to get the work done. Because it is at its core, flexible. But, in this situation, it’s can be difficult to explain the solution to others and make sure that the appropriate workflow is being followed.
Another drawback of spreadsheets is complex logic. For more complex problems, it becomes harder to set up the logic needed to keep the solution in a steady state. In R, by contrast, you can leverage a set of best practices and cutting-edge solutions from packages maintained on CRAN. This extends the level of quality that R grants its users.
Note: CRAN is not without its risks. To ensure your project remains secure, you should explore the Isoband Incident and how to mitigate risks.
Let’s take a look at an example by comparing the two solutions: spreadsheets vs R.
For each of the metrics in the data, you are asked to calculate the change in the sum across all countries between the current date and the same point in the previous month and the previous year:
The concept is straightforward; we basically have to:
To do this in spreadsheets, there are many approaches you could work on. One such example:
In this example, we can see that 56 items were produced on the 1st of January 2016, while 80 items were produced on the 1st of December 2015: a change in production of -24 items. The same logic applies to the year metric.
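Each of these cells encodes the same row-shifted difference, which is worth making explicit. Here is a minimal Python sketch with hypothetical daily totals; the R pipeline shown later expresses the same idea with lag():

```python
# Toy daily series of produced items; a "previous month" lookup is just
# the value 30 rows back (assuming one row per day, with no gaps)
produced = list(range(100))  # hypothetical daily totals

def change_prev(series, lag):
    # None where there is no row `lag` days back yet
    return [None if i < lag else series[i] - series[i - lag]
            for i in range(len(series))]

month_change = change_prev(produced, 30)
print(month_change[30])  # 30 - 0 = 30
```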
The data transformation that was asked for is complete. So what’s the issue then?
Basically, this approach requires too many manual insertions, and each calculation must be checked and updated by the user. There is also the risk of mixing up cells in a calculation. This is the reproducibility problem.
Let’s not forget that we also need 6 other metrics on the data; the same approach is to be repeated 6 times in the spreadsheet.
Imagine for instance that data changes or another metric should be included later on. You will soon enter the spreadsheet productivity dilemma. It was fast to design, became hard to maintain, and now is harder to keep adding features. Soon enough, most of your daily work will revolve around spreadsheet issues instead of focusing on business value.
For the business strategy, this is also a problem, because it compromises the overall strategy, especially the supply chain's information flow. This issue, in turn, impacts the physical and financial flows, and it makes it harder for businesses to quickly detect changes in the supply chain.
This is a rather common issue in business. Even beautifully designed dashboards sometimes source data from an entangled web of spreadsheet transformations that are connected in non-obvious ways, making the data pipeline very hard to understand.
This same problem can be solved in R, in a very elegant solution provided by the tidyverse package:
cols <- c(
"produced_items", "orders_count", "revenue",
"cost", "salvage_value", "profit", "complaints_opened",
"complaints_closed", "users_active", "users_dropped_out"
)
daily_stats <- dataset_df %>%
group_by(date) %>%
summarise(across(all_of(cols), sum, .names = "{col}")) %>%
# Note: lag(n = 30) and lag(n = 365) count rows, not calendar periods,
# so this assumes one row per day with no gaps in the date sequence
mutate(
across(
all_of(cols),
list(
prev_month = ~ lag(.x, n = 30),
change_prev_month = ~ .x - lag(.x, n = 30),
prev_year = ~ lag(.x, n = 365),
change_prev_year = ~ .x - lag(.x, n = 365)
),
.names = "{col}.{fn}"
)
)
In this example, we can see that all the required steps for this data transformation are kept in the code. This means that understanding and debugging the applications is much easier and faster. And if anything changes in the source, it has a steady-state structure that can be easily updated.
But R does not only let you build reproducible pipelines for dashboards, as in the example above. It also lets you create beautiful dashboards to share this data in a more consumable fashion. With R, you can create new value by designing apps for specific problems, all without needing the skills of a web developer.
We’ll show you how with a solution for the use case scenario below.
Let’s start by presenting a very traditional supply chain design problem as the multiple-commodity transshipment problem.
In this problem, you minimize the total cost of fulfilling the demand for a set of products at each point of sales, while sharing capacity constraints on plants and distribution centers.
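Before diving into the full dataset, the structure of the problem can be sketched on a toy instance: one product, two plants, one distribution center (DC), and two sales regions. The numbers below (costs, capacities, demands) are made up purely for illustration, and instead of a MILP solver (which the real model uses via ompr and GLPK later), we simply enumerate feasible integer flows:

```python
from itertools import product as cartesian

# Hypothetical toy instance: costs per unit shipped
in_cost = [2, 3]      # plant -> DC, one entry per plant
out_cost = [1, 4]     # DC -> region, one entry per region
plant_cap = [10, 10]  # per-plant capacity
dc_cap = 20           # DC throughput capacity
demand = [5, 7]       # per-region demand

best = None
# Outflows are fixed at exactly the demand, which is optimal here
# since all costs are positive and overshipping only adds cost
out_flow = demand
for in_flow in cartesian(range(plant_cap[0] + 1), range(plant_cap[1] + 1)):
    # Flow conservation: everything entering the DC must leave it,
    # and total flow must fit within the DC capacity
    if sum(in_flow) != sum(out_flow) or sum(in_flow) > dc_cap:
        continue
    cost = (sum(f * c for f, c in zip(in_flow, in_cost))
            + sum(f * c for f, c in zip(out_flow, out_cost)))
    if best is None or cost < best[0]:
        best = (cost, in_flow)

print(best)  # (59, (10, 2)): ship 10 units from the cheap plant, 2 from the other
```

Brute force obviously does not scale past toy sizes; it only makes the objective and constraints concrete before we express them properly with ompr below.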
For this problem, we have a template-ready dataset from a spreadsheet, which has the following structure:
path <- "Transshipment_template.xlsx"
data <- path %>%
readxl::excel_sheets() %>%
purrr::set_names() %>%
purrr::map(readxl::read_excel, path = path)
To run the model, we must convert the data into the structure R's optimization tooling expects, which requires some data wrangling:
transship_wrang <- function(data){
Product <- dplyr::filter(data$Nodes, Entity == "Product")$Name
Plant <- dplyr::filter(data$Nodes, Entity == "Plant")$Name
DC <- dplyr::filter(data$Nodes, Entity == "DC")$Name
Region <- dplyr::filter(data$Nodes, Entity == "Region")$Name
incost <- dplyr::filter(data$flow_cost, type == "inflow")
incst <- array(
as.matrix(incost$value),
dim = c(length(Product), length(Plant), length(DC)),
dimnames = list(Product, Plant, DC)
)
outcost <- dplyr::filter(data$flow_cost, type == "outflow")
outcst <- array(
as.matrix(outcost$value),
dim = c(length(Product), length(DC),length(Region)),
dimnames = list(Product, DC, Region)
)
PlCapacity <- matrix(
dplyr::filter(data$Capacity, Node == "Plant")$Value,
ncol = 1,
dimnames = list(Plant, "PlCapacity")
)
DCCapacity <- matrix(
dplyr::filter(data$Capacity, Node == "DC")$Value,
ncol = 1,
dimnames = list(DC, "DCCapacity")
)
PlPrCapacity <- array(
dplyr::filter(data$NodeARCs, Type == "Restriction")$Value,
dim = c(length(Plant),length(Product)),
dimnames = list(Plant, Product)
)
Demand <- array(
dplyr::filter(data$NodeARCs, Type == "Demand")$Value,
dim = c(length(Region),length(Product)),
dimnames = list(Region, Product)
)
return(
list(
Product = Product,
Plant = Plant,
DC = DC,
Region = Region,
incst = incst,
outcst = outcst,
PlPrCapacity = PlPrCapacity,
PlCapacity = PlCapacity,
DCCapacity = DCCapacity,
Demand = Demand
)
)
}
clean_data <- transship_wrang(data)
To build this model, we'll use ompr, since its syntax closely mirrors the mathematical formulation of MILP models, which narrows the gap between code and model:
transship_model <- function(
Product, Plant, DC, Region, incst, outcst,
PlPrCapacity, PlCapacity, DCCapacity, Demand) {
require(ROI)
require(ROI.plugin.glpk)
l <- length(Product) # Number of Products
i <- length(Plant) #Number of Plants
k <- length(DC) #Number of transhipments (CDs)
j <- length(Region) #Number of cities (POS)
model <- ompr::MIPModel() %>%
# Variable of inflow
ompr::add_variable(xinf[l,i,k], l = 1:l, i = 1:i, k=1:k, type = "integer", lb = 0) %>%
# Variable of outflow
ompr::add_variable(xout[l,k,j], l = 1:l, k = 1:k, j=1:j, type = "integer", lb = 0) %>%
ompr::set_objective(
ompr::sum_expr(xinf[l,i,k] * incst[l,i,k], l = 1:l, i = 1:i, k=1:k) + #Inbound Cost
ompr::sum_expr(xout[l,k,j] * outcst[l,k,j], l = 1:l, k = 1:k, j=1:j) #Outbound Cost
) %>%
#Plant Production Capacity
ompr::add_constraint(ompr::sum_expr(xinf[l,i,k], k=1:k) <= PlPrCapacity[i,l], l=1:l, i=1:i) %>%
#Plant Total Capacity
ompr::add_constraint(ompr::sum_expr(xinf[l,i,k], l=1:l, k=1:k) <= PlCapacity[i], i=1:i) %>%
#DC Total Capacity
ompr::add_constraint(ompr::sum_expr(xinf[l,i,k], l=1:l, i=1:i) <= DCCapacity[k], k=1:k) %>%
#Fulfill Demand
ompr::add_constraint(ompr::sum_expr(xout[l,k,j], k=1:k) >= Demand[j,l], l=1:l, j=1:j) %>%
#Flow Constraint
ompr::add_constraint(
ompr::sum_expr(xinf[l,i,k], i=1:i) == ompr::sum_expr(xout[l,k,j], j=1:j),
l=1:l, k=1:k
)
#Solve
result <- ompr::solve_model(model, ompr.roi::with_ROI(solver = "glpk"))
# Results
objective <- result$objective_value
Infl <- ompr::get_solution(result, xinf[l,i,k]) %>%
dplyr::mutate(product = Product[l], source = Plant[i], destiny = DC[k], type = "Inflow") %>%
dplyr::select(type, product, source, destiny, value)
Outfl <- ompr::get_solution(result, xout[l,k,j]) %>%
dplyr::mutate(product = Product[l], source = DC[k], destiny = Region[j], type = "Outflow") %>%
dplyr::select(type, product, source, destiny, value)
Dcs_Flow <- Infl %>%
dplyr::group_by(destiny, product) %>%
dplyr::summarise(Amount = sum(value)) %>%
as.data.frame()
Plants_Product <- Infl %>%
dplyr::group_by(source, product) %>%
dplyr::summarise(Amount = sum(value)) %>%
as.data.frame()
Products_flow <- rbind(Infl, Outfl)
return(
list(
objective = objective,
inflow = Infl,
outflow = Outfl,
Products_flow = Products_flow,
Dcs_Flow = Dcs_Flow,
Plants_Product = Plants_Product
)
)
}
model <- transship_model(
Product = clean_data$Product,
Plant = clean_data$Plant,
DC = clean_data$DC,
Region = clean_data$Region,
incst = clean_data$incst,
outcst = clean_data$outcst,
PlPrCapacity = clean_data$PlPrCapacity,
PlCapacity = clean_data$PlCapacity,
DCCapacity = clean_data$DCCapacity,
Demand = clean_data$Demand
)
bootstrap <- c("striped", "hover", "responsive")
glue::glue("Total Cost: {model$objective}")
knitr::kable(model$Plants_Product, caption = "Plants Production") %>%
kableExtra::kable_styling(bootstrap_options = bootstrap, full_width = F, font_size = 20, position = "float_left")
knitr::kable(model$Dcs_Flow, caption = "DCs Flow") %>%
kableExtra::kable_styling(bootstrap_options = bootstrap, full_width = F, font_size = 20, position = "right")
knitr::kable(model$inflow, caption = "Inflow") %>%
kableExtra::kable_styling(bootstrap_options = bootstrap, full_width = F, font_size = 20, position = "float_left")
knitr::kable(model$outflow, caption = "Outflow") %>%
kableExtra::kable_styling(bootstrap_options = bootstrap, full_width = F, font_size = 20, position = "right")
Total Cost: 9250
You and your team developed a great model and now have valuable insight for the company. How do you share this info with your peers? And can you make this insight interactive, letting them tweak values or set new input data?
You can do this and more by using R Shiny – an interactive web framework for R (and Python).
Are you more of a Python fan? See what’s currently possible with our Shiny for Python demo.
Continue below for the full code to build your own Shiny application for your SCM model.
sankey_chart <- function(data, product) {
data %>%
dplyr::filter(product == !!product) %>%
echarts4r::e_charts() %>%
echarts4r::e_sankey(source, destiny, value) %>%
echarts4r::e_title(glue::glue("Product {product} flow")) %>%
echarts4r::e_tooltip() %>%
echarts4r::e_theme("dark")
}
reactablefmtr <- function(data, args = TRUE, ...) {
data %>%
dplyr::select(...) %>%
reactable::reactable(.,
filterable = args, searchable = args, resizable = args,
onClick = "select", outlined = TRUE, bordered = TRUE, borderless = TRUE,
striped = args, highlight = TRUE, compact = args, showSortable = TRUE,
theme = reactablefmtr::slate()
)
}
ui <- bs4Dash::dashboardPage(
title = "Transshipment Model",
fullscreen = TRUE,
dark = T,
scrollToTop = T,
header = bs4Dash::dashboardHeader(
status = "gray-dark",
title = bs4Dash::dashboardBrand(
title = "Transshipment Model",
color = "primary"
)
),
sidebar = bs4Dash::dashboardSidebar(
collapsed = T,
bs4Dash::sidebarMenu(
bs4Dash::menuItem(
text = "Transshipment",
tabName = "transshipment",
icon = icon("project-diagram")
)
)
),
footer = bs4Dash::dashboardFooter(
right = a(
href = "https://appsilon.com/",
"Built with ❤ by Appsilon"
),
left = div(
icon("calendar"),
Sys.Date()
),
fixed = T
),
body = bs4Dash::dashboardBody(
bs4Dash::tabItems(
bs4Dash::tabItem(
tabName = "transshipment",
bs4Dash::tabBox(
width = 12,
collapsible = FALSE,
maximizable = TRUE,
tabPanel(
"Model",
fluidRow(
column(
4,
bs4Dash::box(
status = "purple",
collapsible = F,
width = 12,
div(
class = "d-flex justify-content-center",
a(
tags$i(class = "fa fa-database"),
href = "www/files/Transshipment_template.xlsx",
"Download template",
class = "btn btn-default m-1",
download = NA, target = "_blank"
)
),
hr(),
fileInput("uploadmodel", "Upload Data"),
hr(),
div(
class = "d-flex justify-content-center",
shiny::actionButton(
inputId = "model_run",
class = "btn btn-success action-button m-1 shiny-bound-input",
icon = icon("magic"),
label = "Run Model"
)
)
)
),
column(
8,
bs4Dash::tabBox(
width = 12,
collapsible = T,
maximizable = T,
collapsed = F,
tabPanel(
"Model Info",
div(
fluidRow(
bs4Dash::bs4ValueBoxOutput("products", width = 3),
bs4Dash::bs4ValueBoxOutput("plants", width = 3),
bs4Dash::bs4ValueBoxOutput("dcs", width = 3),
bs4Dash::bs4ValueBoxOutput("regions", width = 3)
),
hr(),
selectInput(
"url_db",
label = h5("Choose the data"),
choices = c("Nodes", "flow_cost", "Capacity", "NodeARCs")
),
shinycssloaders::withSpinner(
reactable::reactableOutput("data"),
type = 8
)
)
),
tabPanel(
"Results",
uiOutput("ui_output")
)
)
)
)
)
)
)
)
)
)
server <- function(input, output) {
data <- reactive({
path <- input$uploadmodel$datapath
path %>%
readxl::excel_sheets() %>%
purrr::set_names() %>%
purrr::map(readxl::read_excel, path = path)
})
data_info <- reactive(
transship_wrang(data())
)
output$products <- bs4Dash::renderbs4ValueBox({
req(input$uploadmodel$datapath)
bs4Dash::bs4ValueBox(length(data_info()$Product), subtitle = "Products", color = "primary")
})
output$plants <- bs4Dash::renderbs4ValueBox({
req(input$uploadmodel$datapath)
bs4Dash::bs4ValueBox(length(data_info()$Plant), subtitle = "Plants", color = "primary")
})
output$dcs <- bs4Dash::renderbs4ValueBox({
req(input$uploadmodel$datapath)
bs4Dash::bs4ValueBox(length(data_info()$DC), subtitle = "DCs", color = "primary")
})
output$regions <- bs4Dash::renderbs4ValueBox({
req(input$uploadmodel$datapath)
bs4Dash::bs4ValueBox(length(data_info()$Region), subtitle = "Regions", color = "primary")
})
output$data <- reactable::renderReactable({
req(input$uploadmodel$datapath)
reactablefmtr(data()[input$url_db][[1]], args = FALSE, everything())
})
observeEvent(input$model_run, {
if (is.null(input$uploadmodel$datapath)) {
shinyWidgets::sendSweetAlert(
title = "Upload a file",
type = "error",
text = "Please, Upload a file first"
)
} else {
model <- transship_model(
Product = data_info()$Product,
Plant = data_info()$Plant,
DC = data_info()$DC,
Region = data_info()$Region,
incst = data_info()$incst,
outcst = data_info()$outcst,
PlPrCapacity = data_info()$PlPrCapacity,
PlCapacity = data_info()$PlCapacity,
DCCapacity = data_info()$DCCapacity,
Demand = data_info()$Demand
)
output$total_value <- renderText({
glue::glue("Total Cost: {model$objective}")
})
output$sankey_chart <- echarts4r::renderEcharts4r({
sankey_chart(model$Products_flow, input$product)
})
output$results_data <- reactable::renderReactable({
reactablefmtr(model[input$result_data][[1]], args = FALSE, everything())
})
output$ui_output <- renderUI({
tagList(
div(
class = "d-flex justify-content-center",
h2(textOutput("total_value"))
),
selectInput(
"product",
label = h5("Choose Product"),
choices = data_info()$Product
),
echarts4r::echarts4rOutput("sankey_chart"),
selectInput(
"result_data",
label = h5("Choose view"),
choices = c("inflow", "outflow", "Dcs_Flow", "Plants_Product")
),
reactable::reactableOutput("results_data")
)
})
}
})
}
shinyApp(ui = ui, server = server)
Supply chain management has many layers of solutions and models designed for each scope of the business strategy. The intrinsic trade-off between those strategies requires visibility, and by being reproducible, R is capable of delivering value to each step in the strategy spectrum.
There are solutions where a spreadsheet definitely shines, especially while designing an idea from scratch. But with R you can extend this to a business-friendly solution in a production-ready state without compromising the flexibility you require for your business life cycle.
There are many other topics in supply chain and applications that can be used with R, but the main insight for this post is that reproducibility is a key factor for success in the alignment between the scope of strategies. This is vital for businesses and makes a significant difference in sustainable and successful solutions.
Original article sourced at: https://appsilon.com
1668089640
Bootstrap 4 shinydashboard using AdminLTE3
Taking the simple {shinydashboard} example:
library(shiny)
library(shinydashboard)
ui <- dashboardPage(
dashboardHeader(title = "Basic dashboard"),
dashboardSidebar(),
dashboardBody(
# Boxes need to be put in a row (or column)
fluidRow(
box(plotOutput("plot1", height = 250)),
box(
title = "Controls",
sliderInput("slider", "Number of observations:", 1, 100, 50)
)
)
)
)
server <- function(input, output) {
set.seed(122)
histdata <- rnorm(500)
output$plot1 <- renderPlot({
data <- histdata[seq_len(input$slider)]
hist(data)
})
}
shinyApp(ui, server)
Starting from v2.0.0, moving to {bs4Dash} is rather simple:
library(bs4Dash)
ui <- dashboardPage(
dashboardHeader(title = "Basic dashboard"),
dashboardSidebar(),
dashboardBody(
# Boxes need to be put in a row (or column)
fluidRow(
box(plotOutput("plot1", height = 250)),
box(
title = "Controls",
sliderInput("slider", "Number of observations:", 1, 100, 50)
)
)
)
)
server <- function(input, output) {
set.seed(122)
histdata <- rnorm(500)
output$plot1 <- renderPlot({
data <- histdata[seq_len(input$slider)]
hist(data)
})
}
shinyApp(ui, server)
{bs4Dash} is undergoing major rework to make it easier to migrate from {shinydashboard}. The current development version 2.0.0 provides 1:1 support; in other words, moving from {shinydashboard} to {bs4Dash} is accomplished by changing library(shinydashboard) to library(bs4Dash).
{bs4Dash} v2.0.0 also provides a 1:1 mapping with {shinydashboardPlus} to ease compatibility.
Apps built with {bs4Dash} version <= 0.5.0 are definitely not compatible with v2.0.0 due to substantial breaking changes in the API. We advise users to keep the old version for old apps and move to the new version for newer apps.
# latest devel version
devtools::install_github("RinteRface/bs4Dash")
# from CRAN
install.packages("bs4Dash")
See a working example on shinyapps.io here. You may also run:
library(bs4Dash)
bs4DashGallery()
Issues are listed here.
I warmly thank Glyphicons creator for providing them for free with Bootstrap.
Please note that the bs4Dash project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
Author: RinteRface
Source Code: https://github.com/RinteRface/bs4Dash
License: View license
1668054180
extensions for shinydashboard
# for the CRAN version
install.packages("shinydashboardPlus")
# for the latest version
devtools::install_github("RinteRface/shinydashboardPlus")
shinydashboardPlus is based on the idea of ygdashboard, the latter being incompatible with shinydashboard (you cannot use shinydashboard and ygdashboard at the same time). With shinydashboardPlus, you can still work with the classic shinydashboard functions and enrich your dashboard with all the additional functions of shinydashboardPlus!
See a demonstration here or run:
library(shinydashboardPlus)
shinydashboardPlusGallery()
Below is an example of an application in medicine:
Please note that the shinydashboardPlus project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
Author: RinteRface
Source Code: https://github.com/RinteRface/shinydashboardPlus
License: View license
1667476620
Interactive data tables for R, based on the React Table library and made with reactR.
You can install reactable from CRAN with:
install.packages("reactable")
Or install the development version from GitHub with:
# install.packages("devtools")
devtools::install_github("glin/reactable")
To create a table, use reactable()
on a data frame or matrix:
library(reactable)
reactable(iris)
You can embed tables in R Markdown documents:
```{r}
library(reactable)
reactable(iris)
```
Or use them in Shiny applications:
library(shiny)
library(reactable)
ui <- fluidPage(
reactableOutput("table")
)
server <- function(input, output) {
output$table <- renderReactable({
reactable(iris)
})
}
shinyApp(ui, server)
To learn more about using reactable, check out the examples below.
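For instance, reactable() accepts a number of options for searching, filtering, and paging, and per-column settings via colDef(). A small sketch:

```r
library(reactable)

reactable(
  iris,
  searchable = TRUE,     # global search box above the table
  filterable = TRUE,     # per-column filter inputs
  defaultPageSize = 5,   # rows shown per page
  columns = list(
    Sepal.Length = colDef(name = "Sepal length (cm)")
  )
)
```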
| IE / Edge | Firefox | Chrome | Safari | Opera |
|---|---|---|---|---|
| IE 11*, Edge | last 2 versions | last 2 versions | last 2 versions | last 2 versions |
* Support for Internet Explorer 11 was deprecated in reactable v0.3.0.9000.
Author: glin
Source Code: https://github.com/glin/reactable
License: View license
1666400100
shinyHeatmap
The goal of {shinyHeatmap} is to provide a free and local alternative to more advanced user tracking platforms such as Hotjar. {shinyHeatmap} generates beautiful and persistent visual heatmaps, representing the app usage across many user sessions.
Commute explorer Shiny app (2021 Shiny Contest winner).
If you ever wondered how people actually use your app, you should give it a try! If you're concerned about data privacy, {shinyHeatmap} only records x and y click coordinates on the window.
You can install the development version of {shinyHeatmap} from GitHub with:
# install.packages("devtools")
devtools::install_github("RinteRface/shinyHeatmap")
The app must have a www folder, since heatmap data are stored in www/heatmap-data.json by default.
In ui.R, wrap the UI inside with_heatmap(). This initializes the canvas to record the click coordinates.
In server.R, call record_heatmap(). Overall, this recovers the coordinates of each click on the JS side and stores them in www/heatmap-<USER_AGENT>-<DATE>.json. This may be used later to preview the heatmap by aggregating all compatible user sessions. For instance, mobile platforms are not aggregated with desktop, since coordinates would be incorrect. With vanilla {shiny} templates like fluidPage, you don't need to change anything. However, with more complex templates, you can pass the target CSS selector of the heatmap container with record_heatmap(target = ".wrapper"). If the app takes time to load, a timeout parameter is available. This could be the case when you rely on packages such as {waiter}.
To download the heatmap locally, you must add download_heatmap() to your app, which will read the data stored in the JSON files, generate the heatmap, and save it as a png file. By default, download_heatmap() will show a tiny UI below your app, which allows you to see a timeline of the app usage as shown below. To disable the UI, you can call download_heatmap(show_ui = FALSE), which will show all the aggregated data as well as take a screenshot of the heatmap area. Don't forget to remove record_heatmap() if you don't want to generate extra logs! In general, you don't want to use download_heatmap() on a deployed app, since end users are not supposed to access and view usage data.
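Putting that together, a preview-only app might look like the following sketch. record_heatmap() is deliberately left out so that previewing does not append new logs:

```r
library(shiny)
library(shinyHeatmap)

# Preview-only app: reads the JSON logs from www/ and
# renders the aggregated heatmap with its small timeline UI.
ui <- with_heatmap(
  fluidPage(titlePanel("Heatmap preview"))
)

server <- function(input, output, session) {
  # No record_heatmap() here: avoid generating extra logs.
  download_heatmap(show_ui = TRUE)
}

shinyApp(ui, server)
```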
Below shows an example to record the heatmap:
library(shiny)
library(shinyHeatmap)
# Define UI for application that draws a histogram
ui <- with_heatmap(
fluidPage(
# Application title
titlePanel("Old Faithful Geyser Data"),
# Sidebar with a slider input for number of bins
sidebarLayout(
sidebarPanel(
sliderInput(
"bins",
"Number of bins:",
min = 1,
max = 50,
value = 30
)
),
# Show a plot of the generated distribution
mainPanel(plotOutput("distPlot"))
)
)
)
# Define server logic required to draw a histogram
server <- function(input, output, session) {
record_heatmap()
output$distPlot <- renderPlot({
# generate bins based on input$bins from ui.R
x <- faithful[, 2]
bins <- seq(min(x), max(x), length.out = input$bins + 1)
# draw the histogram with the specified number of bins
hist(x, breaks = bins, col = 'darkgray', border = 'white')
})
}
# Run the application
shinyApp(ui = ui, server = server)
{shinyHeatmap} allows you to tweak the heatmap style with a few lines of code. This may be achieved with the options parameter, which expects a list of properties available in the heatmap.js documentation. For instance, below we change the point radius and colors:
download_heatmap(
options = list(
radius = 10,
maxOpacity = .5,
minOpacity = 0,
blur = .75,
gradient = list(
".5" = "blue",
".8" = "red",
".95" = "white"
)
)
)
This is ideal if your app contains custom design like in the following example.
{shinyHeatmap} is proudly powered by the excellent and free heatmap.js library. Thanks @pa7 for making this possible.
Author: RinteRface
Source Code: https://github.com/RinteRface/shinyHeatmap
License: Unknown, MIT licenses found