shinyauthr is an R package providing module functions that can be used to add an authentication layer to your shiny apps.
You can install the package from CRAN.
install.packages("shinyauthr")
Or install the development version from GitHub with the remotes package.
remotes::install_github("paulc91/shinyauthr")
Code for example apps using various UI frameworks can be found in inst/shiny-examples. You can launch 3 example apps with the runExample function.
# login with user1 pass1 or user2 pass2
shinyauthr::runExample("basic")
shinyauthr::runExample("shinydashboard")
shinyauthr::runExample("navbarPage")
The package provides 2 module functions each with a UI and server element:
loginUI()
loginServer()
logoutUI()
logoutServer()
Note: the server modules use shiny's new (version >= 1.5.0) shiny::moduleServer method as opposed to the shiny::callModule method used by the now-deprecated shinyauthr::login and shinyauthr::logout functions. These functions will remain in the package for backwards compatibility, but it is recommended you migrate to the new server functions. This will require some adjustments to the module server calling method used in your app. For details on how to migrate, see the 'Migrating from callModule to moduleServer' section of Modularizing Shiny app code.
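For reference, the calling convention changes roughly like this (a sketch only; module arguments are elided):
# old, deprecated calling style
credentials <- shiny::callModule(shinyauthr::login, "login", ...)
# new calling style
credentials <- shinyauthr::loginServer(id = "login", ...)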
Below is a minimal reproducible example of how to use the authentication modules in a shiny app. Note that this package invisibly calls shinyjs::useShinyjs() internally, so there is no need for you to do so yourself (although there is no harm if you do).
library(shiny)
# dataframe that holds usernames, passwords and other user data
user_base <- tibble::tibble(
user = c("user1", "user2"),
password = c("pass1", "pass2"),
permissions = c("admin", "standard"),
name = c("User One", "User Two")
)
ui <- fluidPage(
# add logout button UI
div(class = "pull-right", shinyauthr::logoutUI(id = "logout")),
# add login panel UI function
shinyauthr::loginUI(id = "login"),
# setup table output to show user info after login
tableOutput("user_table")
)
server <- function(input, output, session) {
# call login module supplying data frame,
# user and password cols and reactive trigger
credentials <- shinyauthr::loginServer(
id = "login",
data = user_base,
user_col = user,
pwd_col = password,
log_out = reactive(logout_init())
)
# call the logout module with reactive trigger to hide/show
logout_init <- shinyauthr::logoutServer(
id = "logout",
active = reactive(credentials()$user_auth)
)
output$user_table <- renderTable({
# use req to only render results when credentials()$user_auth is TRUE
req(credentials()$user_auth)
credentials()$info
})
}
shinyApp(ui = ui, server = server)
When the login module is called, it returns a reactive list containing 2 elements:
user_auth
info
The initial values of these are FALSE and NULL respectively. However, given a data frame or tibble containing user names, passwords and other user data (optional), the login module will assign a user_auth value of TRUE if the user supplies a matching user name and password. The value of info then becomes the row of data associated with that user, which can be used in the main app to control content based on user permission variables etc.
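For instance, a minimal sketch (using the user_base defined above) that renders an output only for users whose permissions value is "admin":
output$admin_panel <- renderUI({
  req(credentials()$user_auth)
  # info holds the matched user's row, so the permissions column is available here
  if (credentials()$info$permissions == "admin") {
    p("Admin-only content")
  }
})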
The logout button will only show when user_auth is TRUE. Clicking the button will reset user_auth back to FALSE, which will hide the button and show the login panel again.
You can set the code in your server functions to only run after a successful login through use of the req() function inside all reactives, renders and observers. In the example above, using req(credentials()$user_auth) inside the renderTable function ensures the table showing the returned user information is only rendered when user_auth is TRUE.
Most authentication systems use browser cookies to avoid returning users having to re-enter their user name and password every time they return to the app. shinyauthr provides a method for cookie-based automatic login, but you must create your own functions to save and load session info in a database with persistent data storage.
The first required function must accept two parameters, user and sessionid. The first of these is the user name for login. The second is a randomly generated string that identifies the session. The app asks the user's web browser to save this session id as a cookie.
The second required function is called without parameters and must return a data.frame of valid user and sessionid values. If the user's web browser sends your app a cookie that appears in the sessionid column, then the corresponding user is automatically logged in.
Pass these functions to the login module via shinyauthr::loginServer(...) as the cookie_setter and cookie_getter parameters. A minimal example, using RSQLite as a local database to write and store user session data, is below.
library(shiny)
library(dplyr)
library(lubridate)
library(DBI)
library(RSQLite)
# connect to, or setup and connect to local SQLite db
if (file.exists("my_db_file")) {
db <- dbConnect(SQLite(), "my_db_file")
} else {
db <- dbConnect(SQLite(), "my_db_file")
dbCreateTable(db, "sessionids", c(user = "TEXT", sessionid = "TEXT", login_time = "TEXT"))
}
# a user who has not visited the app for this many days
# will be asked to login with user name and password again
cookie_expiry <- 7 # Days until session expires
# This function must accept two parameters: user and sessionid. It will be called whenever the user
# successfully logs in with a password. This function saves to your database.
add_sessionid_to_db <- function(user, sessionid, conn = db) {
tibble(user = user, sessionid = sessionid, login_time = as.character(now())) %>%
dbWriteTable(conn, "sessionids", ., append = TRUE)
}
# This function must return a data.frame with columns user and sessionid. Other columns are also okay
# and will be made available to the app after login as columns in credentials()$info
get_sessionids_from_db <- function(conn = db, expiry = cookie_expiry) {
dbReadTable(conn, "sessionids") %>%
mutate(login_time = ymd_hms(login_time)) %>%
as_tibble() %>%
filter(login_time > now() - days(expiry))
}
# dataframe that holds usernames, passwords and other user data
user_base <- tibble::tibble(
user = c("user1", "user2"),
password = c("pass1", "pass2"),
permissions = c("admin", "standard"),
name = c("User One", "User Two")
)
ui <- fluidPage(
# add logout button UI
div(class = "pull-right", shinyauthr::logoutUI(id = "logout")),
# add login panel UI function
shinyauthr::loginUI(id = "login", cookie_expiry = cookie_expiry),
# setup table output to show user info after login
tableOutput("user_table")
)
server <- function(input, output, session) {
# call the logout module with reactive trigger to hide/show
logout_init <- shinyauthr::logoutServer(
id = "logout",
active = reactive(credentials()$user_auth)
)
# call login module supplying data frame, user and password cols
# and reactive trigger
credentials <- shinyauthr::loginServer(
id = "login",
data = user_base,
user_col = user,
pwd_col = password,
cookie_logins = TRUE,
sessionid_col = sessionid,
cookie_getter = get_sessionids_from_db,
cookie_setter = add_sessionid_to_db,
log_out = reactive(logout_init())
)
# pulls out the user information returned from login module
user_data <- reactive({
credentials()$info
})
output$user_table <- renderTable({
# use req to only render results when credentials()$user_auth is TRUE
req(credentials()$user_auth)
user_data() %>%
mutate(across(starts_with("login_time"), as.character))
})
}
shinyApp(ui = ui, server = server)
Hashing passwords with sodium
If you are hosting your user passwords on the internet, it is a good idea to first hash them. You can use the sodium package to do this. Sodium uses a slow hashing algorithm that is specifically designed to protect stored passwords from brute-force attacks. You then tell the shinyauthr::loginServer module that your passwords have been hashed with sodium, and shinyauthr will verify the submitted password against the stored hash when login is attempted (password hashes cannot be decrypted, only verified). Your plain text passwords must be a character vector, not factors, when hashing for this to work, as shiny inputs are passed as character strings.
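The check behaves like sodium's own verifier, sketched here with the hashed user base from the example below:
# returns TRUE when the plain text password matches the stored hash
sodium::password_verify(user_base$password[1], "pass1")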
For example, a sample user base like the following can be incorporated for use with shinyauthr:
# create a user base then hash passwords with sodium
# then save to an rds file in app directory
library(sodium)
user_base <- tibble::tibble(
user = c("user1", "user2"),
password = purrr::map_chr(c("pass1", "pass2"), sodium::password_store),
permissions = c("admin", "standard"),
name = c("User One", "User Two")
)
saveRDS(user_base, "user_base.rds")
# in your app code, read in the user base rds file
user_base <- readRDS("user_base.rds")
# then when calling the module set sodium_hashed = TRUE
credentials <- shinyauthr::loginServer(
id = "login",
data = user_base,
user_col = user,
pwd_col = password,
sodium_hashed = TRUE,
log_out = reactive(logout_init())
)
shinyauthr originally borrowed some code from treysp's shiny_password template, with the goal of making implementation simpler for end users and allowing the login/logout UIs to fit easily into any UI framework, including shinydashboard.
Thanks to Michael Dewar for his contribution of cookie-based authentication. Some code was borrowed from calligross's Shiny Cookie Based Authentication Example and from an earlier PR from aqualogy.
I'm not a security professional so cannot guarantee this authentication procedure to be foolproof. It is ultimately the shiny app developer's responsibility not to expose any sensitive content to the client without the necessary login criteria being met.
I would welcome any feedback on any potential vulnerabilities in the process. I know that apps hosted on a server without an SSL certificate could be open to interception of user names and passwords submitted by a user. As such, I would not recommend the use of shinyauthr without an HTTPS connection.
For apps intended for use within commercial organisations, I would recommend one of RStudio's commercial shiny hosting options, or shinyproxy, both of which have built in authentication options.
However, I hope that having an easy-to-implement open-source shiny authentication option like this will prove useful when alternative options are not feasible.
Paul Campbell
Author: PaulC91
Source Code: https://github.com/PaulC91/shinyauthr
License: Unknown, MIT license found
A cross-platform command line REPL for the rapid experimentation and exploration of C#. It supports intellisense, installing NuGet packages, and referencing local .NET projects and assemblies.
C# REPL is a .NET 6 global tool, and runs on Windows 10, Mac OS, and Linux. It can be installed via:
dotnet tool install -g csharprepl
If you're running on Mac OS Catalina (10.15) or later, make sure you follow any additional directions printed to the screen. You may need to update your PATH variable in order to use .NET global tools.
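For example, this usually means appending the default .NET global tools directory to your PATH (the path shown is the standard .NET tools location):
export PATH="$PATH:$HOME/.dotnet/tools"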
After installation is complete, run csharprepl to begin. C# REPL can be updated via dotnet tool update -g csharprepl.
Run csharprepl from the command line to begin an interactive session. The default colorscheme uses the color palette defined by your terminal, but these colors can be changed using a theme.json file provided as a command line argument.
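For example (the theme file path here is illustrative):
csharprepl --theme ./my-theme.json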
Type some C# into the prompt and press Enter to run it. The result, if any, will be printed:
> Console.WriteLine("Hello World")
Hello World
> DateTime.Now.AddDays(8)
[6/7/2021 5:13:00 PM]
To evaluate multiple lines of code, use Shift+Enter to insert a newline:
> var x = 5;
var y = 8;
x * y
40
Additionally, if the statement is not a "complete statement" a newline will automatically be inserted when Enter is pressed. For example, in the below code, the first line is not a syntactically complete statement, so when we press enter we'll go down to a new line:
> if (x == 5)
| // caret position, after we press Enter on Line 1
Finally, pressing Ctrl+Enter will show a "detailed view" of the result. For example, for the DateTime.Now expression below, on the first line we pressed Enter, and on the second line we pressed Ctrl+Enter to view more detailed output:
> DateTime.Now // Pressing Enter shows a reasonable representation
[5/30/2021 5:13:00 PM]
> DateTime.Now // Pressing Ctrl+Enter shows a detailed representation
[5/30/2021 5:13:00 PM] {
Date: [5/30/2021 12:00:00 AM],
Day: 30,
DayOfWeek: Sunday,
DayOfYear: 150,
Hour: 17,
InternalKind: 9223372036854775808,
InternalTicks: 637579915804530992,
Kind: Local,
Millisecond: 453,
Minute: 13,
Month: 5,
Second: 0,
Ticks: 637579915804530992,
TimeOfDay: [17:13:00.4530992],
Year: 2021,
_dateData: 9860951952659306800
}
A note on semicolons: C# expressions do not require semicolons, but statements do. If a statement is missing a required semicolon, a newline will be added instead of trying to run the syntactically incomplete statement; simply type the semicolon to complete the statement.
> var now = DateTime.Now; // assignment statement, semicolon required
> DateTime.Now.AddDays(8) // expression, we don't need a semicolon
[6/7/2021 5:03:05 PM]
Use the #r command to add assembly or NuGet references:
- Reference an assembly with #r "AssemblyName" or #r "path/to/assembly.dll"
- Reference a project with #r "path/to/project.csproj". Solution files (.sln) can also be referenced.
- Install a NuGet package with #r "nuget: PackageName" to install the latest version, or #r "nuget: PackageName, 13.0.5" to install a specific version (13.0.5 in this case).
to install a specific version (13.0.5 in this case).To run ASP.NET applications inside the REPL, start the csharprepl
application with the --framework
parameter, specifying the Microsoft.AspNetCore.App
shared framework. Then, use the above #r
command to reference the application DLL. See the Command Line Configuration section below for more details.
csharprepl --framework Microsoft.AspNetCore.App
The C# REPL supports multiple configuration flags to control startup, behavior, and appearance:
csharprepl [OPTIONS] [response-file.rsp] [script-file.csx] [-- <additional-arguments>]
Supported options are:
- -r <dll> or --reference <dll>: Reference an assembly, project file, or NuGet package. Can be specified multiple times. Uses the same syntax as #r statements inside the REPL. For example, csharprepl -r "nuget:Newtonsoft.Json" "path/to/myproj.csproj"
- -u <namespace> or --using <namespace>: Add a using statement. Can be specified multiple times.
- -f <framework> or --framework <framework>: Reference a shared framework. The available shared frameworks depend on the local .NET installation, and can be useful when running an ASP.NET application from the REPL (e.g. Microsoft.AspNetCore.App, as shown above).
- -t <theme.json> or --theme <theme.json>: Read a theme file for syntax highlighting. This theme file associates C# syntax classifications with colors. The color values can be full RGB, or ANSI color names (defined in your terminal's theme). The NO_COLOR standard is supported.
- --trace: Produce a trace file in the current directory that logs CSharpRepl internals. Useful for CSharpRepl bug reports.
- -v or --version: Show version number and exit.
- -h or --help: Show help and exit.
- response-file.rsp: A filepath of an .rsp file, containing any of the above command line options.
- script-file.csx: A filepath of a .csx file, containing lines of C# to evaluate before starting the REPL. Arguments to this script can be passed as <additional-arguments>, after a double hyphen (--), and will be available in a global args variable.
If you have dotnet-suggest enabled, all options can be tab-completed, including values provided to --framework and .NET namespaces provided to --using.
C# REPL is a standalone software application, but it can be useful to integrate it with other developer tools:
To add C# REPL as a menu entry in Windows Terminal, add the following profile to Windows Terminal's settings.json configuration file (under the JSON property profiles.list):
{
"name": "C# REPL",
"commandline": "csharprepl"
},
To get the exact colors shown in the screenshots in this README, install the Windows Terminal Dracula theme.
To use the C# REPL with Visual Studio Code, simply run the csharprepl command in the Visual Studio Code terminal. To send commands to the REPL, use the built-in Terminal: Run Selected Text In Active Terminal command from the Command Palette (workbench.action.terminal.runSelectedText).
To add the C# REPL to the Windows Start Menu for quick access, you can run the following PowerShell command, which will start C# REPL in Windows Terminal:
$shell = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut("$env:appdata\Microsoft\Windows\Start Menu\Programs\csharprepl.lnk")
$shortcut.TargetPath = "wt.exe"
$shortcut.Arguments = "-w 0 nt csharprepl.exe"
$shortcut.Save()
You may also wish to add a shorter alias for C# REPL, which can be done by creating a .cmd file somewhere on your path. For example, put the following contents in C:\Users\username\.dotnet\tools\csr.cmd:
wt -w 0 nt csharprepl
This will allow you to launch C# REPL by running csr from anywhere that accepts Windows commands, like the Windows Run dialog.
This project is far from being the first REPL for C#. Here are some other projects; if this project doesn't suit you, another one might!
Visual Studio's C# Interactive pane is full-featured (it has syntax highlighting and intellisense) and is part of Visual Studio. This deep integration with Visual Studio is both a benefit from a workflow perspective, and a drawback as it's not cross-platform. As far as I know, the C# Interactive pane does not support NuGet packages or navigating to documentation/source code. Subjectively, it does not follow typical command line keybindings, so can feel a bit foreign.
csi.exe ships with C# and is a command line REPL. It's great because it's a cross platform REPL that comes out of the box, but it doesn't support syntax highlighting or autocompletion.
dotnet script allows you to run C# scripts from the command line. It has a REPL built-in, but the predominant focus seems to be as a script runner. It's a great tool, though, and has a strong community following.
dotnet interactive is a tool from Microsoft that creates a Jupyter notebook for C#, runnable through Visual Studio Code. It also provides a general framework useful for running REPLs.
Download Details:
Author: waf
Source Code: https://github.com/waf/CSharpRepl
License: MPL-2.0 License
What is 2FA
Two-Factor Authentication (or 2FA, as it is often referred to) is an extra layer of security that is used to provide users an additional level of protection when securing access to an account.
Employing a 2FA mechanism is a vast improvement in security over the Single-Factor Authentication method of simply employing a username and password. Using this method, accounts that have 2FA enabled require the user to enter a one-time passcode that is generated by an external application. The 2FA passcode (usually a six-digit number) must be entered into the passcode field before access is granted. The 2FA input is usually required directly after the username and password are entered by the client.
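As a rough illustration of how such a passcode is typically generated, here is a minimal time-based one-time password (TOTP, RFC 6238) sketch in C#; it is illustrative only, not the code of any particular authenticator app:
using System;
using System.Security.Cryptography;
using System.Text;

class TotpExample
{
    // Derive a 6-digit code from a shared secret and the current 30-second time step.
    static string GenerateCode(byte[] secret, DateTimeOffset time)
    {
        long timeStep = time.ToUnixTimeSeconds() / 30;            // 30-second window
        byte[] counter = BitConverter.GetBytes(timeStep);
        if (BitConverter.IsLittleEndian) Array.Reverse(counter);  // RFC 4226 requires big-endian
        using var hmac = new HMACSHA1(secret);
        byte[] hash = hmac.ComputeHash(counter);
        int offset = hash[hash.Length - 1] & 0x0F;                // dynamic truncation
        int binary = ((hash[offset] & 0x7F) << 24)
                   | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8)
                   | hash[offset + 3];
        return (binary % 1_000_000).ToString("D6");               // six-digit passcode
    }

    static void Main()
    {
        // In practice the shared secret comes from the QR code / setup key at enrollment.
        byte[] secret = Encoding.ASCII.GetBytes("12345678901234567890");
        Console.WriteLine(GenerateCode(secret, DateTimeOffset.UtcNow));
    }
}
Both the server and the authenticator app compute this value independently, which is why the codes match without any network round trip.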
Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.
Name | Framework(s)
---|---
dotnet-script (global tool) | net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as Nuget) | net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core | netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel | netstandard2.0
Dotnet.Script.DependencyModel.Nuget | netstandard2.0
The only thing we need to install is the .NET Core 3.1 or .NET 5.0 SDK.
.NET Core 2.1 introduced the concept of global tools, meaning that you can install dotnet-script using nothing but the .NET CLI.
dotnet tool install -g dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.
You can invoke the tool using the following command: dotnet-script
The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.
dotnet tool list -g
Package Id Version Commands
---------------------------------------------
dotnet-script 0.22.0 dotnet-script
dotnet tool uninstall dotnet-script -g
Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.
On Windows, the tool can also be installed with Chocolatey:
choco install dotnet.script
We also provide a PowerShell script for installation.
(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex
And a bash script:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash
If permission is denied, we can try with sudo:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash
A Dockerfile for running dotnet-script in a Linux container is available. Build:
cd build
docker build -t dotnet-script -f Dockerfile ..
And run:
docker run -it dotnet-script --version
You can manually download all the releases in zip format from the GitHub releases page.
Our typical helloworld.csx might look like this:
Console.WriteLine("Hello world!");
That is all it takes and we can execute the script. Args are accessible via the global Args array.
dotnet script helloworld.csx
Simply create a folder somewhere on your system and issue the following command.
dotnet script init
This will create main.csx along with the launch configuration needed to debug the script in VS Code.
.
├── .vscode
│ └── launch.json
├── main.csx
└── omnisharp.json
We can also initialize a folder using a custom filename.
dotnet script init custom.csx
Instead of main.csx, which is the default, we now have a file named custom.csx.
.
├── .vscode
│ └── launch.json
├── custom.csx
└── omnisharp.json
Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.
Scripts can be executed directly from the shell as if they were executables.
foo.csx arg1 arg2 arg3
OSX/Linux
Just like all scripts, on OSX/Linux you need to have a #! (shebang) and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx, it will automatically have the #! directive and be marked as executable.
The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script
#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");
You can execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution.
foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3
All arguments after -- are passed to the script in the following way:
dotnet script foo.csx -- arg1 arg2 arg3
Then you can access the arguments in the script context using the global Args collection:
foreach (var arg in Args)
{
Console.WriteLine(arg);
}
All arguments before -- are processed by dotnet script. For example, the following command line
dotnet script -d foo.csx -- -d
will pass the -d before -- to dotnet script and enable debug mode, whereas the -d after -- is passed to the script for its own interpretation of the argument.
dotnet script has built-in support for referencing NuGet packages directly from within the script.
#r "nuget: AutoMapper, 6.1.0"
Note: Omnisharp needs to be restarted after adding a new package reference
We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp, which provides language services for packages resolved from these package sources.
As an alternative to maintaining a local NuGet.Config file, we can define these package sources globally, either at the user level or at the computer level, as described in Configuring NuGet Behaviour.
It is also possible to specify package sources when executing the script.
dotnet script foo.csx -s https://SomePackageSource
Multiple packages sources can be specified like this:
dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource
Dotnet-Script can create a standalone executable or DLL for your script.
Switch | Long switch | Description
---|---|---
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory.
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script.
 | --dll | Publish to a .dll instead of an executable.
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Default is "Debug".
-d | --debug | Enables debug output.
-r | --runtime | The runtime used when publishing the self-contained executable. Defaults to your current runtime.
The executable can be run directly, independent of a dotnet installation, while the DLL can be run using the dotnet CLI like this:
dotnet script exec {path_to_dll} -- arg1 arg2
We provide two types of caching, the dependency cache and the execution cache, which are explained in detail below. In order for either of these caches to be enabled, all NuGet package references must be specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
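For example, a script that pins its package reference to an exact version (and therefore stays cacheable) might look like this (the package choice is just an illustration):
#r "nuget: Newtonsoft.Json, 13.0.1"
using Newtonsoft.Json;

// Pinning "13.0.1" rather than a floating version lets both caches apply.
Console.WriteLine(JsonConvert.SerializeObject(new { cached = true }));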
In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file, from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already-generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.
In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.
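Conceptually, the invalidation check behaves like this sketch (not dotnet-script's actual implementation; assumes .NET 5+ for Convert.ToHexString):
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static string ComputeScriptHash(IEnumerable<string> scriptFiles)
{
    // Concatenate all script sources in a stable order and hash them;
    // recompilation is needed only when this hash changes.
    var combined = string.Concat(scriptFiles.OrderBy(f => f).Select(File.ReadAllText));
    using var sha = SHA256.Create();
    return Convert.ToHexString(sha.ComputeHash(Encoding.UTF8.GetBytes(combined)));
}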
You can override this automatic caching by passing the --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.
The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):
1. DOTNET_SCRIPT_CACHE_LOCATION, if defined and its value is not empty.
2. On Linux: $XDG_CACHE_HOME if defined, otherwise $HOME/.cache
3. On macOS: ~/Library/Caches
4. Otherwise: Path.GetTempPath for the platform.
are over. One major feature of dotnet script
is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5(start debugging)
Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.
A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.
The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.
└── contentFiles
└── csx
└── netstandard2.0
└── main.csx
This example contains just the main.csx file in the root folder, but packages may have multiple script files, either in the root folder or in subfolders below the root folder.
When loading a script package we will look for an entry point script to be loaded, identified as a script called main.csx in the root folder. If the entry point script cannot be determined, we will simply load all the script files in the package.
The advantage of using an entry point script is that we can control loading other scripts from the package.
To consume a script package, all we need to do is specify the NuGet package in the #load directive.
The following example loads the simple-targets package that contains script files to be included in our script.
#load "nuget:simple-targets-csx, 6.0.0"
using static SimpleTargets;
var targets = new TargetDictionary();
targets.Add("default", () => Console.WriteLine("Hello, world!"));
Run(Args, targets);
Note: Debugging also works for script packages, so that we can easily step into the scripts that are brought in using the #load directive.
Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.
This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.
This Gist contains a script that prints out "Hello World"
We can execute the script like this
dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx
That is a pretty long URL, so why not make it a TinyURL, like this:
dotnet script https://tinyurl.com/y8cda9zt
A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);
Tip: Put these methods as top-level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.
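A hypothetical usage, assuming those helpers live in a file called scriptPath.csx next to your script:
#load "scriptPath.csx"
using System.IO;

// resolve data.json relative to this script, not the current working directory
var dataFile = Path.Combine(GetScriptFolder(), "data.json");
Console.WriteLine(dataFile);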
This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.
The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.
Once dotnet-script starts you will see a prompt for input. You can start typing C# code there.
~$ dotnet script
> var x = 1;
> x+x
2
If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:
~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>
The REPL also supports inline NuGet packages, meaning NuGet packages can be installed into the REPL from within the REPL. This is done via our #r and #load from NuGet support and uses identical syntax.
~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]
Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.
~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();
Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:
Command | Description |
---|---|
#load | Load a script into the REPL (same as #load usage in CSX) |
#r | Load an assembly into the REPL (same as #r usage in CSX) |
#reset | Reset the REPL back to initial state (without restarting it) |
#cls | Clear the console screen without resetting the REPL state |
#exit | Exits the REPL |
You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i flag.
For example, given the following CSX script:
var msg = "Hello World";
Console.WriteLine(msg);
When you run this with the -i flag, Hello World is printed, the REPL starts and the msg variable is available in the REPL context.
~$ dotnet script foo.csx -i
Hello World
>
You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:
~$ dotnet script
> #load "foo.csx"
Hello World
>
The following example shows how we can pipe data in and out of a script.
The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper());
}
We can now simply pipe the output from one command into our script like this.
echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT
The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickProcess}"
}
To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.
public static void WaitForDebugger()
{
Console.WriteLine("Attach Debugger (VS Code)");
while(!Debugger.IsAttached)
{
}
}
To debug the script when executing it from the command line we can do something like
WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}
Now when we run the script from the command line we will get
$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)
This now gives us a chance to attach the debugger before stepping into the script. From VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.
Once that is done we should see our breakpoint being hit.
By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attach a debugger for long-running scripts.
There are however situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.
We can specify this when executing the script.
dotnet script foo.csx -c release
Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into compiler errors. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.
Nullable reference types are turned off by default, and the way we enable them is using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.
#!/usr/bin/env dotnet-script
#nullable enable
string name = null;
Trying to execute the script will result in the following error:
main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.
We will also see this when working with scripts in VS Code under the problems panel.
Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License
March 25, 2021 - Deepak@321
Welcome to my blog. In this article, we will learn about the top 20 most useful Python modules and packages - the modules every Python developer should know.
Hello everybody and welcome back! In this article I'm going to be sharing with you 20 Python modules you need to know. I've split these modules into four different categories to make things a little bit easier.
Near the end of the article, I also share my personal favorite Python module, so make sure you stay tuned to see what that is. Also, make sure to share your favorite Python module with me in the comments down below.