Simple Development Server with Live-reload Capability for Julia


This is a simple and lightweight development web server written in Julia, based on HTTP.jl. It has live-reload capability: when you modify a file, every browser (tab) currently displaying the corresponding page is automatically refreshed.

LiveServer is inspired by Python's http.server and Node's browser-sync.

Installation

To install it in Julia ≥ 1.3, use the package manager with

pkg> add LiveServer

For Julia [1.0, 1.3), you can use LiveServer's version 0.7.4:

pkg> add LiveServer@0.7.4

Make it a shell command

Since LiveServer is a small, fast-loading package with one main function (serve), it can be convenient to make it a shell command (the name lss is used here, but you could pick another):

alias lss='julia -e "import LiveServer as LS; LS.serve(launch_browser=true)"'

You can then use lss in any directory to show a directory listing in your browser; if the directory has an index.html, that file will be rendered in your browser instead.

Usage

The main function LiveServer exports is serve, which starts a server listening to the current folder and makes its content available to a browser. The following code creates an example directory and serves it:

julia> using LiveServer
julia> LiveServer.example() # creates an "example/" folder with some files
julia> cd("example")
julia> serve() # starts the local server & the file watching
✓ LiveServer listening on http://localhost:8000/ ...
  (use CTRL+C to shut down)

Open a browser and go to http://localhost:8000/ to see the content being rendered; try modifying files (e.g. index.html) and watch the changes being rendered immediately in the browser.

In the REPL:

julia> using LiveServer
julia> serve(host="0.0.0.0", port=8001, dir=".") # starts the remote server & the file watching
✓ LiveServer listening on http://0.0.0.0:8001...
  (use CTRL+C to shut down)

In the terminal:

julia -e 'using LiveServer; serve(host="0.0.0.0", port=8001, dir=".")'

Open a browser and go to http://localhost:8001/ to see the rendered content of index.html or, if it doesn't exist, the content of the directory. You can set the port to a custom number. This is similar to http.server in Python.

Serve docs

servedocs is a convenience function that runs Documenter alongside LiveServer, watching your doc files and re-rendering them in your browser whenever modifications are detected.

Assuming you are in directory/to/YourPackage.jl, that you have a docs/ folder as prescribed by Documenter.jl, and that LiveServer is installed in your global environment, you can run:

$ julia

pkg> activate docs

julia> using YourPackage, LiveServer

julia> servedocs()
[ Info: SetupBuildDirectory: setting up build directory.
[ Info: ExpandTemplates: expanding markdown templates.
...
└ Deploying: ✘
✓ LiveServer listening on http://localhost:8000/ ...
  (use CTRL+C to shut down)

Open a browser and go to http://localhost:8000/ to see your docs being rendered; try modifying files (e.g. docs/index.md) and watch the changes being rendered in the browser.

To run the server with one line of code:

$ julia --project=docs -ie 'using YourPackage, LiveServer; servedocs()'

Note: this works with Literate.jl as well. See the docs.

Download Details:

Author: tlienart
Source Code: https://github.com/tlienart/LiveServer.jl 
License: View license

#julia #webserver #websocket 


How to Build a Node.js Web Server Using the HTTP Module

When you open a web page in your browser, you send a request to another computer connected to the internet, which responds by returning the web page to you. The machine you are communicating with over the web is a web server. A web server accepts HTTP requests from clients, such as your browser, and answers with an HTTP response, such as JSON from an API or an HTML page.

A large amount of software is required for a server to return a web page. This software is commonly split into two parts: the front end and the back end. Front-end code is concerned with how the material is presented, such as the color of the navigation bar and the styling of text. Back-end code handles the exchange, processing, and storage of data.

In this article, you will learn how to build web servers with the Node.js HTTP module.

Requirements.

  • Make sure Node.js is installed on your development machine. This tutorial uses Node.js 12.22.5.
  • The Node.js platform makes it easy to create web servers out of the box. Before starting, make sure you understand the fundamentals of Node.js.
  • This article also uses asynchronous programming in one of its sections.

Step 1: Setting Up a Basic HTTP Server.

Start by building a server that delivers plain text to the user. This will introduce the core ideas needed to build a server and lay the groundwork for returning more complex data formats, such as JSON.

First, we need an accessible coding environment in which to carry out this and the remaining activities in this article. In the terminal, create a folder called web-server:

$ mkdir web-server

Then navigate into that folder:

$ cd web-server

Create the following file to hold the code:

$ touch hello.js

Open the file in a text editor. We will use nano because it is readily available in the terminal:

$ nano hello.js

We start by loading the HTTP module, which is included with every Node.js installation. In hello.js, add the following line:

web-server/hello.js

const http = require("http");

The HTTP module has a method for creating a server, which we will see shortly.

The next step is to declare two variables, the host and the port, that our server will bind to:

web-server/hello.js

...
const host = 'localhost';
const port = 8000;

Web servers, as stated earlier, receive requests from browsers and other clients. A web server is reached by entering a domain name, which a DNS server translates into an IP address. An IP address is a logical numeric sequence that identifies a machine on a network, such as the internet.

The value localhost is a private address that computers use to connect to themselves. It usually maps to the internal IP address 127.0.0.1 and is accessible only from the local machine.

The port is a value that servers use, together with our IP address, as an endpoint or "door". In this example, we will run our web server on port 8000. Ports 8080 and 8000 are commonly used as defaults during development, and in most cases developers use them for HTTP servers.

Once we bind our server to this host and port, we can reach it by visiting http://localhost:8000 in a local browser.

Let's create a dedicated function, called a request listener in Node.js. This function is responsible for handling incoming HTTP requests and returning HTTP responses. It must take two inputs: a request object and a response object. The request object stores all the data from the incoming HTTP request. The response object is responsible for sending the HTTP response back to the client.

When someone reaches our web server, we want it to deliver the message "My Web Server!"

Next, let's add that function:

web-server/hello.js

...
const requestListener = function (req, res) {
    res.writeHead(200);
    res.end("My Web Server!");
};

A function is usually named after what it does. For example, if we wrote a request listener that returned a list of names, we would probably call it listNames(). Since this is an example scenario, we will call it requestListener.

In Node.js, every request listener takes two arguments: req and res (we can name them differently if we like). The user's HTTP request is stored in the Request object, which corresponds to the first parameter, req. We build the HTTP response we send back to the user by interacting with the Response object in the second parameter, res.

The first line, res.writeHead(200);, sets the HTTP status code of the response. HTTP status codes indicate how successfully a server handled an HTTP request. Here, the status code 200 corresponds to "OK".

The next line of the function, res.end("My Web Server!");, sends the HTTP response back to the client that requested it. This method returns any remaining data the server has to send; in this scenario, that is text data.

Finally, we can build our server and wire up our request listener:

web-server/hello.js

const http = require("http");

const host = 'localhost';
const port = 8000;

const requestListener = function (req, res) {
    res.writeHead(200);
    res.end("My Web Server!");
};

const server = http.createServer(requestListener);
server.listen(port, host, () => {
    console.log(`Server is running on http://${host}:${port}`);
});

You can save and exit nano with CTRL+X.

We use the http module's createServer() function to create a new server object. This server handles HTTP requests and forwards them to our requestListener() function.

After building the server, we need to give it a network address. That is what the server.listen() method does. It accepts three arguments: port, host, and a callback function that is invoked when the server starts listening.

Each of these arguments is optional, but it is good practice to state explicitly which port and host a web server should use. When deploying web servers to multiple environments, knowing the port and host they run on is essential for configuring load balancing or a DNS alias.

The callback logs a message to the console, letting us know when the server has started listening for connections.

We now have a web server in fewer than twenty lines of code. Let's see it in action and put it through its paces by running the program:

$ node hello.js

We will see the following output in the console:

Output
Server is running on http://localhost:8000

In a second terminal window, we will interact with the server using curl, a command-line tool for sending and receiving data over a network. Enter the following command to send an HTTP GET request to our running server:

$ curl http://localhost:8000

When we press ENTER, our terminal displays the following:

Output
My Web Server!

We have now configured a server and received our first server response.

Let's take a closer look at what the preceding code actually does. First, a function named requestListener is created, which accepts a request object and a response object as inputs.

The request object contains information such as the requested URL, which we ignore in this case while still returning "My Web Server!".
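To see what the request object carries, here is a minimal variation of the listener (an illustrative sketch, not code from the original tutorial) that logs the method and URL of each incoming request before responding:

```javascript
const requestListener = function (req, res) {
    // req exposes details of the incoming request,
    // e.g. "GET /" or "GET /about"
    console.log(`${req.method} ${req.url}`);
    res.writeHead(200);
    res.end("My Web Server!");
};
```

Swapped into hello.js in place of the earlier listener, this would echo every request to the server's console while still answering with the same text.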

The response object is how we send the headers and body of the response back to whoever made the request. Here we return a 200 response code (indicating success) with the body "My Web Server!". Other headers, such as Content-Type, would also be set here.
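For example, a listener that returns JSON instead of plain text would set the Content-Type header before sending the body. This sketch (not part of the original tutorial) uses the same Node response API:

```javascript
const requestListener = function (req, res) {
    // Announce the body format before sending it
    res.setHeader("Content-Type", "application/json");
    res.writeHead(200);
    res.end(JSON.stringify({ message: "My Web Server!" }));
};
```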

The http.createServer function then builds a server that calls requestListener whenever a request is received. The following line calls the listen method, which tells the server to listen for user requests on a particular port, 8000 in this example.

And that's it: your most basic Node.js HTTP server.

Before we continue, let's stop the currently running server by pressing CTRL+C. This halts the server and returns us to the command-line prompt.

Server responses on most of the websites and APIs we use are rarely plain text. HTML pages and JSON data are the most popular response formats. Follow-up articles will cover how to return HTTP responses in the most common data types found on the web.
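As a small preview of those formats, returning an HTML page only requires changing the Content-Type and the body; the listener below is a sketch under that assumption, not code from this tutorial:

```javascript
const requestListener = function (req, res) {
    // Tell the client to render the body as HTML
    res.setHeader("Content-Type", "text/html");
    res.writeHead(200);
    res.end("<html><body><h1>My Web Server!</h1></body></html>");
};
```

A browser pointed at the server would now render a heading instead of showing raw text.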

Link: https://medium.com/faun/how-to-build-a-node-js-web-server-using-the-http-module-de21d1759ac4

#node #nodejs #webserver #http

Thierry  Perret

Thierry Perret

1658558153

Comment Créer Un Serveur Web Node.js à L'aide Du Module HTTP

Lorsque vous ouvrez une page Web sur votre navigateur, vous envoyez une demande à un autre ordinateur connecté à Internet, dont la réponse est de vous renvoyer la page Web. Cette machine avec laquelle vous communiquez via le Web est un serveur Web. Un serveur Web reçoit les requêtes HTTP des clients, tels que votre navigateur, et répond avec une réponse HTTP, telle que JSON à partir d'une API ou d'une page HTML.

Une grande quantité de logiciels est nécessaire pour qu'un serveur renvoie une page Web. Ils classent ce logiciel en deux parties : front-end et back-end. La programmation frontale concerne la manière dont le matériel est présenté, comme la couleur de la barre de navigation et le style du texte. Le code back-end est impliqué dans l'échange, le traitement et le stockage des données.

Dans cet article, vous apprendrez à créer des serveurs Web avec le HTTPmodule Node.js.

Conditions.

  • Assurez-vous que Node.js est installé sur votre ordinateur de développement. Ce tutoriel utilise Node.js 12.22.5
  • La plate-forme Node.js facilite la création de serveurs Web prêts à l'emploi. Pour commencer, assurez-vous de comprendre les principes fondamentaux de Node.js.
  • Nous avons également utilisé la programmation asynchrone dans une de nos parties.

Étape 1 : Établir un serveur HTTP de base.

Commencez par créer un serveur qui fournit du texte brut à l'utilisateur. Cela expliquera les idées fondamentales nécessaires pour créer un serveur, qui jettera les bases pour renvoyer des formulaires de données plus compliqués, tels que JSON.

Tout d'abord, nous devons créer un environnement de codage accessible dans lequel mener à bien nos activités et les autres de cet article. Créez un dossier appelé web-serverdans le terminal :

$ mkdir web-server

Accédez ensuite à ce dossier :

$ cd web-server

Créez le fichier suivant pour héberger le code :

$ touch hello.js

Dans un éditeur de texte, ouvrez le fichier. Nous utiliserons nano car il est facilement disponible sur le terminal :

$ nano hello.js

Nous commençons par charger le HTTPmodule, qui est inclus avec toutes les installations Node.js. À hello.js , ajoutez la ligne suivante :

web-server/hello.js

const http = require("http");

Le HTTPmodule dispose d'une méthode pour créer le serveur, que nous verrons plus tard.

L'étape suivante consistera à déclarer deux variables, l'hôte et le port auxquels notre serveur sera lié :

web-server/hello.js

...
const host = 'localhost';
const port = 8000;

Les serveurs Web, comme indiqué précédemment, reçoivent des requêtes des navigateurs et d'autres utilisateurs. Un serveur Web peut être atteint en saisissant un nom de domaine, qu'un serveur DNS convertit en une adresse IP. Une adresse IP est une séquence numérique logique qui identifie une machine sur un réseau, comme Internet.

La valeur localhostest une adresse privée utilisée par les ordinateurs pour se connecter à eux-mêmes. Elle est souvent égale à l'adresse IP interne de 127.0.0.1et n'est accessible qu'à l'ordinateur local.

Le port est une valeur que les serveurs utilisent pour se connecter à notre adresse IP en tant que point de terminaison, ou "porte". Dans cet exemple, nous exécuterons notre serveur Web sur le port 8000. Les ports 8080et 8000sont couramment utilisés comme ports par défaut dans le développement et, dans la plupart des cas, les développeurs les utiliseront pour les serveurs HTTP.

Lorsque nous connectons notre serveur à cet hôte et à ce port, nous pouvons y accéder en visitant http://localhost:8000 dans un navigateur local.

Créons une fonction spécifique appelée écouteur de requête dans Node.js. Cette fonction est chargée de gérer les requêtes HTTP entrantes et de renvoyer les réponses HTTP. Il doit y avoir deux entrées pour cette fonction : un objet de requête et un objet de réponse. L'objet de requête stocke toutes les données de la requête HTTP entrante. L'objet de réponse est responsable de l'envoi des réponses HTTP au serveur.

Lorsque quelqu'un accède à notre serveur Web, nous voulons qu'il délivre le message " My Web Server!"

Ensuite, ajoutons cette fonction :

web-server/hello.js

...const requestListener = function (req, res) {
    res.writeHead(200);
    res.end("My Web Server!");
};

La fonction est généralement nommée d'après ce qu'elle fait. Par exemple, si nous écrivions une fonction d'écouteur de requête qui renvoyait une liste de noms, nous l'appellerions probablement listNames (). Comme il s'agit d'un exemple de scénario, nous l'appellerons requestListener.

Dans Node.js, toutes les méthodes d'écouteur de requête prennent deux arguments : reqet res(nous pouvons les nommer différemment si nous le voulons). La requête HTTP de l'utilisateur est stockée dans un objet Request, qui correspond au premier paramètre, req. Nous créons la réponse HTTP que nous envoyons à l'utilisateur en interagissant avec l'objet Response dans le deuxième paramètre, res.

Dans la première ligne, res.writeHead(200);, détermine le code d'état HTTP de la réponse. Les codes d'état HTTP indiquent dans quelle mesure un serveur a traité avec succès une requête HTTP. Le code d'état 200équivaut à “OK”dans ce cas.

La ligne suivante de la fonction, res.end (“My Web Server!”);, renvoie la réponse HTTP au client qui l'a demandée. Cette méthode récupère toutes les données disponibles sur le serveur. Il renvoie des données textuelles dans ce scénario.

Enfin, nous pouvons construire notre serveur et utiliser notre écouteur de requête :

web-server/hello.js

const http = require('http');  const requestListener = function (req, res) {
   res.writeHead(200);
   res.end('Hello, World!');
 }const server = http.createServer(requestListener); server.listen(8000);

En utilisant CTRL+X, vous pouvez enregistrer et quitter nano.

Dans la première ligne, nous utilisons la fonction du httpmodule createServer()pour créer un nouvel serverobjet. Ce serveur gère les requêtes HTTP et les transmet à notre requestListener()méthode.

Après avoir construit notre serveur, nous devrons lui fournir une adresse réseau. C'est ce que nous faisons avec la server. listen ()méthode. Il prend trois arguments : port, hostet une fonction de rappel qui est appelée lorsque le serveur commence à écouter.

Chacun de ces paramètres est facultatif, mais il s'agit d'une option intelligente pour spécifier le port et l'hôte que nous souhaitons utiliser pour un serveur Web. Lors de l'envoi de serveurs Web vers plusieurs paramètres, l'identification du port et de l'hôte sur lesquels ils fonctionnent est essentielle afin de configurer l'équilibrage de charge ou un alias DNS.

La méthode de rappel envoie un message à la console, nous indiquant quand le serveur a commencé à écouter les connexions.

Nous avons maintenant un serveur Web avec moins de quatorze lignes de code. Voyons-le maintenant en action et mettons-le à l'épreuve en exécutant le programme :

$ node hello.js

Nous obtiendrons la sortie suivante sur la console :

OutputServer is running on http://localhost:8000

Dans une deuxième fenêtre de terminal, nous nous interfacerons avec le serveur via curl, un outil de ligne de commande pour envoyer et recevoir des données sur un réseau. Entrez la commande suivante pour envoyer une requête HTTP GET à notre serveur d'exploitation :

$ curl http://localhost:8000

Lorsque nous appuyons sur ENTER, notre terminal affiche les informations suivantes :

SortieMon serveur Web !

We have now set up a server and received our first server response.

Let's take a closer look at what the preceding code actually does. First, a function named requestListener is defined, which accepts a request object and a response object as inputs.

The request object contains information such as the requested URL, which we ignore in this case, always returning 'Hello, World!'.

The response object is how we send headers and content back to whoever made the request. Here, we return a 200 status code (indicating a successful response) along with the body 'Hello, World!'. Other headers, such as Content-Type, would also be set here.

The http.createServer function then builds a server that calls requestListener whenever a request is received. The following call to the listen method tells the server to listen for incoming requests on a particular port, in this example 8000.

That's it: your most basic Node.js HTTP server.

Before continuing, let's quit our running server by pressing CTRL+C. This stops the server and returns us to the command-line prompt.

The server responses of most websites and APIs we use are rarely plain text; HTML pages and JSON data are among the most popular response formats. The following articles will teach us how to return HTTP responses in the data formats most commonly encountered on the web.

Lien : https://medium.com/faun/how-to-build-a-node-js-web-server-using-the-http-module-de21d1759ac4

#node #nodejs #webserver #http

How to Build a Node.js Web Server Using the HTTP Module
Awesome Rust

1649213880

Delta: A Syntax-highlighter for Git & Diff Output Written in Rust

Get Started

Install delta and add this to your ~/.gitconfig:

[core]
    pager = delta

[interactive]
    diffFilter = delta --color-only

[delta]
    navigate = true  # use n and N to move between diff sections

[merge]
    conflictstyle = diff3

[diff]
    colorMoved = default

Delta has many features and is very customizable; please see the user manual.

Features

  • Language syntax highlighting with the same syntax-highlighting themes as bat
  • Word-level diff highlighting using a Levenshtein edit inference algorithm
  • Side-by-side view with line-wrapping
  • Line numbering
  • n and N keybindings to move between files in large diffs, and between diffs in log -p views (--navigate)
  • Improved merge conflict display
  • Improved git blame display (syntax highlighting; --hyperlinks formats commits as links to GitHub/GitLab/Bitbucket etc)
  • Syntax-highlights grep output from rg, git grep, grep, etc
  • Support for Git's --color-moved feature.
  • Code can be copied directly from the diff (-/+ markers are removed by default).
  • diff-highlight and diff-so-fancy emulation modes
  • Commit hashes can be formatted as terminal hyperlinks to the GitHub/GitLab/Bitbucket page (--hyperlinks). File paths can also be formatted as hyperlinks for opening in your OS.
  • Stylable box/line decorations to draw attention to commit, file and hunk header sections.
  • Style strings (foreground color, background color, font attributes) are supported for >20 stylable elements, using the same color/style language as git
  • Handles traditional unified diff output in addition to git output
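As one concrete example, the --hyperlinks feature listed above is configured via gitconfig; per the delta user manual the relevant options look roughly like this (option names and the format placeholder should be verified against your installed delta version):

```
[delta]
    hyperlinks = true
    # open file links in an editor; this format string is just an example
    hyperlinks-file-link-format = "vscode://file/{path}:{line}"
```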

A syntax-highlighting pager for git, diff, and grep output

Code evolves, and we all spend time studying diffs. Delta aims to make this both efficient and enjoyable: it allows you to make extensive changes to the layout and styling of diffs, as well as allowing you to stay arbitrarily close to the default git/diff output.

[Image: delta with line-numbers activated]
[Image: delta with side-by-side and line-numbers activated]

Here's what git show can look like with git configured to use delta:


 

[Images: the "Dracula" theme and the "GitHub" theme]

Syntax-highlighting themes

All the syntax-highlighting color themes that are available with bat are available with delta:


 

[Images: delta --show-syntax-themes --dark and delta --show-syntax-themes --light]

Side-by-side view

[User manual]

[delta]
    side-by-side = true

By default, side-by-side view has line-numbers activated, and has syntax highlighting in both the left and right panels: [config]


Side-by-side view wraps long lines automatically:


Line numbers

[User manual]

[delta]
    line-numbers = true

Merge conflicts

[User manual]


Git blame

[User manual]


Installation and usage

Please see the user manual and delta --help.

Link: https://crates.io/crates/git-delta

#rust  #webserver 

Awesome Rust

1649206500

Data Nymizer: Powerful Database Anonymizer with Flexible Rules

[Data]nymizer

Powerful database anonymizer with flexible rules. Written in Rust.

Datanymizer is created & supported by Evrone. See what else we develop with Rust.

You can find more information in articles in English and Russian.

How it works

Database -> Dumper (+Faker) -> Dump.sql

You can import or process your dump with your database's standard tools, without any 3rd-party importers.

Datanymizer generates database-native dump.

Installation

There are several ways to install pg_datanymizer; choose the option most convenient for you.

Pre-compiled binary

# Linux / macOS / Windows (MINGW etc.). Installs into ./bin/ by default
$ curl -sSfL https://raw.githubusercontent.com/datanymizer/datanymizer/main/cli/pg_datanymizer/install.sh | sh -s

# Or a shorter way
$ curl -sSfL https://git.io/pg_datanymizer | sh -s

# Specify installation directory and version
$ curl -sSfL https://git.io/pg_datanymizer | sudo sh -s -- -b /usr/local/bin v0.2.0

# Alpine Linux (wget)
$ wget -q -O - https://git.io/pg_datanymizer | sh -s

Homebrew / Linuxbrew

# Installs the latest stable release
$ brew install datanymizer/tap/pg_datanymizer

# Builds the latest version from the repository
$ brew install --HEAD datanymizer/tap/pg_datanymizer

Docker

$ docker run --rm -v `pwd`:/app -w /app datanymizer/pg_datanymizer

Getting started with CLI dumper

First, inspect your database schema, choose fields with sensitive data, and create a config file based on it.

# config.yml
tables:
  - name: markets
    rules:
      name_translations:
        template:
          format: '{"en": "{{_1}}", "ru": "{{_2}}"}'
          rules:
            - words:
                min: 1
                max: 2
            - words:
                min: 1
                max: 2
  - name: franchisees
    rules:
      operator_mail:
        template:
          format: user-{{_1}}-{{_2}}
          rules:
            - random_num: {}
            - email:
                kind: Safe
      operator_name:
        first_name: {}
      operator_phone:
        phone:
          format: +###########
      name_translations:
        template:
          format: '{"en": "{{_1}}", "ru": "{{_2}}"}'
          rules:
            - words:
                min: 2
                max: 3
            - words:
                min: 2
                max: 3
  - name: users
    rules:
      first_name:
        first_name: {}
      last_name:
        last_name: {}
  - name: customers
    rules:
      email:
        template:
          format: user-{{_1}}-{{_2}}
          rules:
            - random_num: {}
            - email:
                kind: Safe
                uniq:  
                  required: true
                  try_count: 5
      phone:
        phone:
          format: +7##########
          uniq: true
      city:
        city: {}
      age:
        random_num:
          min: 10
          max: 99
      first_name:
        first_name: {}
      last_name:
        last_name: {}
      birth_date:
        datetime:
          from: 1990-01-01T00:00:00+00:00
          to: 2010-12-31T00:00:00+00:00

Then start making a dump from your database instance:

pg_datanymizer -f /tmp/dump.sql -c ./config.yml postgres://postgres:postgres@localhost/test_database

This creates a new dump file /tmp/dump.sql with a native SQL dump for a PostgreSQL database. You can import the fake data from this dump into a new PostgreSQL database with the command:

psql -U postgres -d new_database < /tmp/dump.sql

The dumper can stream the dump to STDOUT like pg_dump, and you can use it in other pipelines:

pg_datanymizer -c ./config.yml postgres://postgres:postgres@localhost/test_database > /tmp/dump.sql

Additional options

Tables filter

You can specify which tables to include in or exclude from the dump.

To dump only the public.markets and public.users data:

# config.yml
#...
filter:
  only:
    - public.markets
    - public.users

To ignore those tables and dump the data from all others:

# config.yml
#...
filter:
  except:
    - public.markets
    - public.users

You can also specify data and schema filters separately.

This is equivalent to the previous example.

# config.yml
#...
filter:
  data:
    except:
      - public.markets
      - public.users

To skip both the schema and the data of all other tables:

# config.yml
#...
filter:
  schema:
    only:
      - public.markets
      - public.users

To skip the schema for the markets table and dump data only from the users table:

# config.yml
#...
filter:
  data:
    only:
      - public.users
  schema:
    except:
      - public.markets

You can use wildcards in the filter section:

  • ? matches exactly one occurrence of any character;
  • * matches arbitrary many (including zero) occurrences of any character.

Dump conditions and limit

You can specify conditions (SQL WHERE statement) and limit for dumped data per table:

# config.yml
tables:
  - name: people
    query:
      # don't dump some rows
      dump_condition: "last_name <> 'Sensitive'"
      # select maximum 100 rows
      limit: 100 

Transform conditions and limit

As an additional option, you can specify SQL conditions that define which rows will be transformed (anonymized):

# config.yml
tables:
  - name: people
    query:
      # don't dump some rows
      dump_condition: "last_name <> 'Sensitive'"
      # preserve original values for some rows
      transform_condition: "NOT (first_name = 'John' AND last_name = 'Doe')"      
      # select maximum 100 rows
      limit: 100

You can use the dump_condition, transform_condition and limit options in any combination (only transform_condition; transform_condition and limit; etc).

Global variables

You can specify global variables available from any template rule.

# config.yml
tables:
  - name: users
    rules:
      bio:
        template:
          format: "User bio is {{var_a}}"
      age:
        template:
          format: "{{_0 | float * global_multiplicator}}"
#...
globals:
  var_a: Global variable 1
  global_multiplicator: 6

Available rules

  • email: Emails with different options
  • ip: IP addresses; supports IPv4 and IPv6
  • words: Lorem words with different lengths
  • first_name: First name generator
  • last_name: Last name generator
  • city: City names generator
  • phone: Generates a random phone number with different formats
  • pipeline: Use a pipeline to generate more complicated values
  • capitalize: Like the filter, it capitalizes the input value
  • template: Template engine for generating random text with included rules
  • digit: Random digit (in the range 0..9)
  • random_num: Random number with min and max options
  • password: Password with different length options (supports max and min)
  • datetime: Makes DateTime strings with options (from and to)
  • ...and more than 70 rules in total

For the complete list of rules, please refer to this document.

Uniqueness

You can specify that result values must be unique (they are not unique by default). You can use short or full syntax.

Short:

uniq: true

Full:

uniq:
  required: true
  try_count: 5

Uniqueness is ensured by re-generating values when they are the same. You can customize the number of attempts with try_count (an optional field; the default number of tries depends on the rule).

Currently, uniqueness is supported by: email, ip, phone, random_num.

Locales

You can specify the locale for individual rules:

first_name:
  locale: RU

The default locale is EN but you can specify a different default locale:

tables:
  # ........  
default:
  locale: RU

We also support ZH_TW (Traditional Chinese) and RU (translation in progress).

Referencing row values from templates

You can reference the values of other row fields in templates. Use prev for original values and final for anonymized ones:

tables:
  - name: some_table
    # You must specify the order of rule execution when using `final`
    rule_order:
      - greeting
      - options
    rules:
      first_name:
        first_name: {}
      greeting:
        template:
          # Keeping the first name, but anonymizing the last name   
          format: "Hello, {{ prev.first_name }} {{ final.last_name }}!"
      options:
        template:
          # Using the anonymized value again   
          format: "{greeting: \"{{ final.greeting }}\"}"

You must specify the order of rule execution when using final with rule_order. All rules not listed will be placed at the beginning (i.e. you must list only rules with final).

Sharing information between rows

We implemented a built-in key-value store that allows information to be exchanged between anonymized rows.

It is available via the special functions in templates.

Take a look at an example:

tables:
  - name: users  
    rules:
      name:
        template:    
          # Save a name to the store as a side effect, the key is `user_names.<USER_ID>` 
          format: "{{ _1 }}{{ store_write(key='user_names.' ~ prev.id, value=_1) }}"
          rules:
            - person_name: {}
  - name: user_operations
    rules:
      user_name:          
        template:
          # Using the saved value again  
          format: "{{ store_read(key='user_names.' ~ prev.user_id) }}"

Supported databases

  •  Postgresql
  •  MySQL or MariaDB (TODO)

Documentation

Download Details:
Author: datanymizer
Source Code: https://github.com/datanymizer/datanymizer
License: MIT License

#rust  #webserver 

Awesome Rust

1649199180

Rusty Tags: Create Ctags & Etags for A Cargo Project

rusty-tags

A command line tool that creates tags (for source code navigation using ctags) for a cargo project, all of its direct and indirect dependencies, and the Rust standard library.

Prerequisites

  • ctags installed, needs a version with the --recurse flag

On a Linux system the package is most likely called exuberant-ctags.

Otherwise you can get the sources directly from here or use the newer and alternative universal-ctags.

Only universal-ctags will add tags for struct fields and enum variants.

Installation

$ cargo install rusty-tags

The built binary will be located at ~/.cargo/bin/rusty-tags.

Usage

Just calling rusty-tags vi or rusty-tags emacs anywhere inside of the cargo project should just work.

After it runs, a rusty-tags.vi / rusty-tags.emacs file should be located beside the Cargo.toml file.

Additionally, every dependency gets a tags file in its source directory, so jumping further into its dependencies is possible.

Rust Standard Library Support

Tags for the standard library are created if the rust source is supplied by defining the environment variable RUST_SRC_PATH.

These tags aren't automatically added to the tags of the cargo project and have to be added manually with the path $RUST_SRC_PATH/rusty-tags.vi or $RUST_SRC_PATH/rusty-tags.emacs.

If you're using rustup you can get the rust source of the currently used compiler version by calling:

$ rustup component add rust-src

Then set RUST_SRC_PATH in e.g. ~/.bashrc.

For rustc >= 1.47.0:

$ export RUST_SRC_PATH=$(rustc --print sysroot)/lib/rustlib/src/rust/library/

For rustc < 1.47.0:

$ export RUST_SRC_PATH=$(rustc --print sysroot)/lib/rustlib/src/rust/src/

Configuration

The current supported configuration at ~/.rusty-tags/config.toml (defaults displayed):

# the file name used for vi tags
vi_tags = "rusty-tags.vi"

# the file name used for emacs tags
emacs_tags = "rusty-tags.emacs"

# the name or path to the ctags executable, by default executables with names
# are searched in the following order: "ctags", "exuberant-ctags", "exctags", "universal-ctags", "uctags"
ctags_exe = ""

# options given to the ctags executable
ctags_options = ""

Vim Configuration

Put this into your ~/.vimrc file:

autocmd BufRead *.rs :setlocal tags=./rusty-tags.vi;/

Or if you've supplied the rust source code by defining RUST_SRC_PATH:

autocmd BufRead *.rs :setlocal tags=./rusty-tags.vi;/,$RUST_SRC_PATH/rusty-tags.vi

And:

autocmd BufWritePost *.rs :silent! exec "!rusty-tags vi --quiet --start-dir=" . expand('%:p:h') . "&" | redraw!

Emacs Configuration

Install counsel-etags.

Create file .dir-locals.el in rust project root (please note the line to set counsel-etags-extra-tags-files is optional):

((nil . ((counsel-etags-update-tags-backend . (lambda (src-dir) (shell-command "rusty-tags emacs")))
         (counsel-etags-extra-tags-files . ("~/third-party-lib/rusty-tags.emacs" "$RUST_SRC_PATH/rusty-tags.emacs"))
         (counsel-etags-tags-file-name . "rusty-tags.emacs"))))

Use M-x counsel-etags-find-tag-at-point for code navigation.

counsel-etags will automatically detect and update tags file in project root. So no extra setup is required.

Sublime Configuration

The plugin CTags uses vi style tags, so calling rusty-tags vi should work.

By default it expects tag files with the name .tags, which can be set in ~/.rusty-tags/config.toml:

vi_tags = ".tags"

Or by calling rusty-tags vi --output=".tags".

MacOS Issues

Mac OS users may encounter problems with the execution of ctags because the shipped version of this program does not support the recursive flag. See this posting for how to install a working version with homebrew.

Cygwin/Msys Issues

If you're running Cygwin or Msys under Windows, you might have to set the environment variable $CARGO_HOME explicitly. Otherwise you might get errors when the tags files are moved.

Download Details:
Author: dan-t
Source Code: https://github.com/dan-t/rusty-tags
License: BSD-3-Clause License

#rust  #webserver 

Awesome Rust

1649191800

Create Rust App: Set Up A Modern Rust+react Web App

Create Rust App

Set up a modern rust+react web app by running one command.

create-rust-app.dev

Requirements

  • tsync
    • cargo install tsync
  • yarn
    • npm i -g yarn
  • Stable rust
    • rustup install stable (nightly is fine too)

Install

cargo install create-rust-app_cli

Quick start

create-rust-app my-todo-app
# .. select backend framework, plugins, etc.
# Code-gen resources for your project
cd ./my-todo-app
create-rust-app
# .. select resource type / properties

Features

1. Project creation

$ create-rust-app <project_name>
  • Run frontend & backend with a single command: cargo fullstack
  • Rust backend
    • One of the following frameworks: actix-web, poem or let us know which one you want to use!
    • Database migrations (using diesel.rs)
    • Sending mail
    • PostgreSQL (but you can easily switch to another one!)
  • React frontend
    • Typescript, with backend type definition generation (via tsync)
    • Routing (via react-router-dom)
    • Typed react-query hooks generation ($ cd my_project && create-rust-app, then select "Generate react-query hooks")
    • Update to latest create-react-app (generated frontend is not ejected from create-react-app)

Available Plugins

  • Auth plugin
    • Add JWT token-based auth with a simple command
    • Session management: restoration of previous session, revoking of refresh tokens
    • Credentials management/recovery
    • Email validation / activation flow
    • Adds frontend UI + react hooks
    • Adds auth service, and user / session models
    • Block your endpoints via Auth guard
    • Follows OWASP security best practices
  • Container plugin
    • Dockerfile to containerize your rust app into a single image
  • Admin Portal plugin
    • View your database via the admin portal (editing functionality coming soon™)
    • A "devbox" on the frontend indicates when the backend is compiling or when the database is not reachable
    • Moreover, the devbox displays when migrations are pending + includes a "run migrations" button
  • Storage plugin
    • Adds Storage extractor which allows you to upload/download files from an S3-compatible object store
    • Seamlessly add single or multiple attachments to your models using Attachment::*!
    • Here are some examples:
      • Adding an avatar to a user in your users table:
let s3_key = Attachment::attach("avatar", "users", user_id, AttachmentData {
    file_name: "image.png",
    data: bytes
})?;
  • Getting the URL for the attachment
let storage: Storage // retrieve this via the appropriate extractor in your framework of choice
let url = storage.download_uri(s3_key)?;

(note: see Attachment::* and Storage::* for more functionality!)

2. Code-gen to reduce boilerplate

$ cd my_project && create-rust-app
  • CRUD code-gen to reduce boilerplate
    • Scaffolds the db model, endpoints service file, and hooks it up in your /api!
  • react-query hooks generation for frontend
    • Generates a hook for each handler function defined in the services/ folder
    • Edit generated hooks afterwards -- they won't be regenerated unless you delete (or rename) the hook!

Walkthrough

Gif

Contributing

If you're experiencing slow compilation time, make sure there isn't any bloat in the template files (look for node_modules or typescript / parcel caches and delete them).

Download Details:
Author: Wulf
Source Code: https://github.com/Wulf/create-rust-app
License: View license

#rust  #webserver 

Awesome Rust

1649184420

Comtrya: Configuration Management for Localhost & Dotfiles

Comtrya

Want to learn how to use Comtrya? Check the docs.

About

Comtrya is a tool to help provision a fresh OS with the packages and configuration (dotfiles) you need to become productive again.

I'm a serial OS installer: I wipe the OS on my machines approximately every 30 days. I've primarily relied on SaltStack to automate this, but I've grown frustrated with the mismatch between configuration management and personal provisioning.

I've also tried Ansible, Chef, Puppet, mgmt, and probably anything else you're about to suggest; they all have a flaw that makes it too cumbersome to adopt for the trivial use-case.

Installing

You'll find binaries over on the releases page.

If you're not feeling risk-averse, you can use this one-liner:

curl -fsSL https://get.comtrya.dev | sh

If this doesn't work for your OS and architecture, please open an issue and we'll do our best to support it.

Usage

# Run all manifests within a directory
comtrya <directory with manifests>

# --manifests, or -m, will run a subset of your manifests
comtrya . -m one,two,three

# Show command usage
comtrya --help

# Prints version information
comtrya --version
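The manifests referenced above are YAML files. A minimal hypothetical example is sketched below; the action names follow the Comtrya docs, but both they and the `user.home_dir` variable should be verified against your installed version:

```yaml
# git.yaml: install git and provision a dotfile (illustrative only)
actions:
  - action: package.install
    name: git

  - action: file.copy
    from: gitconfig
    to: "{{ user.home_dir }}/.gitconfig"
```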

What's Next?

You should take a look at the issues page to see what's available to contribute. Below is a short list of the major features that are upcoming.

Better Output

Providing a --quiet or --summary option that restricts the output to a summary of the run:

Comtrya finished in 12.3s

Installed Packages: 12
Provisioned Files: 34

Async DAG

We're using petgraph to build out the graph, but we're not traversing it in a way that will allow us to concurrently execute manifests at the same depth. This is something I wish to sort out pretty soon.

Config

TODO: Allow manifest directory and variables to be configured in a Comtrya.yaml file. This will allow for comtrya with no arguments to function, as in the initial versions.

Package Provider Enhancements

Currently, we execute arbitrary packager install commands. The provider spec should be enriched to support:

  • List refresh
  • Upgrades
  • Version pinning

Integration tests

We are a bit light on tests at the moment, but we have started introducing some helpful plumbing in tests.

Download Details:
Author: comtrya
Source Code: https://github.com/comtrya/comtrya
License: MIT License

#rust  #webserver 

Awesome Rust

1649177061

Clog Cli: Generate Beautiful Changelogs From Your Git Commit History

clog-cli

A conventional changelog for the rest of us

About

clog creates a changelog automatically from your local git metadata. See clog's changelog.md for an example.

The way this works is: every time you make a commit, you ensure your commit subject line follows the conventional format. Then, when you wish to update your changelog, you simply run clog inside your local repository with any options you'd like to specify.

NOTE: clog also supports empty components by making commit messages such as alias: message or alias(): message (i.e. without the component)
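For reference, conventional-format subject lines look like the following (illustrative examples; the component names are made up):

```
feat(parser): add support for a new output format
fix(cli): handle a missing .clog.toml gracefully
chore: update dependencies
```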

Usage

There are two ways to use clog, as a binary via the command line or as a library in your applications via clog-lib.

Binary (Command Line)

In order to use clog via the command line you must first obtain a binary by either compiling it yourself, or downloading and installing one of the precompiled binaries.

cargo install

If you want to both compile and install clog using cargo you can simply run

cargo install clog-cli

Compiling

Follow these instructions to compile clog, then skip down to Installation.

  1. Ensure you have a current version of cargo and Rust installed
  2. Clone the project $ git clone https://github.com/clog-tool/clog-cli && cd clog-cli
  3. Build the project $ cargo build --release
  4. Once complete, the binary will be located at target/release/clog

Using a Precompiled Binary

Currently there are no precompiled binaries available.

Note: The Mac distribution is available on npm via clog-cli.

Installation

Once you have downloaded, or compiled, clog you simply need to place the binary somewhere in your $PATH. If you are not familiar with $PATH, read on; otherwise skip down to Using clog.

Arch Linux

You can use clog-bin from the AUR, or follow the instructions for Linux / OS X

Linux / OS X

You have two options: place clog into a directory that is already in your $PATH variable (to see which directories those are, open a terminal and type echo "${PATH//:/\n}"; the quotation marks are important), or add a custom directory to your $PATH.

Option 1 If you have write permission to a directory listed in your $PATH, or you have root permission (or sudo), simply copy clog to that directory: # sudo cp clog /usr/local/bin

Option 2 If you do not have root, sudo, or write permission to any directory already in $PATH you can create a directory inside your home directory, and add that. Many people use $HOME/.bin to keep it hidden (and not clutter your home directory), or $HOME/bin if you want it to be always visible. Here is an example to make the directory, add it to $PATH, and copy clog there.

Simply change bin to whatever you'd like to name the directory, and .bashrc to whatever your shell startup file is (usually .bashrc, .bash_profile, or .zshrc)

$ mkdir ~/bin
$ echo "export PATH=$PATH:$HOME/bin" >> ~/.bashrc
$ cp clog ~/bin
$ source ~/.bashrc

Windows

On Windows 7/8 you can add a directory to the PATH variable by opening a command line as an administrator and running

C:\> setx path "%path%;C:\path\to\clog\binary"

Otherwise, ensure you have the clog binary in the directory you are operating the command line from, because Windows automatically adds your current directory to PATH (i.e. if you open a command line in C:\my_project\ to use clog, ensure clog.exe is inside that directory as well).

Using clog from the Command Line

clog works by reading your git metadata and specially crafted commit messages and subjects to create a changelog. clog has the following options available.

USAGE:
    clog [FLAGS] [OPTIONS]

FLAGS:
    -F, --from-latest-tag    use latest tag as start (instead of --from)
    -h, --help               Prints help information
    -M, --major              Increment major version by one (Sets minor and patch to 0)
    -m, --minor              Increment minor version by one (Sets patch to 0)
    -p, --patch              Increment patch version by one
    -V, --version            Prints version information

OPTIONS:
    -C, --changelog <changelog>    A previous changelog to prepend new changes to (this is like
                                   using the same file for both --infile and --outfile and
                                   should not be used in conjunction with either)
    -c, --config <config>          The Clog Configuration TOML file to use (Defaults to
                                   '.clog.toml')**
    -T, --format <format>          The output format, defaults to markdown
                                   (valid values: markdown, json)
    -f, --from <from>              e.g. 12a8546
    -g, --git-dir <gitdir>         Local .git directory (defaults to current dir + '.git')*
    -i, --infile <infile>          A changelog to append to, but *NOT* write to (Useful in
                                   conjunction with --outfile)
    -o, --outfile <outfile>        Where to write the changelog (Defaults to stdout when omitted)
    -r, --repository <repo>        Repository used for generating commit and issue links
                                   (without the .git, e.g. https://github.com/clog-tool/clog-cli)
    -l, --link-style <style>       The style of repository link to generate
                                   (Defaults to github) [values: Github Gitlab Stash]
    -s, --subtitle <subtitle>      e.g. "Crazy Release Title"
    -t, --to <to>                  e.g. 8057684 (Defaults to HEAD when omitted)
        --setversion <ver>         e.g. 1.0.1
    -w, --work-tree <workdir>      Local working tree of the git project
                                   (defaults to current dir)*

* If your .git directory is a child of your project directory (most common, such as
/myproject/.git) AND not in the current working directory (i.e you need to use --work-tree or
--git-dir) you only need to specify either the --work-tree (i.e. /myproject) OR --git-dir (i.e.
/myproject/.git), you don't need to use both.

** If using the --config to specify a clog configuration TOML file NOT in the current working
directory (meaning you need to use --work-tree or --git-dir) AND the TOML file is inside your
project directory (i.e. /myproject/.clog.toml) you do not need to use --work-tree or --git-dir.

Try it!

In order to see it in action, you'll need a repository that already has some of those specially crafted commit messages in its history. For this, we'll use the clog repository itself.

Clone the repo git clone https://github.com/clog-tool/clog-cli && cd clog-cli

Ensure you already have a clog binary from one of the steps above.

There are many, many ways to run clog. Note that in these examples we will be typing the same options over and over again; in cases like that, we could use a clog TOML configuration file to specify the options that don't normally change. Also note that all these CLI options have short versions as well; we're using the long versions because they're easier to understand.

Let's start by picking up only new commits since our last release (this may not be a lot...or none)

Run clog -r https://github.com/clog-tool/clog-cli --outfile only_new.md

By default, clog outputs to stdout unless you have a file set inside a TOML configuration file. (Note, we could have used the shell > operator instead of --outfile)

Any options you set via the CLI will override those set in the configuration file.

Let's now tell clog where it can find our old changelog, and prepend any new commits to that old data

Run clog -r https://github.com/clog-tool/clog-cli --infile changelog.md --outfile new_combined.md

Finally, let's assume that, like most projects, we just want to use one file and prepend all new data to our old changelog (the most useful mode)

First, make a backup of changelog.md so you can compare it later: cp changelog.md changelog.md.bak

Run clog -r https://github.com/clog-tool/clog-cli --changelog changelog.md

Try viewing any of the only_new.md, new_combined.md, changelog.md.bak, or changelog.md in your favorite markdown viewer to compare them.

As a Library

See the documentation or clog-lib for information on using clog in your applications. You can also see the clog crates.io page.

Default Options

clog can also be configured using a default configuration file so that you don't have to specify all the options each time you want to update your changelog. To do this add a .clog.toml file to your repository.

[clog]
# A repository link with the trailing '.git' which will be used to generate
# all commit and issue links
repository = "https://github.com/clog-tool/clog-cli"
# A constant release title
subtitle = "my awesome title"

# specify the style of commit links to generate, defaults to "github" if omitted
link-style = "github"

# The preferred way to set a constant changelog. This file will be read for old changelog
# data, then prepended to for new changelog data. It's the equivalent of setting
# both infile and outfile to the same file.
#
# Do not use with outfile or infile fields!
#
# Defaults to stdout when omitted
changelog = "mychangelog.md"

# This sets an output file only! If it exists already, new changelog data will be
# prepended, if not it will be created.
#
# This is useful in conjunction with the infile field if you have a separate file
# that you would like to append after newly created clog data
#
# Defaults to stdout when omitted
outfile = "MyChangelog.md"

# This sets the input file (your old changelog)! Any data inside this file will be appended to any
# new data that clog picks up
#
# This is useful in conjunction with the outfile field where you may wish to read
# from one file and append that data to the clog output in another
infile = "My_old_changelog.md"

# This sets the output format. There are two options "json" or "markdown" and
# defaults to "markdown" when omitted
output-format = "json"

# If you use tags, you can set the following if you wish to only pick
# up changes since your latest tag
from-latest-tag = true

Now you can update your MyChangelog.md with clog --patch (assuming you want to update from the latest tag version, and increment your patch version by 1).

Note: Any options you specify at the command line will override options set in your .clog.toml
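With the .clog.toml above in place, a release update is a one-liner. --patch and --setversion come straight from the option list earlier in this document; the version arithmetic is plain SemVer:

```shell
# Bump the patch component of the latest tag (e.g. 1.0.0 -> 1.0.1) and
# prepend the new entries to MyChangelog.md, per the config above.
clog --patch

# Or pin an explicit version instead of bumping:
clog --setversion 1.0.1
```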

Custom Sections

By default, clog will display three sections in your changelog: Features, Performance, and Bug Fixes. You can add additional sections by using a .clog.toml file. To add more sections, simply add a [sections] table, along with the section name and the aliases you'd like to use in your commit messages:

[sections]
MySection = ["mysec", "ms"]

Now if you make a commit message such as mysec(Component): some message or ms(Component): some message, there will be a new "MySection" section alongside the "Features" and "Bug Fixes" areas.

NOTE: Sections with spaces are supported, such as "My Special Section" = ["ms", "mysec"]
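As a sketch, with the [sections] table above in place, commits like these (the component names and messages are made-up examples) would be grouped under "MySection" on the next clog run:

```shell
# Long alias from the [sections] entry...
git commit -m "mysec(parser): handle nested tables"
# ...and the short alias, which works identically.
git commit -m "ms(cli): document the new flag"
```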

Companion Projects

  • Commitizen - A command line tool that helps you write better commit messages.

Download Details:
Author: clog-tool
Source Code: https://github.com/clog-tool/clog-cli
License: MIT License

#rust  #webserver 

Clog Cli: Generate Beautiful Changelogs From Your Git Commit History
Awesome Rust

1649169660

See: Simple and Fast Web Server Written in Rust

see

Overview

Simple and fast web server as a single executable with no extra dependencies required.

Features

  • Built with Tokio and Hyper
  • TLS encryption through Rustls
  • HTTP/1 and HTTP/2 support
  • Content compression auto, gzip, deflate or br
  • Rewrite rules for redirection
  • Allow/deny addresses allowing wildcards
  • Location with regex matching
  • Reverse proxy
  • Basic authentication
  • Error handling
  • Customized logs
  • And more

Usage

Quick start in current directory:

see start

or specify the port and directory via parameters:

see start -b 80 -p /root/www

Also, you can use see -c [FILE] to specify a configuration file, or just use the default one at ~/.see.conf. Below is a simple configuration example that starts an HTTP server and an HTTPS server:

server {
    listen 80
    root /root/www
}

server {
    listen 443
    root /root/www
    host example.com
    https {
        key ./ssl.key
        cert ./ssl.pem
    }
}

Documentation

The documentation is available at docs/. Take a look at it for more information about the available configuration options.

Installation

Download the compiled executable corresponding to your system from the release page.

Cargo

cargo install see
# or
cargo install --git https://github.com/wyhaya/see

Docker

docker pull wyhaya/see

Container

Add the following to see.conf:

server {
    listen 80
    echo Hello, world!
}

and run the container:

docker run -idt --name see -p 80:80 -p 443:443 -v "$PWD"/see:/ wyhaya/see

Lastly, open http://localhost and you should see Hello, world!.

ToDo

  •  Fix docker container (ubuntu, ca-certificates)
  •  Fix the bug of matching https and http on the same port
  •  Support global configuration
  •  Support certificate with password
  •  Daemon for Unix systems and service for Windows

Download Details:
Author: wyhaya
Source Code: https://github.com/wyhaya/see
License: MIT License


Http Server: Simple Http Server in Rust (Windows/Mac/Linux)

What it looks like

Command Line Arguments

Simple HTTP(s) Server 0.6.1

USAGE:
    simple-http-server [FLAGS] [OPTIONS] [--] [root]

FLAGS:
        --cors       Enable CORS via the "Access-Control-Allow-Origin" header
    -h, --help       Prints help information
    -i, --index      Enable automatic render index page [index.html, index.htm]
        --nocache    Disable http cache
        --norange    Disable header::Range support (partial request)
        --nosort     Disable directory entries sort (by: name, modified, size)
    -s, --silent     Disable all outputs
    -u, --upload     Enable upload files (multiple select) (CSRF token required)
    -V, --version    Prints version information

OPTIONS:
    -a, --auth <auth>                              HTTP Basic Auth (username:password)
        --cert <cert>                              TLS/SSL certificate (pkcs#12 format)
        --certpass <certpass>                      TLS/SSL certificate password
    -c, --compress <compress>...
            Enable file compression: gzip/deflate
                Example: -c=js,d.ts
                Note: disabled on partial request!
        --ip <ip>                                  IP address to bind [default: 0.0.0.0]
    -p, --port <port>                              Port number [default: 8000]
        --redirect <redirect>                      takes a URL to redirect to using HTTP 301 Moved Permanently
    -t, --threads <threads>                        How many worker threads [default: 3]
        --try-file <PATH>
            serve this file (server root relative) in place of missing files (useful for single page apps) [aliases:
            try-file-404]
    -l, --upload-size-limit <upload_size_limit>    Upload file size limit [bytes] [default: 8000000]

Screenshot

Installation

Download binary

Go to the Download page

  • windows-64bit
  • osx-64bit
  • linux-64bit

Install by cargo

# Install Rust
curl https://sh.rustup.rs -sSf | sh

# Install simple-http-server
cargo install simple-http-server
rehash
simple-http-server -h

Features

  •  Windows support (with colored log)
  •  Specify listen address (ip, port)
  •  Specify running threads
  •  Specify root directory
  •  Pretty log
  •  Nginx-like directory view (directory entries, link, filesize, modified date)
  •  Breadcrumb navigation
  •  (default enabled) Guess mime type
  •  (default enabled) HTTP cache control
    • Sending Last-Modified / ETag
    • Replying 304 to If-Modified-Since
  •  (default enabled) Partial request
    • Accept-Ranges: bytes([ByteRangeSpec; length=1])
    • [Range, If-Range, If-Match] => [Content-Range, 206, 416]
  •  (default disabled) Automatic render index page [index.html, index.htm]
  •  (default disabled) Upload file
    • A CSRF token is generated when upload is enabled and must be sent as a parameter when uploading a file
  •  (default disabled) HTTP Basic Authentication (by username:password)
  •  Sort by: filename, filesize, modified
  •  HTTPS support
  •  Content-Encoding: gzip/deflate
  •  Added CORS headers support
  •  Silent mode

Download Details:
Author: TheWaWaR
Source Code: https://github.com/TheWaWaR/simple-http-server
License: MIT License


Http: A Basic HTTP Server for Hosting A Folder Fast and Simply in Rust

http

Host These Things Please - a basic HTTP server for hosting a folder fast and simply

Selected features

See the manpage for full list.

  •  Symlinks followed by default (disableable via -s option)
  •  Index generation for directories
  •  Sane defaults (like hosted dir (.) and port (first free one from range 8000-9999))
  •  Correct MIME type for served files
  •  Handled request methods: OPTIONS, GET, PUT, DELETE, HEAD and TRACE ("writing" methods are off by default, enable via -w switch)
  •  Proper handling of percent-encoded URLs (like асдф fdsa)
  •  Good symlink handling compatible with Windows
  •  Multitude of information in directory indices
  •  Serving index files like index.{html,htm,shtml} from directories (disableable via -i switch)
  •  Drag&Drop to upload files (with -w specified)
  •  Smart encoding of generated and filesystem-originating responses (disableable via -e switch)
  •  Full Range header support
  •  Hosting with an (optional) optionally autogenerated TLS certificate
  •  Arbitrarily nested username/password authentication
  •  Per-request bandwidth cap
  •  Per-extension-overridable MIME-types with reasonable guesses
  •  WebDAV/RFC2518 support, tested with the Linux davfs2 helper, Windows network filesystem support (out-of-box), and the Total Commander WebDAV plugin
  •  RFSAPI support (format spec) (explorable from commandline with D'Oh)
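The write-enabled methods can be exercised straight from curl; a sketch assuming http is serving the current directory on port 8000 (the first port in its default range) and that notes.txt is a file made up for this example:

```shell
# Serve the current directory with writing methods (PUT, DELETE, ...) enabled.
http -w &

# curl -T issues a PUT, creating (or replacing) the file on the server...
curl -T notes.txt http://localhost:8000/notes.txt
# ...and DELETE removes it again.
curl -X DELETE http://localhost:8000/notes.txt
```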

Manpage

Installation

From Cargo

If you have cargo installed (i.e. you're a Rust developer), all you need to do is:

cargo install https

This will install http and httplz (identical; disable one or the other if they clash) in the folder where all other cargo binaries go.

From AUR

As provided by @cyqsimon: https://aur.archlinux.org/packages/httplz

From an installer

If, however, you're not a Rust developer but you have an sh-like shell, you can use an installer (works on Windows and Linux):

curl -SsL https://cdn.rawgit.com/thecoshman/http/master/install.sh | sh
# or, if you like taking precautions
sh -c "$(curl -SsL https://cdn.rawgit.com/thecoshman/http/master/install.sh)"

You can change the installation directory by setting the PREFIX environment variable (default - /usr/bin):

PREFIX=$HOME/bin curl -SsL https://cdn.rawgit.com/thecoshman/http/master/install.sh | sh
# Windows:
set PREFIX=D:\Akces
curl -SsL https://cdn.rawgit.com/thecoshman/http/master/install.sh | sh

If you're on a Debian-based amd64 machine, you can also grab a .deb package from the latest release page.

If you're on Windows and prefer a more guided installation (or you don't have a shell), you can download the Windows installer from the latest release's page. (Note: you can add /D INSTALLDIR to installer command line to change the installation directory.)

Aims

The idea is to make a program that can compile down to a simple binary that can be used via Linux CLI to quickly take the current directory and serve it over HTTP. Everything should have sensible defaults such that you do not have to pass parameters like what port to use.

  •  Sub directories would be automatically hosted.
  •  Symlinks will not be followed by default (in my opinion, this is more likely to be a problem than an intended thing).
  •  Root should not be required.
  •  If an index file isn't provided, one will be generated (in memory, no touching the disk, why would you do that you dirty freak you), that will list the current files and folders (and then sub directories will have index files generated as required)
  •  Changes made to files should be reflected instantly, as I don't see why anything would be cached... you request a file, a file will be looked for

It's not going to be a 'production ready' tool, it's a quick and dirty way of hosting a folder, so whilst I'll try to make it secure, it is not going to be a serious goal.

Download Details:
Author: thecoshman
Source Code: https://github.com/thecoshman/http
License: MIT License


Miniserve: A CLI Tool To Serve Files and Dirs Over HTTP in Rust

For when you really just want to serve some files over HTTP right now!

miniserve is a small, self-contained cross-platform CLI tool that allows you to just grab the binary and serve some file(s) via HTTP. Sometimes this is just a more practical and quick way than doing things properly.

Screenshot

Screenshot

How to use

Serve a directory:

miniserve linux-distro-collection/

Serve a single file:

miniserve linux-distro.iso

Set a custom index file to serve instead of a file listing:

miniserve --index test.html

Serve an SPA (Single Page Application) so that non-existent paths are forwarded to the SPA's router instead

miniserve --spa --index index.html

Require username/password:

miniserve --auth joe:123 unreleased-linux-distros/

Require username/password as hash:

pw=$(echo -n "123" | sha256sum | cut -f 1 -d ' ')
miniserve --auth joe:sha256:$pw unreleased-linux-distros/

Generate random 6-hexdigit URL:

miniserve -i 192.168.0.1 --random-route /tmp
# Serving path /private/tmp at http://192.168.0.1/c789b6

Bind to multiple interfaces:

miniserve -i 192.168.0.1 -i 10.13.37.10 -i ::1 /tmp/myshare

Start with TLS:

miniserve --tls-cert my.cert --tls-key my.key /tmp/myshare

Upload a file using curl:

# in one terminal
miniserve -u .
# in another terminal
curl -F "path=@$FILE" http://localhost:8080/upload\?path\=/

(where $FILE is the path to the file. This uses miniserve's default port of 8080)

Take pictures and upload them from smartphones:

miniserve -u -m image -q

This uses the --media-type option, which sends a hint for the expected media type to the browser. Some mobile browsers like Firefox on Android will offer to open the camera app when seeing this.

Features

  • Easy to use
  • Just works: Correct MIME types handling out of the box
  • Single binary drop-in with no extra dependencies required
  • Authentication support with username and password (and hashed password)
  • Mega fast and highly parallel (thanks to Rust and Actix)
  • Folder download (compressed on the fly as .tar.gz or .zip)
  • File uploading
  • Pretty themes (with light and dark theme support)
  • Scan QR code for quick access
  • Shell completions
  • Sane and secure defaults
  • TLS (for supported architectures)

Usage

miniserve 0.19.4

Sven-Hendrik Haase <svenstaro@gmail.com>, Boastful Squirrel <boastful.squirrel@gmail.com>

For when you really just want to serve some files over HTTP right now!

USAGE:
    miniserve [OPTIONS] [--] [PATH]

ARGS:
    <PATH>
            Which path to serve

OPTIONS:
    -a, --auth <AUTH>
            Set authentication. Currently supported formats: username:password, username:sha256:hash,
            username:sha512:hash (e.g. joe:123,
            joe:sha256:a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3)

    -c, --color-scheme <COLOR_SCHEME>
            Default color scheme

            [default: squirrel]
            [possible values: squirrel, archlinux, zenburn, monokai]

    -d, --color-scheme-dark <COLOR_SCHEME_DARK>
            Default color scheme

            [default: archlinux]
            [possible values: squirrel, archlinux, zenburn, monokai]

    -D, --dirs-first
            List directories first

    -F, --hide-version-footer
            Hide version footer

    -g, --enable-tar-gz
            Enable gz-compressed tar archive generation

    -h, --help
            Print help information

    -H, --hidden
            Show hidden files

        --header <HEADER>
            Set custom header for responses

    -i, --interfaces <INTERFACES>
            Interface to listen on

        --index <index_file>
            The name of a directory index file to serve, like "index.html"

            Normally, when miniserve serves a directory, it creates a listing for that directory.
            However, if a directory contains this file, miniserve will serve that file instead.

    -l, --show-symlink-info
            Show symlink info

    -m, --media-type <MEDIA_TYPE>
            Specify uploadable media types

            [possible values: image, audio, video]

    -M, --raw-media-type <MEDIA_TYPE_RAW>
            Directly specify the uploadable media type expression

    -o, --overwrite-files
            Enable overriding existing files during file upload

    -p, --port <PORT>
            Port to use

            [default: 8080]

    -P, --no-symlinks
            Do not follow symbolic links

        --print-completions <shell>
            Generate completion file for a shell

            [possible values: bash, elvish, fish, powershell, zsh]

        --print-manpage
            Generate man page

    -q, --qrcode
            Enable QR code display

    -r, --enable-tar
            Enable uncompressed tar archive generation

        --random-route
            Generate a random 6-hexdigit route

        --route-prefix <ROUTE_PREFIX>
            Use a specific route prefix

        --spa
            Activate SPA (Single Page Application) mode

            This will cause the file given by --index to be served for all non-existing file paths. In
            effect, this will serve the index file whenever a 404 would otherwise occur in order to
            allow the SPA router to handle the request instead.

    -t, --title <TITLE>
            Shown instead of host in page title and heading

        --tls-cert <TLS_CERT>
            TLS certificate to use

        --tls-key <TLS_KEY>
            TLS private key to use

    -u, --upload-files
            Enable file uploading

    -v, --verbose
            Be verbose, includes emitting access logs

    -V, --version
            Print version information

    -W, --show-wget-footer
            If enabled, display a wget command to recursively download the current directory

    -z, --enable-zip
            Enable zip archive generation

            WARNING: Zipping large directories can result in out-of-memory exception because zip
            generation is done in memory and cannot be sent on the fly

How to install

Packaging status

On Linux: Download miniserve-linux from the releases page and run

chmod +x miniserve-linux
./miniserve-linux

Alternatively, if you are on Arch Linux, you can do

pacman -S miniserve

On Termux

pkg install miniserve

On OSX: Download miniserve-osx from the releases page and run

chmod +x miniserve-osx
./miniserve-osx

Alternatively install with Homebrew:

brew install miniserve
miniserve

On Windows: Download miniserve-win.exe from the releases page and run

miniserve-win.exe

Alternatively install with Scoop:

scoop install miniserve

With Cargo: Make sure you have a recent version of Rust. Then you can run

cargo install --locked miniserve
miniserve

With Docker: Make sure the Docker daemon is running and then run

docker run -v /tmp:/tmp -p 8080:8080 --rm -it docker.io/svenstaro/miniserve /tmp

With Podman: Just run

podman run -v /tmp:/tmp -p 8080:8080 --rm -it docker.io/svenstaro/miniserve /tmp

Shell completions

If you'd like to make use of the built-in shell completion support, you need to run miniserve --print-completions <your-shell> and put the completions in the correct place for your shell. A few examples with common paths are provided below:

# For bash
miniserve --print-completions bash > ~/.local/share/bash-completion/completions/miniserve
# For zsh
miniserve --print-completions zsh > /usr/local/share/zsh/site-functions/_miniserve
# For fish
miniserve --print-completions fish > ~/.config/fish/completions/miniserve.fish

systemd

A hardened systemd-compatible unit file can be found in packaging/miniserve@.service. You could install this to /etc/systemd/system/miniserve@.service and start and enable miniserve as a daemon on a specific serve path /my/serve/path like this:

systemctl enable --now miniserve@-my-serve-path

Keep in mind that you'll have to use systemd-escape to properly escape a path for this usage.
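For example, the instance name used above is exactly what systemd-escape produces for the serve path (slashes become dashes, including the leading one):

```shell
# Turn a filesystem path into a valid unit instance name.
systemd-escape /my/serve/path
# prints: -my-serve-path

# ...which plugs into the template unit as:
systemctl enable --now miniserve@-my-serve-path
```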

In case you want to customize the particular flags that miniserve launches with, you can use

systemctl edit miniserve@-my-serve-path

and set the [Service] part in the resulting override.conf file. For instance:

[Service]
ExecStart=/usr/bin/miniserve --enable-tar --enable-zip --no-symlinks --verbose -i ::1 -p 1234 --title hello --color-scheme monokai --color-scheme-dark monokai -- %I

Make sure to leave the %I at the very end in place or the wrong path might be served. You might additionally have to override IPAddressAllow and IPAddressDeny if you plan on making miniserve directly available on a public interface.

Binding behavior

For convenience reasons, miniserve will try to bind on all interfaces by default (if no -i is provided). It will also do that if explicitly provided with -i 0.0.0.0 or -i ::. In all of the aforementioned cases, it will bind on both IPv4 and IPv6. If provided with an explicit non-default interface, it will ONLY bind to that interface. You can provide -i multiple times to bind to multiple interfaces at the same time.
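In shell terms (paths reused from the examples earlier in this article):

```shell
# These three are equivalent: bind every interface, IPv4 and IPv6.
miniserve /tmp/myshare
miniserve -i 0.0.0.0 /tmp/myshare
miniserve -i :: /tmp/myshare

# An explicit non-default interface binds ONLY there; repeat -i to add more.
miniserve -i 192.168.0.1 -i ::1 /tmp/myshare
```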

Why use this over alternatives?

  • darkhttpd: Not easily available on Windows and it's not as easy as download-and-go.
  • Python built-in webserver: Need to have Python installed, it's low performance, and also doesn't do correct MIME type handling in some cases.
  • netcat: Not as convenient to use and sending directories is somewhat involved.

Releasing

This is mostly a note for me on how to release this thing:

  • Make sure CHANGELOG.md is up to date.
  • cargo release <version>
  • cargo release --execute <version>
  • Releases will automatically be deployed by Github Actions.
  • Docker images will automatically be built by Docker Hub.
  • Update Arch package.

Download Details:
Author: svenstaro
Source Code: https://github.com/svenstaro/miniserve
License: MIT License
