Back Up Ubuntu System Using grsync

As a Linux user, you have probably heard of the rsync command-line utility, which is widely used to create file backups on a Linux system. However, there is also a GUI-based tool built on top of rsync, called grsync. It's an open-source, lightweight, rsync-based utility that allows users to create system backups from a Linux desktop. It was created to simplify the entire backup workflow, letting you back up files and directories without using complex commands.

In this guide, we will demonstrate the procedure of backing up the system using grsync.

How to Back Up Ubuntu System Using grsync

grsync is a cross-platform utility that is already available in the Ubuntu repositories. You can install it by running the following command:

sudo apt-get install grsync

When the installation process is finished, you can run the tool with the command below, and the default interface will appear:

grsync

You can also launch it from the GUI by searching it in the application menu:

There are three different sets of options available: Basic, Advanced, and Extra options. All of them are self-explanatory; when you hover the cursor over an option, a description of it appears on your screen.

To back up the system, enter the source and destination paths under the Basic options. Here, I am backing up the Downloads directory into the Videos directory.

Note: You can connect a hard drive or USB to your Linux system and choose it as a destination.

Under Advanced Options, choose the different available options for your backup.

Note: Which options you pick is entirely up to you. You can also leave them at their defaults.

You can start the backup process by clicking the Play button from the top menu:

Once the process is finished, the “Completed successfully” message will appear on your screen:

At this point, the backup has been successfully performed on the Ubuntu system. You can view the backup files in the destination directory by running the ls command:
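For example, if the Videos directory was chosen as the destination, as in this guide (the exact path depends on what you selected):

ls ~/Videos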

How to Remove grsync From Ubuntu

If you want to remove grsync from your Ubuntu system, you can execute the following command in your terminal:

sudo apt-get remove grsync

Conclusion

grsync is a front-end for rsync that is used to back up personal files and directories. It is a powerful backup application for Ubuntu systems that can be installed directly with the apt command. It supports most of the important features of rsync and provides a user-friendly interface for your Linux desktop. You just need to select the source and destination directories and then hit the Play button to start the backup.

Original article source at: https://linuxhint.com/

#ubuntu #system #using 


A New Markup-based Typesetting System That Is Powerful & Easy to Learn

Typst is a new markup-based typesetting system that is designed to be as powerful as LaTeX while being much easier to learn and use. Typst has:

  • Built-in markup for the most common formatting tasks
  • Flexible functions for everything else
  • A tightly integrated scripting system
  • Math typesetting, bibliography management, and more
  • Fast compile times thanks to incremental compilation
  • Friendly error messages in case something goes wrong

This repository contains the Typst compiler and its CLI, which is everything you need to compile Typst documents locally. For the best writing experience, consider signing up to our collaborative online editor for free. It is currently in public beta.

Example

A gentle introduction to Typst is available in our documentation. However, if you want to see the power of Typst encapsulated in one image, here it is:

(Image: the example document's Typst source next to its rendered output.)

Let's dissect what's going on:

We use set rules to configure element properties like the size of pages or the numbering of headings. By setting the page height to auto, it scales to fit the content. Set rules accommodate the most common configurations. If you need full control, you can also use show rules to completely redefine the appearance of an element.
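As a small illustration of the two kinds of rules (the size and color values here are placeholders, not the settings used in the example):

#set text(size: 11pt)
#show heading: set text(fill: blue)

The set rule changes the default text size, while the show-set rule recolors every heading.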

We insert a heading with the = Heading syntax. One equals sign creates a top-level heading, two create a subheading, and so on. Typst has more lightweight markup like this; see the syntax reference for a full list.

Mathematical equations are enclosed in dollar signs. By adding extra spaces around the contents of an equation, we can put it into a separate block. Multi-letter identifiers are interpreted as Typst definitions and functions unless put into quotes. This way, we don't need backslashes for things like floor and sqrt. And phi.alt applies the alt modifier to the phi to select a particular symbol variant.

Now, we get to some scripting. To input code into a Typst document, we can write a hash followed by an expression. We define two variables and a recursive function to compute the n-th Fibonacci number. Then, we display the results in a center-aligned table. The table function takes its cells row-by-row. Therefore, we first pass the formulas $F_1$ to $F_10$ and then the computed Fibonacci numbers. We apply the spread operator (..) to both because they are arrays and we want to pass the arrays' items as individual arguments.

Text version of the code example.

#set page(width: 10cm, height: auto)
#set heading(numbering: "1.")

= Fibonacci sequence
The Fibonacci sequence is defined through the
_recurrence relation_ $F_n = F_(n-1) + F_(n-2)$.
It can also be expressed in closed form:

$ F_n = floor(1 / sqrt(5) phi.alt^n), quad
  phi.alt = (1 + sqrt(5)) / 2 $

#let count = 10
#let nums = range(1, count + 1)
#let fib(n) = (
  if n <= 2 { 1 }
  else { fib(n - 1) + fib(n - 2) }
)

The first #count numbers of the sequence are:

#align(center, table(
  columns: count,
  ..nums.map(n => $F_#n$),
  ..nums.map(n => str(fib(n))),
))

Install and use

You can get sources and pre-built binaries for the latest release of Typst from the releases page. This will give you Typst's CLI which converts Typst sources into PDFs.

# Creates `file.pdf` in working directory.
typst file.typ

# Creates PDF file at the desired path.
typst path/to/source.typ path/to/output.pdf

You can also watch source files and automatically recompile on changes. This is faster than compiling from scratch each time because Typst has incremental compilation.

# Watches source files and recompiles on changes.
typst --watch file.typ

If you prefer an integrated IDE-like experience with autocompletion and instant preview, you can also check out the Typst web app, which is currently in public beta.

Build from source

To build Typst yourself, you need to have the latest stable Rust installed. Then, you can build the CLI with the following command:

cargo build -p typst-cli --release

The optimized binary will be stored in target/release/.

Contributing

We would love to see contributions from the community. If you experience bugs, feel free to open an issue or send a PR with a fix. For new features, we would invite you to open an issue first so we can explore the design space together. If you want to contribute and are wondering how everything works, also check out the ARCHITECTURE.md file. It explains how the compiler works.

Design Principles

All of Typst has been designed with three key goals in mind: Power, simplicity, and performance. We think it's time for a system that matches the power of LaTeX, is easy to learn and use, all while being fast enough to realize instant preview. To achieve these goals, we follow three core design principles:

Simplicity through Consistency: If you know how to do one thing in Typst, you should be able to transfer that knowledge to other things. If there are multiple ways to do the same thing, one of them should be at a different level of abstraction than the other. E.g. it's okay that = Introduction and #heading[Introduction] do the same thing because the former is just syntax sugar for the latter.

Power through Composability: There are two ways to make something flexible: Have a knob for everything or have a few knobs that you can combine in many ways. Typst is designed with the second way in mind. We provide systems that you can compose in ways we've never even thought of. TeX is also in the second category, but it's a bit low-level and therefore people use LaTeX instead. But there, we don't really have that much composability. Instead, there's a package for everything (\usepackage{knob}).

Performance through Incrementality: All Typst language features must accommodate incremental compilation. Luckily, we have comemo, a system for incremental compilation which does most of the hard work in the background.


Download Details:

Author: typst
Source Code: https://github.com/typst/typst 
License: Apache-2.0 license

#rust #compiler #system 


Reth: Modular, Contributor-friendly & Blazing-fast Implementation

reth 🏗️🚧

Modular, contributor-friendly and blazing-fast implementation of the Ethereum protocol

The project is still work in progress, see the disclaimer below.

What is Reth? What are its goals?

Reth (short for Rust Ethereum, pronunciation) is a new Ethereum full node implementation that is focused on being user-friendly, highly modular, as well as being fast and efficient. Reth is an Execution Layer (EL) and is compatible with all Ethereum Consensus Layer (CL) implementations that support the Engine API. It is originally built and driven forward by Paradigm, and is licensed under the Apache and MIT licenses.

As a full Ethereum node, Reth allows users to connect to the Ethereum network and interact with the Ethereum blockchain. This includes sending and receiving transactions/logs/traces, as well as accessing and interacting with smart contracts. Building a successful Ethereum node requires creating a high-quality implementation that is both secure and efficient, as well as being easy to use on consumer hardware. It also requires building a strong community of contributors who can help support and improve the software.

More concretely, our goals are:

  1. Modularity: Every component of Reth is built to be used as a library: well-tested, heavily documented and benchmarked. We envision that developers will import the node's crates, mix and match, and innovate on top of them. Examples of such usage include but are not limited to spinning up standalone P2P networks, talking directly to a node's database, or "unbundling" the node into the components you need. To achieve that, we are licensing Reth under the Apache/MIT permissive license. You can learn more about the project's components here.
  2. Performance: Reth aims to be fast, so we used Rust and the Erigon staged-sync node architecture. We also use our Ethereum libraries (including ethers-rs and revm) which we’ve battle-tested and optimized via Foundry.
  3. Free for anyone to use any way they want: Reth is free open source software, built for the community, by the community. By licensing the software under the Apache/MIT license, we want developers to use it without being bound by business licenses, or having to think about the implications of GPL-like licenses.
  4. Client Diversity: The Ethereum protocol becomes more antifragile when no node implementation dominates. This ensures that if there's a software bug, the network does not finalize a bad block. By building a new client, we hope to contribute to Ethereum's antifragility.
  5. Support as many EVM chains as possible: We aspire that Reth can full-sync not only Ethereum, but also other chains like Optimism, Polygon, BNB Smart Chain, and more. If you're working on any of these projects, please reach out.
  6. Configurability: We want to solve for node operators that care about fast historical queries, but also for hobbyists who cannot operate on large hardware. We also want to support teams and individuals who want to sync both from genesis and via "fast sync". We envision that Reth will be configurable enough, providing configurable "profiles" for the tradeoffs that each team faces.

Status

The project is not ready for use. We hope to have full sync implemented sometime in Q1 2023, followed by optimizations. In the meantime, we're working on making sure every crate of the repository is well documented, abstracted and tested.

For Developers

Running Reth

See the Reth Book for instructions on how to run Reth.

Build & Test

The minimum Rust version required to build this project is 1.65.0, published 02.11.2022.

Prerequisites:

  • Debian
    • libclang
    • libclang-dev

To fully test Reth, you will need to have Geth installed, but it is possible to run a subset of tests without Geth.

First, clone the repository:

git clone https://github.com/paradigmxyz/reth
cd reth

Next, run the tests:

# Without Geth
cargo test --all

# With Geth
cargo test --all --features geth-tests

We recommend using cargo nextest to speed up testing. With nextest installed, simply substitute cargo test with cargo nextest run.
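For example, the test invocations above would translate roughly as follows (a sketch; --workspace is the non-deprecated spelling of --all, and nextest is installed once via cargo):

# Install nextest once
cargo install cargo-nextest

# Without Geth
cargo nextest run --workspace

# With Geth
cargo nextest run --workspace --features geth-tests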

Contributing

If you want to contribute, or follow along with contributor discussion, you can use our main telegram to chat with us about the development of Reth!

See our contributor docs for more information on the project.

A good starting point is Project Layout which gives an overview of the repository's structure, and descriptions for each package.

Our contributor guidelines can be found in CONTRIBUTING.md.

Getting Help

If you have any questions, first see if the answer to your question can be found in the book.

If the answer is not there, ask in the main Telegram chat mentioned above.

Security

See SECURITY.md.

Acknowledgements

Reth is a new implementation of the Ethereum protocol. In the process of developing the node we investigated the design decisions other nodes have made to understand what is done well, what is not, and where we can improve the status quo.

None of this would have been possible without them, so big shoutout to the teams below:

  • Geth: We would like to express our heartfelt gratitude to the go-ethereum team for their outstanding contributions to Ethereum over the years. Their tireless efforts and dedication have helped to shape the Ethereum ecosystem and make it the vibrant and innovative community it is today. Thank you for your hard work and commitment to the project.
  • Erigon (fka Turbo-Geth): Erigon pioneered the "Staged Sync" architecture that Reth is using, as well as introduced MDBX as the database of choice. We thank Erigon for pushing the state of the art research on the performance limits of Ethereum nodes.
  • Akula: Reth uses forks of the Apache versions of Akula's MDBX Bindings, FastRLP, and ECIES. Given that these packages were already released under the Apache License, and they implement standardized solutions, we decided not to reimplement them to iterate faster. We thank the Akula team for their contributions to the Rust Ethereum ecosystem and for publishing these packages.

🚧 WARNING: UNDER CONSTRUCTION 🚧

This project is work in progress and subject to frequent changes as we are still working on wiring up each individual node component into a full syncing pipeline.

It has not been audited for security purposes and should not be used in production yet.

We will be updating the documentation with the completion status of each component, as well as include more contributing guidelines (design docs, architecture diagrams, repository layouts) and "good first issues". See the "Contributing and Getting Help" section above for more.

We appreciate your patience until we get there. Until then, we are happy to answer all questions in the Telegram link above.


Download Details:

Author: Paradigmxyz
Source Code: https://github.com/paradigmxyz/reth 
License: Apache-2.0, MIT licenses found

#rust #system #modular #ethereum #blockchain 

software for hospitals and clinics

With the help of our cutting-edge software solutions, you can provide the best patient care in your clinic or hospital. Obtain the software for hospitals and clinics from the Health 360 - eMedical System to handle everything from appointment scheduling to monitoring medical records.

For More Details, Visit: http://bitly.ws/AHBP

#hospitalmanagementsystem #hospitalmanagement #eMS #eMedicalSystem #software #system #hospitals #clinics #HospitalManagementSoftware #emedical #healthcare #medical #emedical


ElasticFusion: Real-time Dense Visual SLAM System

ElasticFusion

Real-time dense visual SLAM system capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera.

Related Publications

Please cite this work if you make use of our system in any of your own endeavors:

1. What do I need to build it?

1.1. Ubuntu

Ubuntu 22.04 on Xorg, NVIDIA drivers 510.73.05, CUDA driver 11.6, CUDA toolkit 11.5 (essentially whatever is in the Ubuntu repos).

sudo apt install -y cmake-qt-gui git build-essential libusb-1.0-0-dev libudev-dev openjdk-11-jdk freeglut3-dev libglew-dev libsuitesparse-dev zlib1g-dev libjpeg-dev
git clone https://github.com/mp3guy/ElasticFusion.git
cd ElasticFusion/
git submodule update --init
cd third-party/OpenNI2/
make -j8
cd ../Pangolin/
mkdir build
cd build
cmake .. -DEIGEN_INCLUDE_DIR=$HOME/ElasticFusion/third-party/Eigen/ -DBUILD_PANGOLIN_PYTHON=false
make -j8
cd ../../..
mkdir build
cd build/
cmake ..
make -j8

2. How do I use it?

There are two subprojects in the repo:

  • The Core is the main engine which builds into a shared library that you can link into other projects and treat like an API.
  • The Tools, which contain the graphical interface used to run the system on either live sensor data or a logged data file.

The executable (ElasticFusion) can take a bunch of parameters when launching it from the command line. They are as follows:

  • -cal : Loads a camera calibration file specified as fx fy cx cy.
  • -l : Processes the specified .klg log file.
  • -p : Loads ground truth poses to use instead of estimated pose.
  • -c : Surfel confidence threshold (default 10).
  • -d : Cutoff distance for depth processing (default 3m).
  • -i : Relative ICP/RGB tracking weight (default 10).
  • -ie : Local loop closure residual threshold (default 5e-05).
  • -ic : Local loop closure inlier threshold (default 35000).
  • -cv : Local loop closure covariance threshold (default 1e-05).
  • -pt : Global loop closure photometric threshold (default 115).
  • -ft : Fern encoding threshold (default 0.3095).
  • -t : Time window length (default 200).
  • -s : Frames to skip at start of log.
  • -e : Cut off frame of log.
  • -f : Flip RGB/BGR.
  • -icl : Enable this if using the ICL-NUIM dataset (flips normals to account for negative focal length on that data).
  • -o : Open loop mode.
  • -rl : Enable relocalisation.
  • -fs : Frame skip if processing a log to simulate real-time.
  • -q : Quit when finished a log.
  • -fo : Fast odometry (single level pyramid).
  • -nso : Disables SO(3) pre-alignment in tracking.
  • -r : Rewind and loop log forever.
  • -ftf : Do frame-to-frame RGB tracking.
  • -sc : Showcase mode (minimal GUI).

Essentially, by default ./ElasticFusion will try to run off an attached ASUS sensor live. You can provide a .klg log file instead with the -l parameter. You can capture .klg format logs using either Logger1 or Logger2.

3. How do I just use the Core API?

The libefusion.so shared library which gets built by the Core is what you want to link against.

To then use the Core API, make sure to include the header file in your source file:

    #include <ElasticFusion.h>

Initialise the static configuration parameters once somewhere at the start of your program:

    Resolution::getInstance(640, 480);
    Intrinsics::getInstance(528, 528, 320, 240);

Create an OpenGL context before creating an ElasticFusion object, as ElasticFusion uses OpenGL internally. You can do this whatever way you wish; using Pangolin is probably easiest given it's a dependency:

    pangolin::Params windowParams;
    windowParams.Set("SAMPLE_BUFFERS", 0);
    windowParams.Set("SAMPLES", 0);
    pangolin::CreateWindowAndBind("Main", 1280, 800, windowParams);

Make an ElasticFusion object and start using it:

    ElasticFusion eFusion;
    eFusion.processFrame(rgb, depth, timestamp, currentPose, weightMultiplier);

See the source code of MainController.cpp to see more usage.

4. Datasets

We have provided a sample dataset, available for download here, which you can run easily with ElasticFusion. Launch it as follows:

./ElasticFusion -l dyson_lab.klg

5. License

ElasticFusion is freely available for non-commercial use only. Full terms and conditions which govern its use are detailed here and in the LICENSE.txt file.

6. FAQ

What are the hardware requirements?

A very fast nVidia GPU (3.5TFLOPS+), and a fast CPU (something like an i7). If you want to use a non-nVidia GPU you can rewrite the tracking code or substitute it with something else, as the rest of the pipeline is actually written in the OpenGL Shading Language.

How can I get performance statistics?

Download Stopwatch and run StopwatchViewer at the same time as ElasticFusion.

I ran a large dataset and got assert(graph.size() / 16 < MAX_NODES) failed

Currently there's a limit on the number of nodes in the deformation graph down to lazy coding (using a really wide texture instead of a proper 2D one). So we're bound by the maximum dimension of a texture, which is 16384 on modern cards/OpenGL. Either fix the code so this isn't a problem any more, or increase the modulo factor in Core/Shaders/sample.geom.

I have a nice new laptop with a good GPU but it's still slow

If your laptop is running on battery power the GPU will throttle down to save power, so that's unlikely to work (as an aside, Kintinuous will run at 30Hz on a modern laptop on battery power these days). You can try disabling SO(3) pre-alignment, enabling fast odometry, only using either ICP or RGB tracking and not both, running in open loop mode or disabling the tracking pyramid. All of these will cost you accuracy.

I saved a map, how can I view it?

Download Meshlab. Select Render->Shaders->Splatting.

The map keeps getting corrupted - tracking is failing - loop closures are incorrect/not working

Firstly, if you're running live and not processing a log file, ensure you're hitting 30Hz, this is important. Secondly, you cannot move the sensor extremely fast because this violates the assumption behind projective data association. In addition to this, you're probably using a primesense, which means you're suffering from motion blur, unsynchronised cameras and rolling shutter. All of these are aggravated by fast motion and hinder tracking performance.

If you're not getting loop closures and expecting some, pay attention to the inlier and residual graphs in the bottom right, these are an indicator of how close you are to a local loop closure. For global loop closures, you're depending on fern keyframe encoding to save you, which like all appearance-based place recognition methods, has its limitations.

Is there a ROS bridge/node?

No. The system relies on an extremely fast and tight coupling between the mapping and tracking on the GPU, which I don't believe ROS supports natively in terms of message passing.

This doesn't seem to work like it did in the videos/papers

A substantial amount of refactoring was carried out in order to open source this system, including rewriting a lot of functionality to avoid certain licenses and reduce dependencies. Although great care was taken during this process, it is possible that performance regressions were introduced and have not yet been discovered.


Download Details:

Author: mp3guy
Source Code: https://github.com/mp3guy/ElasticFusion 
License: View license

#machinelearning #cpluplus #system #realtime 


How to Require Ext-curl is Missing From Your System Ubuntu

In this article, we will see how to fix the "require ext-curl * is missing from your system" error in Ubuntu. This error appears when we set up a Laravel 9 project on an Ubuntu system. So, we will fix "require ext-curl * -> it is missing from your system" by installing or enabling PHP's curl extension in Ubuntu.

In short, let's see how to resolve the composer require ext-curl error by installing the PHP curl extension on Ubuntu.

When we set up the project using Composer, it shows an error like the one in the image below.

(Image: Composer error output reporting "require ext-curl * is missing".)

This error asks you to install the PHP curl extension on your system so that the Composer packages can run. You can install the PHP curl extension according to your PHP version.
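If you are not sure which PHP version your system is running, you can check it first with the following command:

php -v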

Fixed: require ext-curl * is missing from your system ubuntu

So, run the following command.

sudo apt-get install php-curl

Require ext-curl for PHP 8.2

Run the following command for PHP 8.2 version.

sudo apt-get install php8.2-curl

Require ext-curl for PHP 8.1

Run the following command for PHP 8.1 version.

sudo apt-get install php8.1-curl

Require ext-curl for PHP 7.4

Run the following command for PHP 7.4 version.

sudo apt-get install php7.4-curl

Require ext-curl for PHP 7.3

Run the following command for PHP 7.3 version.

sudo apt-get install php7.3-curl

Require ext-curl for PHP 7.2

Run the following command for PHP 7.2 version.

sudo apt-get install php7.2-curl
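Whichever version you installed, you can verify that the curl extension is now enabled before re-running Composer; the command below prints "curl" if the module is loaded:

php -m | grep curl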

Original article source at: https://websolutionstuff.com/

#ubuntu #system #require 


Openpilot: An Open Source Driver Assistance System

Openpilot

An open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for over 200 supported car makes and models


What is openpilot?

openpilot is an open source driver assistance system. Currently, openpilot performs the functions of Adaptive Cruise Control (ACC), Automated Lane Centering (ALC), Forward Collision Warning (FCW), and Lane Departure Warning (LDW) for a growing variety of supported car makes, models, and model years. In addition, while openpilot is engaged, a camera-based Driver Monitoring (DM) feature alerts distracted and asleep drivers. See more about the vehicle integration and limitations.

Running on a dedicated device in a car

To use openpilot in a car, you need four things

  • A supported device to run this software: a comma three.
  • This software. The setup procedure of the comma three allows the user to enter a URL for custom software. The URL openpilot.comma.ai will install the release version of openpilot. To install openpilot master, you can use installer.comma.ai/commaai/master, and replacing commaai with another GitHub username will install that user's fork.
  • One of the 200+ supported cars. We support Honda, Toyota, Hyundai, Nissan, Kia, Chrysler, Lexus, Acura, Audi, VW, and more. If your car is not supported but has adaptive cruise control and lane-keeping assist, it's likely able to run openpilot.
  • A car harness to connect to your car.

We have detailed instructions for how to mount the device in a car.

Running on PC

All openpilot services can run as usual on a PC without requiring special hardware or a car. You can also run openpilot on recorded or simulated data to develop or experiment with openpilot.

With openpilot's tools, you can plot logs, replay drives, and watch the full-res camera streams. See the tools README for more information.

You can also run openpilot in simulation with the CARLA simulator. This allows openpilot to drive around a virtual car on your Ubuntu machine. The whole setup should only take a few minutes but does require a decent GPU.

A PC running openpilot can also control your vehicle if it is connected to a webcam, a black panda, and a harness.

Community and Contributing

openpilot is developed by comma and by users like you. We welcome both pull requests and issues on GitHub. Bug fixes and new car ports are encouraged. Check out the contributing docs.

Documentation related to openpilot development can be found on docs.comma.ai. Information about running openpilot (e.g. FAQ, fingerprinting, troubleshooting, custom forks, community hardware) should go on the wiki.

You can add support for your car by following guides we have written for Brand and Model ports. Generally, a car with adaptive cruise control and lane keep assist is a good candidate. Join our Discord to discuss car ports: most car makes have a dedicated channel.

Want to get paid to work on openpilot? comma is hiring.

And follow us on Twitter.

User Data and comma Account

By default, openpilot uploads the driving data to our servers. You can also access your data through comma connect. We use your data to train better models and improve openpilot for everyone.

openpilot is open source software: the user is free to disable data collection if they wish to do so.

openpilot logs the road-facing cameras, CAN, GPS, IMU, magnetometer, thermal sensors, crashes, and operating system logs. The driver-facing camera is only logged if you explicitly opt-in in settings. The microphone is not recorded.

By using openpilot, you agree to our Privacy Policy. You understand that use of this software or its related services will generate certain types of user data, which may be logged and stored at the sole discretion of comma. By accepting this agreement, you grant an irrevocable, perpetual, worldwide right to comma for the use of this data.

Safety and Testing

  • openpilot observes ISO26262 guidelines, see SAFETY.md for more details.
  • openpilot has software-in-the-loop tests that run on every commit.
  • The code enforcing the safety model lives in panda and is written in C, see code rigor for more details.
  • panda has software-in-the-loop safety tests.
  • Internally, we have a hardware-in-the-loop Jenkins test suite that builds and unit tests the various processes.
  • panda has additional hardware-in-the-loop tests.
  • We run the latest openpilot in a testing closet containing 10 comma devices continuously replaying routes.

Directory Structure

.
├── cereal              # The messaging spec and libs used for all logs
├── common              # Library like functionality we've developed here
├── docs                # Documentation
├── opendbc             # Files showing how to interpret data from cars
├── panda               # Code used to communicate on CAN
├── third_party         # External libraries
├── system              # Generic services
    ├── camerad         # Driver to capture images from the camera sensors
    ├── clocksd         # Broadcasts current time
    ├── hardware        # Hardware abstraction classes
    ├── logcatd         # systemd journal as a service
    └── proclogd        # Logs information from /proc
└── selfdrive           # Code needed to drive the car
    ├── assets          # Fonts, images, and sounds for UI
    ├── athena          # Allows communication with the app
    ├── boardd          # Daemon to talk to the board
    ├── car             # Car specific code to read states and control actuators
    ├── controls        # Planning and controls
    ├── debug           # Tools to help you debug and do car ports
    ├── locationd       # Precise localization and vehicle parameter estimation
    ├── loggerd         # Logger and uploader of car data
    ├── manager         # Daemon that starts/stops all other daemons as needed
    ├── modeld          # Driving and monitoring model runners
    ├── monitoring      # Daemon to determine driver attention
    ├── navd            # Turn-by-turn navigation
    ├── sensord         # IMU interface code
    ├── test            # Unit tests, system tests, and a car simulator
    └── ui              # The UI

Download Details:

Author: Commaai
Source Code: https://github.com/commaai/openpilot 
License: MIT license

 #python #opensource #driver #system 


How to List Linux Services with Systemctl

In Linux, a service is a program that runs in the background. Services can be started on demand or at boot time.

If you are using Linux as your primary operating system or development platform, you will deal with different services such as a web server, ssh, or cron. Knowing how to list running services and check a service's status is important when debugging system issues.

Most recent Linux distributions use systemd as the default init system and service manager.

Systemd is a suite of tools for managing Linux systems. It is used to boot up the machine, manage services, automount filesystems, log events, set up the hostname, and perform other system tasks.

This article explains how to list services in Linux.

Listing Linux Services

Systemd uses the concept of units, which can be services, sockets, mount points, devices, etc. Units are defined using text files in ini format. These files include information about the unit, its settings, and commands to execute. The filename extensions define the unit file type. For example, system service unit files have a .service extension.

systemctl is a command-line utility that is used for controlling systemd and managing services. It is part of the systemd ecosystem and is available by default on all systems.

To get a list of all loaded service units, type:

sudo systemctl list-units --type service
UNIT          LOAD      ACTIVE SUB     DESCRIPTION                                                              
cron.service  loaded    active running Regular background program processing daemon 
...

Each line of output contains the following columns from left to right:

  • UNIT - The name of the service unit.
  • LOAD - Information about whether the unit file has been loaded in the memory.
  • ACTIVE - The high-level unit file activation state, which can be active, reloading, inactive, failed, activating, deactivating. It is a generalization of the SUB column.
  • SUB - The low-level unit file activation state. The value of this field depends on the unit type. For example, a unit of type service can be in one of the following states, dead, exited, failed, inactive, or running.
  • DESCRIPTION - Short description of the unit file.

By default, the command lists only the loaded active units. To see loaded but inactive units too, pass the --all option:

sudo systemctl list-units --type service --all

If you want to see all installed unit files, not only the loaded, use:

sudo systemctl list-unit-files
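You can also combine filters. For example, to list only the service units that are currently running:

sudo systemctl list-units --type service --state running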

Displaying Service Status

To check the status of a service, use the systemctl status command:

sudo systemctl status <service_name>.service

Where <service_name> is the name of the service unit you want to check. For example, to determine the current status of the nginx service, you would run:

sudo systemctl status nginx.service

You can omit the suffix “.service”; systemctl status nginx is the same as systemctl status nginx.service.

● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2020-12-23 19:13:50 UTC; 5s ago
       Docs: man:nginx(8)
    Process: 3061052 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
    Process: 3061063 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
   Main PID: 3061064 (nginx)
      Tasks: 2 (limit: 470)
     Memory: 6.0M
     CGroup: /system.slice/nginx.service
             ├─3061064 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
             └─3061065 nginx: worker process

Dec 23 19:13:50 linuxize.dev systemd[1]: Starting A high performance web server and a reverse proxy server...

The command will print the following information:

  • Loaded - Whether the service unit has been loaded and the full path to the unit file. It also shows whether the unit is enabled to start on boot time.
  • Active - Whether the service is active and running. If your terminal supports colors and the service is active and running, the dot (●) and “active (running)” part will be printed in green. The line also shows how long the service has been running.
  • Docs - The service documentation.
  • Process - Information about the service processes.
  • Main PID - The service PID.
  • Tasks - The number of tasks accounted for the unit and the tasks limit.
  • Memory - Information about used memory.
  • CGroup - Information about related Control Groups.

If you only want to check the service status, use the systemctl is-active command. For example, to verify that the nginx service is running, you would run:

systemctl is-active nginx.service
active

The command will show you the service status. If the service is active, the command returns an exit status of 0, which can be useful when using the command inside shell scripts.
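For example, a minimal sketch of how that exit status can drive a script (nginx is just an example service name):

if systemctl is-active --quiet nginx.service; then
    echo "nginx is running"
else
    echo "nginx is not running"
fi

The --quiet flag suppresses the output so only the exit status is used.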

Conclusion

We have shown you how to use the systemctl command to list Linux services and check their status.

If you have any questions or feedback, feel free to comment below.

Original article source at: https://linuxize.com/

#linux #services #system


How to Fix Python WinError 2 The System Cannot Find The File Specified

When interacting with the Operating System in Python, you may encounter the following error:

FileNotFoundError: [WinError 2] The system cannot find the file specified

This error usually occurs when you try to access a file in Windows Operating System.

One code example that causes this error is when you call the os.rename() method as follows:

import os

os.rename('file.txt', 'output.txt')

The following sections show examples that cause the error and how to fix it.

1. You specify the wrong file name

Suppose you have the following files in your current directory:

.
├── file1.txt
└── main.py

Next, suppose you want to change the file name from file1.txt to output.txt.

You then write the following code in main.py file:

import os

os.rename('file.txt', 'output.txt')

Because there’s no file named file.txt in the current working directory, Python responds with the following error:

Traceback (most recent call last):
  File "main.py", line 3, in <module>
    os.rename('file.txt', 'output.txt')
FileNotFoundError: [WinError 2] The system cannot find the file specified:
 'file.txt' -> 'output.txt'

To fix this error, you need to make sure that you’re specifying the old file name correctly.

In the example above, file.txt needs to be replaced with file1.txt:

import os

os.rename('file1.txt', 'output.txt')  # ✅

With this, the error should disappear and the file should be renamed successfully.
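If you want a clearer failure when the source file is missing, a small defensive sketch (reusing the file names from the example) can check first:

import os

src = 'file1.txt'
if os.path.exists(src):
    os.rename(src, 'output.txt')
else:
    print(f"{src} not found in {os.getcwd()}")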

2. The file is in another directory

When renaming files with os.rename(), it’s possible that the file you want to rename is in a different directory.

Suppose you have the following directory structure:

.
├── assets
│   └── image.png
└── main.py

The image.png file is inside the assets/ folder, so if you run the following code in main.py:

import os

os.rename('image.png', 'photo.png')

You’ll get the same error as before! You need to specify the right relative path from the directory where you run main.py to reach the file you want to rename.

The right code to rename the file is as follows:

import os

os.rename('assets/image.png', 'assets/photo.png')

You need to specify the relative path to the file in both the source and destination arguments of the os.rename() method.

3. You don’t have Administrator access

Sometimes, this error occurs when you try to create a virtual environment with the venv module.

Suppose you run the following command from the Command Prompt:

python -m venv my_env

The error response is as follows:

Error: [WinError 2] The system cannot find the file specified: 
'C:\Users\sebhastian\documents\my_env'

This error usually happens because the Command Prompt has no permission to create files and folders.

To fix this, open the Command Prompt as Administrator and run the script again.

4. Sublime Text: Can’t find python.exe

If you’re running Python scripts using Sublime Text, you might get this error when Sublime Text can’t find the python.exe file in your PATH environment variable.

The error is as follows:

[WinError 2] The system cannot find the file specified
[path: C:\Users\Home\AppData\Local\Programs\Python\Python39\;
C:\Users\Home\AppData\Local\Programs\Python\Python39\Scripts\;
C:\Users\Home\AppData\Local\Programs\Python\Python39\]

Open the path shown above in your Windows Explorer. If you can see the python.exe file exists, then the problem might be with the Build System for Python in Sublime Text.

If you use a custom Build System, try this simple configuration first:

{
   "cmd": ["python", "$file"]
}

If it works, then you probably misconfigured the Build System for Python.

For more information on creating Build Systems, please see the Sublime Text Build Systems documentation

Conclusion

You’ve seen four different examples that can cause the [WinError 2] “The system cannot find the file specified” error in Python and how to fix them.

Most likely, this error occurs when a file that’s needed doesn’t exist. It’s possible that the file has been renamed, deleted, or moved from the directory where you expect to find it.

I hope this tutorial is helpful. Thanks for reading! 🙏

Original article source at: https://sebhastian.com/

#python #system 


How to Fix Python [WinError 3] The System Cannot Find The Path Specified

Python error [WinError 3] is a variation of the [WinError 2] error. The complete error message is as follows:

FileNotFoundError: [WinError 3] The system cannot find the path specified

This error usually occurs when you use the Python os module to interact with the Windows filesystem.

While [WinError 2] means that a file can’t be found, [WinError 3] means that the path you specified doesn’t exist.

This article will show you several examples that can cause this error and how to fix it.

1. Typed the wrong name when calling the os.listdir() method

Suppose you have the following directory structure on your computer:

.
├── assets
│   ├── image.png
│   └── photo.png
└── main.py

Next, suppose you try to get the names of the files inside the assets/ directory using the os.listdir() method.

But you specified the wrong directory name when calling the method as follows:

import os

files = os.listdir("asset")
print(files)

Because you have an assets folder and not asset, the os module can’t find the directory:

Traceback (most recent call last):
  File "main.py", line 3, in <module>
    files = os.listdir("asset")
            ^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'asset'

To fix this error, make sure that the path you passed as the parameter to the listdir() method exists.

For our example, replacing asset with assets should fix the error.
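When in doubt, you can print where the script is running from and which entries it actually sees; a quick sketch:

import os

print(os.getcwd())       # the current working directory
print(os.listdir("."))   # the entries visible from there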

2. Specifying a non-existent path in the os.rename() method

The error also appears when you specified the wrong path name when calling the os.rename() method.

Suppose you have a different directory structure on your computer as follows:

.
├── docs
│   └── file.txt  
└── main.py

Now, you want to rename the file.txt file to article.txt.

You called the os.rename() method as follows:

import os

os.rename("doc/file.txt", "article.txt")

The code above incorrectly specified the path docs/ as doc/, so the error is triggered:

Traceback (most recent call last):
  File "main.py", line 3, in <module>
    os.rename("doc/file.txt", "article.txt")
FileNotFoundError: [WinError 3] The system cannot find the path specified: 
'doc/file.txt' -> 'article.txt'

To fix this error, you need to specify the correct path to the file, which is docs/file.txt. Note that with plain article.txt as the destination, the renamed file ends up in the current directory; use docs/article.txt if you want it to stay in the same folder.

Please note that the extension of the file must be specified in the arguments. If you type file.txt as file, then you’ll get the same error.

This is because Python will think you’re instructing to rename a directory or folder and not a file.

When renaming a file, always include the file extension.

Also, keep in mind that directory and file names can be case-sensitive depending on the filesystem, so it is safest to match the capitalization exactly.

3. Use absolute path instead of relative

At times, you might want to access a folder or file that’s a bit difficult to reach from the location of your script.

Suppose you have a directory structure as follows on your computer:

. C:
├── assets
│   └── text
│       └── file.txt
└── scripts
    └── test
        └──  main.py

In this structure, the path to main.py is C:/scripts/test/main.py, while file.txt is in C:/assets/text/file.txt.

Suppose you want to rename file.txt to article.txt, this is how you specify the name with relative paths:

import os

os.rename("../../assets/text/file.txt", "../../assets/text/article.txt")

It’s easy to specify the wrong path when using relative paths, so it’s recommended to use an absolute path when the path is complex.

Here’s what the arguments look like using absolute paths:

import os

os.rename("C:/assets/text/file.txt", "C:/assets/text/article.txt")

As you can see, absolute paths are easier to read and understand. In Windows, the absolute path usually starts with the drive letter you have in your system like C: or D:.
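You can also let Python compute the absolute path for you, which is handy for double-checking what a relative path resolves to:

import os

# Resolves the relative path against the current working directory
print(os.path.abspath("../../assets/text/file.txt"))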

To find the absolute path of a file, right-click on the file and select Properties from the context menu.

You’ll see the location of the file as follows:

(Image: the file's location shown in the Windows Properties dialog.)

Add the file name at the end of the location path, and you get the absolute path.

I hope this tutorial is helpful. See you in other articles! 👋

Original article source at: https://sebhastian.com/

#python #system #win #error 


WCS.jl: Astronomical World Coordinate Systems Library for Julia

WCS.jl

Astronomical World Coordinate System library for Julia.

This package wraps the WCSLIB C library.

Example

julia> using WCS

# create a transformation from scratch
julia> wcs = WCSTransform(2;
                          cdelt = [-0.066667, 0.066667],
                          ctype = ["RA---AIR", "DEC--AIR"],
                          crpix = [-234.75, 8.3393],
                          crval = [0., -90],
                          pv    = [(2, 1, 45.0)])
WCSTransform(naxis=2)

# ... or from a FITS header
julia> wcs_array = WCS.from_header(header)

julia> wcs = wcs_array[1]


# convert pixel -> world coordinates
julia> pixcoords = [0.0  24.0  45.0;  # x coordinates
                    0.0  38.0  98.0]  # y coordinates

julia> worldcoords = pix_to_world(wcs, pixcoords)
2x3 Array{Float64,2}:
 267.965   276.539   287.771 
 -73.7366  -71.9741  -69.6781


# convert world -> pixel coordinates
julia> pixcoords = world_to_pix(wcs, worldcoords)
2x3 Array{Float64,2}:
  1.16529e-12  24.0  45.0
 -7.10543e-14  38.0  98.0


# convert a WCSTransform to a FITS header
header = WCS.to_header(wcs)

# check what underlying C library version is being used.
julia> WCS.wcslib_version()
v"5.13.0"

Download Details:

Author: kbarbary
Source Code: https://github.com/kbarbary/WCS.jl 
License: MIT license

#julia #system #library 


Swift-win32: A Windows Application Framework for Swift

Swift/Win32 - A Swift Application Framework for Windows

Swift/Win32 aims to provide an MVC model for writing applications on Windows. It provides Swift friendly wrapping of the Win32 APIs, much like MFC did for C++.

Swift/Win32 Screenshot

Build Requirements

  • Swift 5.4 or newer
  • Windows SDK 10.0.107763 or newer
  • CMake 3.16 or newer

Building

This project requires Swift 5.4 or newer. You can use the snapshot binaries from swift.org, download the nightly build from Azure, or build the Swift compiler from source.

Recommended (CMake)

The following example session shows how to build with CMake 3.16 or newer.

cmake -B build -D BUILD_SHARED_LIBS=YES -D CMAKE_BUILD_TYPE=Release -D CMAKE_Swift_FLAGS="-sdk %SDKROOT%" -G Ninja -S .
ninja -C build SwiftWin32 UICatalog

%CD%\build\bin\UICatalog.exe

Swift Package Manager

Building this project with swift-package-manager is supported, although CMake is recommended for ease. The Swift Package Manager based build is required for code completion via SourceKit-LSP. It also allows for the use of Swift/Win32 in other applications using SPM. In order to use SPM to build this project, additional post-build steps are required to run the demo applications.

The following limitations are known:

  1. It is not possible to deploy auxiliary files which are required for Swift/Win32 based applications to function to the correct location.
  2. It is not possible to build and run multiple demo projects as the auxiliary files collide.
swift build --product UICatalog
mt -nologo -manifest Examples\UICatalog\UICatalog.exe.manifest -outputresource:.build\x86_64-unknown-windows-msvc\debug\UICatalog.exe
copy Examples\UICatalog\Info.plist .build\x86_64-unknown-windows-msvc\debug\
.build\x86_64-unknown-windows-msvc\debug\UICatalog.exe

In order to get access to the manifest tool (mt), the build and testing should occur in an x64 Native Tools Command Prompt for VS2019.

Testing

The current implementation is still in flux and many of the interfaces we expect to be present are not yet implemented. Because clearly indicating the missing surface makes it easier to focus on what needs to be accomplished, there are many instances of interfaces being declared but not implemented. Most of these sites will abort if they are reached. In order to enable testing for scenarios which may interact with these cases, a special condition has been added as ENABLE_TESTING to allow us to bypass the missing functionality.

You can run tests by adding that as a flag when invoking the SPM test command as:

swift test -Xswiftc -DENABLE_TESTING

Download Details:

Author: Compnerd
Source Code: https://github.com/compnerd/swift-win32 
License: BSD-3-Clause license

#swift #window  #ui #system #win


How to Export Variables in Bash

Bash variables are quite different from variables in other programming languages. Variables in bash do not require declaration; simply assigning data to a name creates the variable. User-defined variables in the bash shell are considered local variables, which means that a shell variable is not passed down to the shell's child processes. Variables must be exported by the user before they can be utilized by child processes. The "export" builtin of bash exports the given variables to the environment, where all child processes running inside the shell can see them; such exported variables are commonly called environment variables.

Environment variables can be exported to child shells by being labeled with the export command. The export command also notifies the active session of any modifications made to the exported variable. The export command takes optional flags as its first argument and, as its second argument, the name of the variable to be made visible to subshells.
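For reference, the general form of the command, as reported by bash's built-in help export, is:

export [-fn] [name[=value] ...] or export -p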

Example 1: Usage of the Export Command in Bash.

Here, we have simply run the export command in our bash shell, which displays the environment variables that are exported in our Linux system:

export


The export command of bash has some flags which provide different functionalities. The following bash session uses the export command with the "-p" flag, which lists all exported variables of the current shell. When we enter the command "export -p" in the bash shell, the list of all the shell's exported names is returned as follows:

export -p


The system environment variables are passed to all new processes. We can also remove the export attribute from a variable using another flag, "-n". The export command given the "-n" flag together with a variable name (a placeholder name is used below) un-exports that variable, so that it is once again limited to the running shell session:

export -n VARNAME

Example 2: Usage of the Export Command in Bash to Export Variables.

The usage of the export command by itself was explained in the previous section. Now, we use the export command to export a bash shell variable. Exporting a variable enables all the commands executed in the shell to access it. Here, we have used the export command with the variable name "Msg", initialized with a string value. Then, we used the echo command with the "$Msg" variable, which displays the string value stored in "Msg".

export Msg=" Hello World"
echo $Msg

 

The environment variable "Msg" created using the export command can also be deleted. The unset command, given the variable name "Msg", removes the variable and its value. When we echo "$Msg" afterwards, empty output is returned because the variable was removed by the unset command.

unset Msg
echo $Msg

 

Example 3: Usage of the Export Command in Bash for Exporting Functions or Variables.

Exporting variables was demonstrated with the preceding commands. We can also use the export command of the bash shell to export a bash function. In the following shell, we first define the function "func()", whose body uses the echo command to print the statement "Bash shell script". Then, on the next line, we use the export command with the "-f" option. The "-f" option tells export that the name refers to a function; without "-f", the name would be treated as a variable instead. The command "export -f" is given the function name "func" below. After that, we start a new bash shell with the bash command and then call "func", which prints the statement inside it.

func() { echo "Bash shell script";}
export -f func  
bash
func

 

Example 4: Usage of the Export Command in Bash to Export the Variable with the Value.

The export command also enables us to assign a value before exporting the variable. In the following bash session, we first declare the variable "x" with the numerical value "10". After that, we use the export command and pass the variable "x" to it. Then, to print the value of the environment variable "x" in the current shell, we call the printenv command. The printenv command returns the value "10" of the exported variable "x".

x=10
export x  
printenv x

 

Example 5: Usage of the Export Command in Bash by Using the Function.

Now, we create a separate bash file to demonstrate the export command. In the following command, we first invoke echo with a string, enclosed in double quotes, that contains the export command. The export command sets the environment variable "Str", defined with the string 'My new statement' wrapped in single quotes. The output is redirected with the ">" operator into the file "bashScript.sh". Next, we use the bash "source" command with the file name "bashScript.sh" to import the exported variable into the bash shell without spawning a subshell. After executing the source command, echoing the variable displays the exported string value in the bash shell:

echo "export Str='My new statement'" > bashScript.sh
source bashScript.sh

echo $str

 

Conclusion

The built-in export command of the bash shell is intended to export environment variables to subshells. We have illustrated the export command with a few examples and described its associated options "-p", "-n", and "-f". A variable set with the export command can be reused and accessed from any current bash shell. We have also shown an exported variable being accessed through a bash script.

Original article source at: https://linuxhint.com/

#bash #variables #linux #system 
