Brooke Giles

TLS Setup in Spring

Learn how to set up TLS in Spring.

Secure communication plays an important role in modern applications. Communication between client and server over plain HTTP is not secure. For a production-ready application, we should enable HTTPS via the TLS (Transport Layer Security) protocol in our application. In this tutorial, we’ll discuss how to enable TLS technology in a Spring Boot application.

#spring-boot #java #programming #developer


Enhance Amazon Aurora Read/Write Capability with ShardingSphere-JDBC

1. Introduction

Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown up to 5x the throughput of stock MySQL and 3x that of stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora delivers enhancements and innovations in both storage and compute, and in both horizontal and vertical scaling.

Aurora supports up to 128TB of storage capacity and dynamic scaling of the storage layer in units of 10GB. In terms of compute, Aurora supports scalable configurations for multiple read replicas: each region can have up to 15 additional Aurora replicas. In addition, Aurora provides a multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while Global Database enables a single database cluster to span multiple AWS Regions with low latency.

Aurora already provides great scalability as user data volume grows. Can it handle even more data and support more concurrent access? You may consider using sharding to spread the data across multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides a reference for choosing between Proxy and JDBC when sharding.

1.1 Why sharding is needed

AWS Aurora offers a single relational database. Its hosting architectures, such as primary/secondary, multi-primary, and global database, can satisfy a wide range of architectural scenarios. However, Aurora doesn't provide direct support for sharding, and sharding takes a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, several problems have to be solved, such as cross-node joins, associated queries, distributed transactions, SQL sorting, paging, function calculation, global primary keys, capacity planning, and secondary capacity expansion after sharding.

1.2 Sharding methods

It is generally accepted that query time is optimal when a MySQL table holds fewer than 10 million rows, because at that size the height of its B-tree index stays between 3 and 5. Data sharding reduces the amount of data in a single table while distributing the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.

1. Advantages of vertical sharding

  • Reduces the coupling of business systems and makes responsibilities clearer.
  • Enables hierarchical management, maintenance, monitoring, and expansion of data for different businesses, similar to microservice governance.
  • In high-concurrency scenarios, vertical sharding removes, to some extent, the bottlenecks of I/O, database connections, and hardware resources on a single machine.

2. Disadvantages of vertical sharding

  • After splitting databases, joins can only be implemented through interface aggregation, which increases development complexity.
  • After splitting databases, processing distributed transactions is complex.
  • If a single table still holds a large amount of data, horizontal sharding is also required.

3. Advantages of horizontal sharding

  • It removes the performance bottlenecks of large single-database data volumes and high concurrency, and increases system stability and load capacity.
  • Business modules do not need to be split, and only minor modifications are required on the application client.

4. Disadvantages of horizontal sharding

  • Transaction consistency across shards is hard to guarantee.
  • The performance of associated queries in cross-database joins is poor.
  • Scaling the data multiple times is difficult, and maintenance is a heavy workload.

Based on the analysis above, and on the available studies of popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to introduce how the combination of these two products meets various forms of sharding and solves the problems that sharding brings.

ShardingSphere is an open source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.

2. ShardingSphere introduction

The characteristics of Sharding-JDBC are:

  1. With the client connecting directly to the database, it provides services in the form of a jar and requires no extra deployment or dependencies.
  2. It can be considered an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.
  3. It is applicable to any ORM framework based on JDBC, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct use of JDBC.
  4. It supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, or HikariCP.
  5. It supports any JDBC-standard database: MySQL, Oracle, SQL Server, PostgreSQL, and any other database accessible through JDBC.
  6. Sharding-JDBC adopts a decentralized architecture, applicable to high-performance, lightweight OLTP applications developed in Java.

Hybrid Structure Integrating Sharding-JDBC and Applications

Sharding-JDBC’s core concepts

Data node: The smallest unit of a data shard, consisting of a data source name and a table name, such as ds_0.product_order_0.

Actual table: A physical table that really exists in the horizontally sharded database, such as the product order tables product_order_0, product_order_1, and product_order_2.

Logic table: The logical name of a group of horizontally sharded tables with the same schema. For instance, the logic table of the order tables product_order_0, product_order_1, and product_order_2 is product_order.

Binding table: The primary table and its joined tables that share the same sharding rules. For example, the product_order and product_order_item tables are both sharded by order_id, so they are binding tables of each other. Cartesian-product correlation will not appear in multi-table join queries between binding tables, so query efficiency increases greatly.

Broadcast table: A table that exists in all sharding data sources, with identical schema and data in every database. It suits small tables that need to be joined with large tables in queries, such as dictionary and configuration tables.

3. Testing ShardingSphere-JDBC

3.1 Example project

Download the example project code locally. To ensure the stability of the test code, we chose version shardingsphere-example-4.0.0.

git clone https://github.com/apache/shardingsphere-example.git

Project description:

shardingsphere-example
  ├── example-core
  │   ├── config-utility
  │   ├── example-api
  │   ├── example-raw-jdbc
  │   ├── example-spring-jpa #spring+jpa integration-based entity,repository
  │   └── example-spring-mybatis
  ├── sharding-jdbc-example
  │   ├── sharding-example
  │   │   ├── sharding-raw-jdbc-example
  │   │   ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
  │   │   ├── sharding-spring-boot-mybatis-example
  │   │   ├── sharding-spring-namespace-jpa-example
  │   │   └── sharding-spring-namespace-mybatis-example
  │   ├── orchestration-example
  │   │   ├── orchestration-raw-jdbc-example
  │   │   ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
  │   │   └── orchestration-spring-namespace-example
  │   ├── transaction-example
  │   │   ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
  │   │   └── transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
  │   ├── other-feature-example
  │   │   ├── hint-example
  │   │   └── encrypt-example
  ├── sharding-proxy-example
  │   └── sharding-proxy-boot-mybatis-example
  └── src/resources
        └── manual_schema.sql  

Configuration file description:

application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties       #database sharding only profile
application-sharding-master-slave.properties    #sharding and read/write splitting profile
application-sharding-tables.properties          #table split profile
application.properties                         #spring boot profile

Code logic description:

The following is the entry class of the Spring Boot application. Execute it to run the project.
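A minimal sketch of such an entry class (the class name here is assumed; check the example project for the actual one):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Hypothetical entry class; the actual class name in shardingsphere-example may differ.
@SpringBootApplication
public class ExampleApplication {
    public static void main(String[] args) {
        // Boots the Spring context; Sharding-JDBC builds its data sources from the active profile.
        SpringApplication.run(ExampleApplication.class, args);
    }
}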

The execution logic of the demo is walked through in the test sections below.

3.2 Verifying read/write splitting

As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to serve writes and strongly consistent reads, and a read-only endpoint to serve reads that do not require strong consistency. Aurora's replication lag is within single-digit milliseconds, far lower than MySQL's binlog-based logical replication, so a large share of the read load can be directed to the read-only endpoint.

Through a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, further improving the processing capability of the system. Read/write splitting improves the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides the primary/secondary architecture in a fully managed form, but upper-layer applications still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of each SQL statement and certain routing policies.

ShardingSphere-JDBC provides read/write splitting, and because it is integrated with the application, the complex wiring between the application and the database cluster can be lifted out of the application code. Developers manage the shards through configuration files and combine them with ORM frameworks such as Spring JPA or MyBatis, completely separating the duplicated routing logic from the code. This greatly improves maintainability and reduces the coupling between code and database.
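Because Sharding-JDBC sits behind the standard JDBC interface, the application code can stay a plain repository. A minimal sketch, assuming a hypothetical OrderEntity mapped to the t_order logic table:

import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical repository: Sharding-JDBC routes reads to the slave data sources and
// writes to the master transparently, with no routing code in the application.
public interface OrderRepository extends JpaRepository<OrderEntity, Long> {
}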

3.2.1 Setting up the database environment

Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.

3.2.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description:

Replace the highlighted values with your own environment configuration.

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties sharding-jdbc profile description:

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# enable the shardingsphere log; the output shows the conversion from logical SQL to actual SQL
spring.shardingsphere.props.sql.show=true


3.2.3 Test and verification process description

  • Test environment data initialization: Spring JPA initialization automatically creates tables for testing.

  • Write data to the master instance

As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.

  • Data query operations are performed on the slave library.

As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in round-robin fashion.

[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, 
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0 
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id 
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, 
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1 

Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.

@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}

3.2.4 Verifying Aurora failover scenario

The Aurora database environment adopts the configuration described in Section 3.2.1.

3.2.4.1 Verification process description

  1. Start the Spring Boot project.

  2. Perform a failover on Aurora's console.

  3. Execute the REST API request.

  4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers (see the sketch after this list).

  5. The following figure shows the failover process as the code executes. It takes about 37 seconds from the last successful SQL write to the next successful SQL write. That is, the application recovers from the Aurora failover automatically, and the recovery time is about 37 seconds.
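A minimal sketch of step 4, assuming the endpoint accepts an empty POST body (adjust the request to the example project's actual API):

# Hypothetical test loop: POST once per second and log whether the write succeeded.
# curl -f makes HTTP errors (e.g. 500 while the writer is failing over) count as failures.
while true; do
    if curl -sf -X POST http://localhost:8088/save-user -o /dev/null; then
        echo "$(date +%T) write ok"
    else
        echo "$(date +%T) write failed"
    fi
    sleep 1
done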

3.3 Testing table sharding-only function

3.3.1 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties sharding-jdbc profile description

## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
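# (assumed for completeness: the t_order data-node and table-strategy lines seem to be
# missing from this excerpt; they mirror the t_order_item rules below)
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}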
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true


3.3.2 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With the Sharding-JDBC routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table and there is only one data source here, a single t_address table is created. For t_order, two physical tables, t_order_0 and t_order_1, are created.

2. Write operation

As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 and t_order_1 according to the table-splitting rules.

Because t_order and t_order_item are bound, a record in order_item and its associated order record are placed in physical tables with the same shard suffix.

3. Read operation

As shown in the figure below, join queries on order and order_item under the binding-table configuration are precisely routed to the matching physical shards based on the binding relationship.

Join queries on order and order_item without the binding configuration traverse all shard combinations.
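A sketch of the difference, using simplified SQL (column lists trimmed for readability):

-- logic SQL issued by the application
SELECT * FROM t_order o JOIN t_order_item i ON o.order_id = i.order_id;

-- with the binding relation: one precisely routed query per shard, no Cartesian product
SELECT * FROM t_order_0 o JOIN t_order_item_0 i ON o.order_id = i.order_id;
SELECT * FROM t_order_1 o JOIN t_order_item_1 i ON o.order_id = i.order_id;

-- without the binding relation: every shard combination is queried
SELECT * FROM t_order_0 o JOIN t_order_item_0 i ON o.order_id = i.order_id;
SELECT * FROM t_order_0 o JOIN t_order_item_1 i ON o.order_id = i.order_id;
SELECT * FROM t_order_1 o JOIN t_order_item_0 i ON o.order_id = i.order_id;
SELECT * FROM t_order_1 o JOIN t_order_item_1 i ON o.order_id = i.order_id;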

3.4 Testing database sharding-only function

3.4.1 Setting up the database environment

Create two instances on Aurora: ds_0 and ds_1.

When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.

3.4.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true


3.4.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's database-splitting and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1. In all, the three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Records written to the tables t_order and t_order_item are routed to the table on the corresponding instance according to the database sharding column and routing policy.

3. Read operation

Order queries are routed to the corresponding Aurora instance according to the database sharding routing rules.

Address queries: since address is a broadcast table, a data source is randomly selected from the available nodes and its t_address table is queried.

As shown in the figure below, join queries on order and order_item under the binding-table configuration are precisely routed to the matching physical shards based on the binding relationship.

3.5 Verifying combined database and table sharding

3.5.1 Setting up the database environment

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1.

When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the global table t_address will be created on both Aurora instances.

3.5.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true


3.5.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1. In all, the three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Records written to the tables t_order and t_order_item are routed to the physical table on the corresponding instance according to the database sharding column and routing policy.

3. Read operation

The read operation is similar to the database-splitting verification described in Section 3.4.3.

3.6 Testing database sharding, table sharding and read/write splitting function

3.6.1 Setting up the database environment

The following figure shows the physical tables of the created database instances.

3.6.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties sharding-jdbc profile description

The URL, username, and password of the database need to be changed to your own database parameters.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0.username= 
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0_slave_0.username= 
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0_slave_1.username= 
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1.username= 
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true


3.6.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1. In all, the three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Records written to the tables t_order and t_order_item are routed to the physical table on the corresponding instance according to the database sharding column and routing policy.

3. Read operation

The join query operations on order and order_item under the binding table are shown below.

4. Conclusion

As an open source product focusing on database enhancement, ShardingSphere does well in terms of community activity, product maturity, and documentation richness.

Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer like Proxy, so operation and maintenance complexity is reduced, and its latency is theoretically lower than a proxy's because no intermediate layer sits on the path. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server.

However, because Sharding-JDBC is integrated into the application, it only supports the Java language for now and is strongly tied to the application program. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, so switching to other middleware later requires relatively small changes.

In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want to bother with introducing an intermediate layer.

Author

Sun Jinhua

A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for e-commerce platforms of automotive companies. He also worked as a senior engineer at a globally leading communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture design, databases, middleware, IoT, etc.

How To Secure A Linux Server

An evolving how-to guide for securing a Linux server that, hopefully, also teaches you a little about security and why it matters.

Introduction

Guide Objective

This guide's purpose is to teach you how to secure a Linux server.

There are a lot of things you can do to secure a Linux server and this guide will attempt to cover as many of them as possible. More topics/material will be added as I learn, or as folks contribute.


Why Secure Your Server

I assume you're using this guide because you, hopefully, already understand why good security is important. That is a heavy topic unto itself and breaking it down is out-of-scope for this guide. If you don't know the answer to that question, I advise you research it first.

At a high level, the second a device, like a server, is in the public domain -- i.e. visible to the outside world -- it becomes a target for bad-actors. An unsecured device is a playground for bad-actors who want access to your data, or to use your server as another node for their large-scale DDoS attacks.

What's worse is, without good security, you may never know if your server has been compromised. A bad-actor may have gained unauthorized access to your server and copied your data without changing anything, so you'd never know. Or your server may have been part of a DDoS attack without your knowledge. Look at many of the large scale data breaches in the news -- the companies often did not discover the data leak or intrusion until long after the bad-actors were gone.

Contrary to popular belief, bad-actors don't always want to change something or lock you out of your data for money. Sometimes they just want the data on your server for their data warehouses (there is big money in big data) or to covertly use your server for their nefarious purposes.


Why Yet Another Guide

This guide may appear duplicative/unnecessary because there are countless articles online that tell you how to secure Linux, but the information is spread across different articles that cover different things in different ways. Who has time to scour through hundreds of articles?

As I was going through research for my Debian build, I kept notes. At the end I realized that, along with what I already knew, and what I was learning, I had the makings of a how-to guide. I figured I'd put it online to hopefully help others learn, and save time.

I've never found one guide that covers everything -- this guide is my attempt.

Many of the things covered in this guide may be rather basic/trivial, but most of us do not install Linux every day and it is easy to forget those basic things.

IT automation tools like Ansible, Chef, Jenkins, Puppet, etc. help with the tedious task of installing/configuring a server but IMHO they are better suited for multiple or large scale deployments. IMHO, the overhead required to use those kinds of automation tools is wholly unnecessary for a one-time single server install for home use.

Other Guides

There are many guides provided by experts, industry leaders, and the distributions themselves. It is not practical, and sometimes against copyright, to include everything from those guides. I recommend you check them out before starting with this guide.


To Do / To Add

Guide Overview

About This Guide

This guide...

  • ...is a work in progress.
  • ...is focused on at-home Linux servers. All of the concepts/recommendations here apply to larger/professional environments but those use-cases call for more advanced and specialized configurations that are out-of-scope for this guide.
  • ...does not teach you about Linux, how to install Linux, or how to use it. Check https://linuxjourney.com/ if you're new to Linux.
  • ...is meant to be Linux distribution agnostic.
  • ...does not teach you everything you need to know about security nor does it get into all aspects of system/server security. For example, physical security is out of scope for this guide.
  • ...does not talk about how programs/tools work, nor does it delve into their nooks and crannies. Most of the programs/tools this guide references are very powerful and highly configurable. The goal is to cover the bare necessities -- enough to whet your appetite and make you hungry enough to want to go and learn more.
  • ...aims to make it easy by providing code you can copy-and-paste. You might need to modify the commands before you paste so keep your favorite text editor handy.
  • ...is organized in an order that makes logical sense to me -- i.e. securing SSH before installing a firewall. As such, this guide is intended to be followed in the order it is presented but it is not necessary to do so. Just be careful if you do things in a different order -- some sections require previous sections to be completed.

My Use-Case

There are many types of servers and different use-cases. While I want this guide to be as generic as possible, there will be some things that may not apply to all/other use-cases. Use your best judgement when going through this guide.

To help put context to many of the topics covered in this guide, my use-case/configuration is:

  • A desktop class computer...
  • With a single NIC...
  • Connected to a consumer grade router...
  • Getting a dynamic WAN IP provided by the ISP...
  • With WAN+LAN on IPV4...
  • And LAN using NAT...
  • That I want to be able to SSH to remotely from unknown computers and unknown locations (i.e. a friend's house).

Editing Configuration Files - For The Lazy

I am very lazy and do not like to edit files by hand if I don't need to. I also assume everyone else is just like me. :)

So, when and where possible, I have provided code snippets to quickly do what is needed, like add or change a line in a configuration file.

The code snippets use basic commands like echo, cat, sed, awk, and grep. How the code snippets work, like what each command/part does, is out of scope for this guide -- the man pages are your friend.

Note: The code snippets do not validate/verify the change went through -- i.e. the line was actually added or changed. I'll leave the verifying part in your capable hands. The steps in this guide do include taking backups of all files that will be changed.
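A minimal sketch of what these snippets look like, using a hypothetical file and setting:

# back up the file first (the guide's standard practice)
sudo cp --archive /etc/example.conf /etc/example.conf-COPY-$(date +"%Y%m%d%H%M%S")

# change any existing 'SomeSetting ...' line (commented or not) to 'SomeSetting yes'
sudo sed -i -r -e 's/^#?\s*SomeSetting\s+.*/SomeSetting yes/' /etc/example.conf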

Not all changes can be automated with code snippets. Those changes need good, old fashioned, manual editing. For example, you can't just append a line to an INI type file. Use your favorite Linux text editor.

Contributing

I wanted to put this guide on GitHub to make it easy to collaborate. The more folks that contribute, the better and more complete this guide will become.

To contribute you can fork and submit a pull request or submit a new issue.

Before You Start

Identify Your Principles

Before you start you will want to identify what your Principles are. What is your threat model? Some things to think about:

  • Why do you want to secure your server?
  • How much security do you want or not want?
  • How much convenience are you willing to compromise for security and vice-versa?
  • What are the threats you want to protect against? What are the specifics to your situation? For example:
    • Is physical access to your server/network a possible attack vector?
    • Will you be opening ports on your router so you can access your server from outside your home?
    • Will you be hosting a file share on your server that will be mounted on a desktop class machine? What is the possibility of the desktop machine getting infected and, in turn, infecting the server?
  • Do you have a means of recovering if your security implementation locks you out of your own server? For example, you disabled root login or password protected GRUB.

These are just a few things to think about. Before you start securing your server you will want to understand what you're trying to protect against and why so you know what you need to do.

Picking A Linux Distribution

This guide is intended to be distribution agnostic so users can use any distribution they want. With that said, there are a few things to keep in mind:

You want a distribution that...

  • ...is stable. Unless you like debugging issues at 2 AM, you don't want an unattended upgrade, or a manual package/system update, to render your server inoperable. But this also means you're okay with not running the latest, greatest, bleeding edge software.
  • ...stays up-to-date with security patches. You can secure everything on your server, but if the core OS or applications you're running have known vulnerabilities, you'll never be safe.
  • ...you're familiar with. If you don't know Linux, I would advise you play around with one before you try to secure it. You should be comfortable with it and know your way around, like how to install software, where configuration files are, etc...
  • ...is well supported. Even the most seasoned admin needs help every now and then. Having a place to go for help will save your sanity.

Installing Linux

Installing Linux is out-of-scope for this guide because each distribution does it differently and the installation instructions are usually well documented. If you need help, start with your distribution's documentation. Regardless of the distribution, the high-level process usually goes like so:

  1. download the ISO
  2. burn/copy/transfer it to your install medium (e.g. a CD or USB stick)
  3. boot your server from your install medium
  4. follow the prompts to install

Where applicable, use the expert install option so you have tighter control of what is running on your server. Only install what you absolutely need. I, personally, do not install anything other than SSH. Also, tick the Disk Encryption option.

Pre/Post Installation Requirements

  • If you're opening ports on your router so you can access your server from the outside, disable the port forwarding until your system is up and secured.
  • Unless you're doing everything physically connected to your server, you'll need remote access so be sure SSH works.
  • Keep your system up-to-date (i.e. sudo apt update && sudo apt upgrade on Debian based systems).
  • Make sure you perform any tasks specific to your setup like:
    • Configuring network
    • Configuring mount points in /etc/fstab
    • Creating the initial user accounts
    • Installing core software you'll want like man
    • Etc...
  • Your server will need to be able to send e-mails so you can get important security alerts. If you're not setting up a mail server check Gmail and Exim4 As MTA With Implicit TLS.
  • I would also recommend you go through the CIS Benchmarks before you start with this guide.

Other Important Notes

  • This guide is being written and tested on Debian. Most things below should work on other distributions. If you find something that does not, please contact me. The main thing that separates each distribution will be its package management system. Since I use Debian, I will provide the appropriate apt commands that should work on all Debian based distributions. If someone is willing to provide the respective commands for other distributions, I will add them.
  • File paths and settings also may differ slightly -- check with your distribution's documentation if you have issues.
  • Read the whole guide before you start. Your use-case and/or principles may call for not doing something or for changing the order.
  • Do not blindly copy-and-paste without understanding what you're pasting. Some commands will need to be modified for your needs before they'll work -- usernames for example.

The SSH Server

Important Note Before You Make SSH Changes

It is highly advised you keep a 2nd terminal open to your server before you make and apply SSH configuration changes. This way if you lock yourself out of your 1st terminal session, you still have one session connected so you can fix it.

Thank you to Sonnenbrand for this idea.

SSH Public/Private Keys

Why

Using SSH public/private keys is more secure than using a password. It also makes it easier and faster to connect to your server because you don't have to enter a password.

How It Works

Check the references below for more details but, at a high level, public/private keys work by using a pair of keys to verify identity.

  1. One key, the public key, can only encrypt data, not decrypt it
  2. The other key, the private key, can decrypt the data

For SSH, a public and private key pair is created on the client. You want to keep both keys secure, especially the private key. Even though the public key is meant to be public, it is wise to make sure neither key falls into the wrong hands.

When you connect to an SSH server, SSH will look for a public key that matches the client you're connecting from in the file ~/.ssh/authorized_keys on the server you're connecting to. Notice the file is in the home folder of the ID you're trying to connect to. So, after creating the public key, you need to append it to ~/.ssh/authorized_keys. One approach is to copy it to a USB stick and physically transfer it to the server. Another approach is to use ssh-copy-id to transfer and append the public key.
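If you'd rather not use ssh-copy-id, a minimal sketch of the manual approach, run from the client (this assumes password authentication is still enabled on the server):

# append the client's public key to the server's authorized_keys over SSH
cat ~/.ssh/id_ed25519.pub | ssh user@server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'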

After the keys have been created and the public key has been appended to ~/.ssh/authorized_keys on the host, SSH uses the public and private keys to verify identity and then establish a secure connection. How identity is verified is a complicated process but Digital Ocean has a very nice write-up of how it works. At a high level, identity is verified by the server encrypting a challenge message with the public key, then sending it to the client. If the client cannot decrypt the challenge message with the private key, the identity can't be verified and a connection will not be established.

Key pairs are considered more secure because you need the private key to establish an SSH connection. If you set PasswordAuthentication no in /etc/ssh/sshd_config, then SSH won't let you connect without the private key.

You can also set a pass-phrase for the keys which would require you to enter the key pass-phrase when connecting using public/private keys. Keep in mind doing this means you can't use the key for automation because you'll have no way to send the passphrase in your scripts. ssh-agent is a program that is shipped in many Linux distros (and usually already running) that will allow you to hold your unencrypted private key in memory for a configurable duration. Simply run ssh-add and it will prompt you for your passphrase. You will not be prompted for your passphrase again until the configurable duration has passed.
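For example, to hold the key in the agent for one hour:

# start the agent if it isn't already running, then load the key for 3600 seconds
eval "$(ssh-agent)"
ssh-add -t 3600 ~/.ssh/id_ed25519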

We will be using Ed25519 keys which, according to https://linux-audit.com/:

It is using an elliptic curve signature scheme, which offers better security than ECDSA and DSA. At the same time, it also has good performance.

Goals

  • Ed25519 public/private SSH keys:
    • private key on your client
    • public key on your server

Notes

  • You'll need to do this step for every computer and account you'll be connecting to your server from/as.

References

Steps

From the computer you're going to use to connect to your server, the client, not the server itself, create an Ed25519 key with ssh-keygen:

ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/user/.ssh/id_ed25519):
Created directory '/home/user/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_ed25519.
Your public key has been saved in /home/user/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:F44D4dr2zoHqgj0i2iVIHQ32uk/Lx4P+raayEAQjlcs user@client
The key's randomart image is:
+--[ED25519 256]--+
|xxxx  x          |
|o.o +. .         |
| o o oo   .      |
|. E oo . o .     |
| o o. o S o      |
|... .. o o       |
|.+....+ o        |
|+.=++o.B..       |
|+..=**=o=.       |
+----[SHA256]-----+

Note: If you set a passphrase, you'll need to enter it every time you connect to your server using this key, unless you're using ssh-agent.

Now you need to append the public key ~/.ssh/id_ed25519.pub from your client to the ~/.ssh/authorized_keys file on your server. Since we're presumably still at home on the LAN, we're probably safe from MITM attacks, so we will use ssh-copy-id to transfer and append the public key:

ssh-copy-id user@server
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/user/.ssh/id_ed25519.pub"
The authenticity of host 'host (192.168.1.96)' can't be established.
ECDSA key fingerprint is SHA256:QaDQb/X0XyVlogh87sDXE7MR8YIK7ko4wS5hXjRySJE.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
user@host's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'user@host'"
and check to make sure that only the key(s) you wanted were added.

Now would be a good time to perform any tasks specific to your setup.

Create SSH Group For AllowGroups

Why

To make it easy to control who can SSH to the server. By using a group, we can quickly add/remove accounts from the group to allow or disallow SSH access to the server.

How It Works

We will use the AllowGroups option in SSH's configuration file /etc/ssh/sshd_config to tell the SSH server to only allow users to SSH in if they are a member of a certain UNIX group. Anyone not in the group will not be able to SSH in.

Goals

Notes

References

  • man groupadd
  • man usermod

Steps

Create a group:

sudo groupadd sshusers

Add account(s) to the group:

sudo usermod -a -G sshusers user1
sudo usermod -a -G sshusers user2
sudo usermod -a -G sshusers ...

You'll need to do this for every account on your server that needs SSH access.
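To verify the memberships took effect, list the group:

getent group sshusers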

Secure /etc/ssh/sshd_config

Why

SSH is a door into your server. This is especially true if you are opening ports on your router so you can SSH to your server from outside your home network. If it is not secured properly, a bad-actor could use it to gain unauthorized access to your system.

How It Works

/etc/ssh/sshd_config is the default configuration file that the SSH server uses. We will use this file to tell the SSH server what options it should use.

Goals

  • a secure SSH configuration

Notes

References

Steps

Make a backup of OpenSSH server's configuration file /etc/ssh/sshd_config and remove comments to make it easier to read:

sudo cp --archive /etc/ssh/sshd_config /etc/ssh/sshd_config-COPY-$(date +"%Y%m%d%H%M%S")
sudo sed -i -r -e '/^#|^$/ d' /etc/ssh/sshd_config

Edit /etc/ssh/sshd_config then find and edit or add these settings that should be applied regardless of your configuration/setup:

Note: SSH does not like duplicate contradicting settings. For example, if you have ChallengeResponseAuthentication no and then ChallengeResponseAuthentication yes, SSH will respect the first one and ignore the second. Your /etc/ssh/sshd_config file may already have some of the settings/lines below. To avoid issues you will need to manually go through your /etc/ssh/sshd_config file and address any duplicate contradicting settings.

########################################################################################################
# start settings from https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67 as of 2019-01-01
########################################################################################################

# Supported HostKey algorithms by order of preference.
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256

Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

# LogLevel VERBOSE logs user's key fingerprint on login. Needed to have a clear audit track of which key was used to log in.
LogLevel VERBOSE

# Use kernel sandbox mechanisms where possible in unprivileged processes
# Systrace on OpenBSD, Seccomp on Linux, seatbelt on MacOSX/Darwin, rlimit elsewhere.
# Note: This setting is deprecated in OpenSSH 7.5 (https://www.openssh.com/txt/release-7.5)
# UsePrivilegeSeparation sandbox

########################################################################################################
# end settings from https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67 as of 2019-01-01
########################################################################################################

# don't let users set environment variables
PermitUserEnvironment no

# Log sftp level file access (read/write/etc.) that would not be easily logged otherwise.
Subsystem sftp  internal-sftp -f AUTHPRIV -l INFO

# only use the newer, more secure protocol
Protocol 2

# disable X11 forwarding as X11 is very insecure
# you really shouldn't be running X on a server anyway
X11Forwarding no

# disable port forwarding
AllowTcpForwarding no
AllowStreamLocalForwarding no
GatewayPorts no
PermitTunnel no

# don't allow login if the account has an empty password
PermitEmptyPasswords no

# ignore .rhosts and .shosts
IgnoreRhosts yes

# verify hostname matches IP
UseDNS yes

Compression no
TCPKeepAlive no
AllowAgentForwarding no
PermitRootLogin no

# don't allow .rhosts or /etc/hosts.equiv
HostbasedAuthentication no

Then find and edit or add these settings, and set values as per your requirements:

Setting                | Valid Values                             | Example                    | Description                                                    | Notes
AllowGroups            | local UNIX group name                    | AllowGroups sshusers       | group to allow SSH access to                                   |
ClientAliveCountMax    | number                                   | ClientAliveCountMax 0      | maximum number of client alive messages sent without response  |
ClientAliveInterval    | number of seconds                        | ClientAliveInterval 300    | timeout in seconds before a response request                   |
ListenAddress          | space separated list of local addresses | ListenAddress 0.0.0.0 or ListenAddress 192.168.1.100 | local addresses sshd should listen on | See Issue #1 for important details.
LoginGraceTime         | number of seconds                        | LoginGraceTime 30          | time in seconds before login times-out                         |
MaxAuthTries           | number                                   | MaxAuthTries 2             | maximum allowed attempts to login                              |
MaxSessions            | number                                   | MaxSessions 2              | maximum number of open sessions                                |
MaxStartups            | number                                   | MaxStartups 2              | maximum number of login sessions                               |
PasswordAuthentication | yes or no                                | PasswordAuthentication no  | if login with a password is allowed                            |
Port                   | any open/available port number           | Port 22                    | port that sshd should listen on                                |

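Putting the example values together, a resulting block in /etc/ssh/sshd_config might look like this (every value below is illustrative; set them per your requirements):

AllowGroups sshusers
ClientAliveCountMax 0
ClientAliveInterval 300
LoginGraceTime 30
MaxAuthTries 2
MaxSessions 2
MaxStartups 2
PasswordAuthentication no
Port 22
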
Check man sshd_config for more details on what these settings mean.

Make sure there are no duplicate settings that contradict each other. The below command should not have any output.

awk 'NF && $1!~/^(#|HostKey)/{print $1}' /etc/ssh/sshd_config | sort | uniq -c | grep -v ' 1 '

Restart ssh:

sudo service sshd restart

You can verify the configuration took effect with sshd -T and review the output:

sudo sshd -T
port 22
addressfamily any
listenaddress [::]:22
listenaddress 0.0.0.0:22
usepam yes
logingracetime 30
x11displayoffset 10
maxauthtries 2
maxsessions 2
clientaliveinterval 300
clientalivecountmax 0
streamlocalbindmask 0177
permitrootlogin no
ignorerhosts yes
ignoreuserknownhosts no
hostbasedauthentication no
...
subsystem sftp internal-sftp -f AUTHPRIV -l INFO
maxstartups 2:30:2
permittunnel no
ipqos lowdelay throughput
rekeylimit 0 0
permitopen any

Remove Short Diffie-Hellman Keys

Why

Per Mozilla's OpenSSH guidelines for OpenSSH 6.7+, "all Diffie-Hellman moduli in use should be at least 3072-bit-long".

The Diffie-Hellman algorithm is used by SSH to establish a secure connection. The larger the modulus (key size), the stronger the encryption.

Goals

  • remove all Diffie-Hellman keys that are less than 3072 bits long

References

Steps

Make a backup of SSH's moduli file /etc/ssh/moduli:

sudo cp --archive /etc/ssh/moduli /etc/ssh/moduli-COPY-$(date +"%Y%m%d%H%M%S")

Remove short moduli:

sudo awk '$5 >= 3071' /etc/ssh/moduli | sudo tee /etc/ssh/moduli.tmp
sudo mv /etc/ssh/moduli.tmp /etc/ssh/moduli
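
To double-check that no short moduli remain, the following should produce no output (this assumes the standard moduli file layout, where the fifth column holds the modulus size):

sudo awk '$5 < 3071' /etc/ssh/moduli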

2FA/MFA for SSH

Why

Even though SSH is a pretty good security guard for your doors and windows, it is still a visible door that bad-actors can see and try to brute-force in. Fail2ban will monitor for these brute-force attempts but there is no such thing as being too secure. Requiring two factors adds an extra layer of security.

Using Two Factor Authentication (2FA) / Multi Factor Authentication (MFA) requires anyone entering to have two keys to enter which makes it harder for bad actors. The two keys are:

  1. Their password
  2. A 6 digit token that changes every 30 seconds

Without both keys, they won't be able to get in.

Why Not

Many folks might find the experience cumbersome or annoying. And access to your system becomes dependent on the accompanying authenticator app that generates the codes.

How It Works

On Linux, PAM is responsible for authentication. There are four tasks to PAM that you can read about at https://en.wikipedia.org/wiki/Linux_PAM. This section talks about the authentication task.

When you log into a server, be it directly from the console or via SSH, the door you came through will send the request to the authentication task of PAM, and PAM will ask for and verify your password. You can customize the rules each door uses. For example, you could have one set of rules when logging in directly from the console and another set of rules for logging in via SSH.

This section will alter the authentication rules for when logging in via SSH to require both a password and a 6 digit code.

We will use Google's libpam-google-authenticator PAM module to create and verify a TOTP key. https://fastmail.blog/2016/07/22/how-totp-authenticator-apps-work/ and https://jemurai.com/2018/10/11/how-it-works-totp-based-mfa/ have very good writeups of how TOTP works.

What we will do is tell the server's SSH PAM configuration to ask the user for their password and then their numeric token. PAM will then verify the user's password and, if it is correct, route the authentication request to libpam-google-authenticator, which will ask for and verify your 6 digit token. If, and only if, everything is good will the authentication succeed and the user be allowed to log in.
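
If you want to see how TOTP codes are generated before committing to the setup, the oathtool utility (from the separate oath-toolkit project; it is not used elsewhere in this guide) can compute the current 6 digit code from a base32 secret. The secret below is just the sample one from the transcript further down:

sudo apt install oathtool
oathtool --totp -b R3NVX3FFQKZROVX7AGLJUGGESY
# prints the current 6 digit code, matching what your phone app would show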

Goals

  • 2FA/MFA enabled for all SSH connections

Notes

  • Before you do this, you should have an idea of how 2FA/MFA works and you'll need an authenticator app on your phone to continue.
  • We'll use google-authenticator-libpam.
  • With the below configuration, a user will only need to enter their 2FA/MFA code if they are logging on with their password but not if they are using SSH public/private keys. Check the documentation on how to change this behavior to suite your requirements.

References

Steps

Install libpam-google-authenticator.

On Debian based systems:

sudo apt install libpam-google-authenticator

Make sure you're logged in as the ID you want to enable 2FA/MFA for and execute google-authenticator to create the necessary token data:

google-authenticator
Do you want authentication tokens to be time-based (y/n) y
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/user@host%3Fsecret%3DR4ZWX34FQKZROVX7AGLJ64684Y%26issuer%3Dhost

...

Your new secret key is: R3NVX3FFQKZROVX7AGLJUGGESY
Your verification code is 751419
Your emergency scratch codes are:
  12345678
  90123456
  78901234
  56789012
  34567890

Do you want me to update your "/home/user/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds. In order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with
poor time synchronization, you can increase the window from its default
size of +-1min (window size of 3) to about +-4min (window size of
17 acceptable tokens).
Do you want to do so? (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Notice this is not run as root.

Select the default option (y in most cases) for all the questions it asks and remember to save the emergency scratch codes.

Make a backup of PAM's SSH configuration file /etc/pam.d/sshd:

sudo cp --archive /etc/pam.d/sshd /etc/pam.d/sshd-COPY-$(date +"%Y%m%d%H%M%S")

Now we need to enable it as an authentication method for SSH by adding this line to /etc/pam.d/sshd:

auth       required     pam_google_authenticator.so nullok

Note: Check the google-authenticator-libpam documentation for what nullok means.

For the lazy:

echo -e "\nauth       required     pam_google_authenticator.so nullok         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/pam.d/sshd

Tell SSH to leverage it by adding or editing this line in /etc/ssh/sshd_config:

ChallengeResponseAuthentication yes

For the lazy:

sudo sed -i -r -e "s/^(challengeresponseauthentication .*)$/# \1         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/I" /etc/ssh/sshd_config
echo -e "\nChallengeResponseAuthentication yes         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/ssh/sshd_config

Restart ssh:

sudo service sshd restart

The Basics

Limit Who Can Use sudo

Why

sudo lets accounts run commands as other accounts, including root. We want to make sure that only the accounts we want can use sudo.

Goals

  • sudo privileges limited to those who are in a group we specify

Notes

Steps

Create a group:

sudo groupadd sudousers

Add account(s) to the group:

sudo usermod -a -G sudousers user1
sudo usermod -a -G sudousers user2
sudo usermod -a -G sudousers  ...

You'll need to do this for every account on your server that needs sudo privileges.

Make a backup of the sudo's configuration file /etc/sudoers:

sudo cp --archive /etc/sudoers /etc/sudoers-COPY-$(date +"%Y%m%d%H%M%S")

Edit sudo's configuration file /etc/sudoers:

sudo visudo

Tell sudo to only allow users in the sudousers group to use sudo by adding this line if it is not already there:

%sudousers   ALL=(ALL:ALL) ALL
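
To confirm an account picked up the new privileges, you can list them (user1 is a placeholder; the account may need to log in again for the new group membership to apply):

sudo -l -U user1
# User user1 may run the following commands on host:
#     (ALL : ALL) ALL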

Limit Who Can Use su

Why

su also lets accounts run commands as other accounts, including root. We want to make sure that only the accounts we want can use su.

Goals

  • su privileges limited to those who are in a group we specify

References

Steps

Create a group:

sudo groupadd suusers

Add account(s) to the group:

sudo usermod -a -G suusers user1
sudo usermod -a -G suusers user2
sudo usermod -a -G suusers  ...

You'll need to do this for every account on your server that needs su privileges.

Make it so only users in this group can execute /bin/su:

sudo dpkg-statoverride --update --add root suusers 4750 /bin/su
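
You can verify the override and the resulting permissions (the size and date in the ls output will differ on your system):

sudo dpkg-statoverride --list /bin/su
# root suusers 4750 /bin/su
ls -l /bin/su
# -rwsr-x--- 1 root suusers 63568 Jan 10  2019 /bin/su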

Run applications in a sandbox with FireJail

Why

For many applications, it is absolutely better to run them in a sandbox.

Sandboxing is highly suggested for browsers (even more so the closed-source ones) and e-mail clients.

Goals

  • confine applications in a jail (few safe directories) and block access to the rest of the system

References

Steps

Install the software:

sudo apt install firejail firejail-profiles

Note: for Debian 10 Stable, official Backport is suggested:

sudo apt install -t buster-backports firejail firejail-profiles

Allow an application (installed in /usr/bin or /bin) to run only in a sandbox (see a few examples below):

sudo ln -s /usr/bin/firejail /usr/local/bin/google-chrome-stable
sudo ln -s /usr/bin/firejail /usr/local/bin/firefox
sudo ln -s /usr/bin/firejail /usr/local/bin/chromium
sudo ln -s /usr/bin/firejail /usr/local/bin/evolution
sudo ln -s /usr/bin/firejail /usr/local/bin/thunderbird

Run the application as usual (via terminal or launcher) and check that it is running in a jail:

firejail --list

Allow a sandboxed app to run again as it was before (example: firefox)

sudo rm /usr/local/bin/firefox

NTP Client

Why

Many security protocols leverage the time. If your system time is incorrect, it could have negative impacts on your server. An NTP client can solve that problem by keeping your system time in-sync with global NTP servers.

How It Works

NTP stands for Network Time Protocol. In the context of this guide, an NTP client on the server is used to update the server time with the official time pulled from official servers. Check https://www.pool.ntp.org/en/ for all of the public NTP servers.

Goals

  • NTP client installed and keeping server time in-sync

References

Steps

Install ntp.

On Debian based systems:

sudo apt install ntp

Make a backup of the NTP client's configuration file /etc/ntp.conf:

sudo cp --archive /etc/ntp.conf /etc/ntp.conf-COPY-$(date +"%Y%m%d%H%M%S")

The default configuration, at least on Debian, is already pretty secure. The only thing we'll want to make sure of is that we're using the pool directive and not any server directives. The pool directive allows the NTP client to stop using a server if it is unresponsive or serving bad time. Do this by commenting out all server directives and adding the below to /etc/ntp.conf.

pool pool.ntp.org iburst

For the lazy:

sudo sed -i -r -e "s/^((server|pool).*)/# \1         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/ntp.conf
echo -e "\npool pool.ntp.org iburst         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/ntp.conf

Example /etc/ntp.conf:

driftfile /var/lib/ntp/ntp.drift
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited
restrict 127.0.0.1
restrict ::1
restrict source notrap nomodify noquery
pool pool.ntp.org iburst         # added by user on 2019-03-09 @ 10:23:35

Restart ntp:

sudo service ntp restart

Check the status of the ntp service:

sudo systemctl status ntp
● ntp.service - LSB: Start NTP daemon
   Loaded: loaded (/etc/init.d/ntp; generated; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-09 15:19:46 EST; 4s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1016 ExecStop=/etc/init.d/ntp stop (code=exited, status=0/SUCCESS)
  Process: 1028 ExecStart=/etc/init.d/ntp start (code=exited, status=0/SUCCESS)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/ntp.service
           └─1038 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 108:113

Mar 09 15:19:46 host ntpd[1038]: Listen and drop on 0 v6wildcard [::]:123
Mar 09 15:19:46 host ntpd[1038]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 2 lo 127.0.0.1:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 3 enp0s3 10.10.20.96:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 4 lo [::1]:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 5 enp0s3 [fe80::a00:27ff:feb6:ed8e%2]:123
Mar 09 15:19:46 host ntpd[1038]: Listening on routing socket on fd #22 for interface updates
Mar 09 15:19:47 host ntpd[1038]: Soliciting pool server 108.61.56.35
Mar 09 15:19:48 host ntpd[1038]: Soliciting pool server 69.89.207.199
Mar 09 15:19:49 host ntpd[1038]: Soliciting pool server 45.79.111.114

Check ntp's status:

sudo ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 pool.ntp.org    .POOL.          16 p    -   64    0    0.000    0.000   0.000
*lithium.constan 198.30.92.2      2 u    -   64    1   19.900    4.894   3.951
 ntp2.wiktel.com 212.215.1.157    2 u    2   64    1   48.061   -0.431   0.104

Securing /proc

Why

To quote https://linux-audit.com/linux-system-hardening-adding-hidepid-to-proc/:

When looking in /proc you will discover a lot of files and directories. Many of them are just numbers, which represent the information about a particular process ID (PID). By default, Linux systems are deployed to allow all local users to see this all information. This includes process information from other users. This could include sensitive details that you may not want to share with other users. By applying some filesystem configuration tweaks, we can change this behavior and improve the security of the system.

Note: This may break on some systemd systems. Please see https://github.com/imthenachoman/How-To-Secure-A-Linux-Server/issues/37 for more information. Thanks to nlgranger for sharing.

Goals

  • /proc mounted with hidepid=2 so users can only see information about their processes

References

Steps

Make a backup of /etc/fstab:

sudo cp --archive /etc/fstab /etc/fstab-COPY-$(date +"%Y%m%d%H%M%S")

Add this line to /etc/fstab to have /proc mounted with hidepid=2:

proc     /proc     proc     defaults,hidepid=2     0     0

For the lazy:

echo -e "\nproc     /proc     proc     defaults,hidepid=2     0     0         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/fstab

Reboot the system:

sudo reboot now

Note: Alternatively, you can remount /proc without rebooting with sudo mount -o remount,hidepid=2 /proc
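
Either way, you can confirm the option is active afterwards (the exact mount options shown will vary by system):

mount | grep -w /proc
# proc on /proc type proc (rw,relatime,hidepid=2)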

Force Accounts To Use Secure Passwords

Why

By default, accounts can use any password they want, including bad ones. pwquality/pam_pwquality addresses this security gap by providing "a way to configure the default password quality requirements for the system passwords" and checking "its strength against a system dictionary and a set of rules for identifying poor choices."

How It Works

On Linux, PAM is responsible for authentication. There are four tasks to PAM that you can read about at https://en.wikipedia.org/wiki/Linux_PAM. This section talks about the password task.

When there is a need to set or change an account password, the password task of PAM handles the request. In this section we will tell PAM's password task to pass the requested new password to libpam-pwquality to make sure it meets our requirements. If the requirements are met it is used/set; if it does not meet the requirements it errors and lets the user know.

Goals

  • enforced strong passwords

Steps

Install libpam-pwquality.

On Debian based systems:

sudo apt install libpam-pwquality

Make a backup of PAM's password configuration file /etc/pam.d/common-password:

sudo cp --archive /etc/pam.d/common-password /etc/pam.d/common-password-COPY-$(date +"%Y%m%d%H%M%S")

Tell PAM to use libpam-pwquality to enforce strong passwords by editing the file /etc/pam.d/common-password and change the line that starts like this:

password        requisite                       pam_pwquality.so

to this:

password        requisite                       pam_pwquality.so retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1 maxrepeat=3 gecoscheck

The above options are:

  • retry=3 = prompt user 3 times before returning with error.
  • minlen=10 = the minimum length of the password, factoring in any credits (or debits) from these:
    • dcredit=-1 = must have at least one digit
    • ucredit=-1 = must have at least one upper case letter
    • lcredit=-1 = must have at least one lower case letter
    • ocredit=-1 = must have at least one non-alphanumeric character
  • difok=3 = at least 3 characters from the new password cannot have been in the old password
  • maxrepeat=3 = allow a maximum of 3 repeated characters
  • gecoscheck = do not allow passwords containing words from the account's GECOS field (e.g. the user's name)

For the lazy:

sudo sed -i -r -e "s/^(password\s+requisite\s+pam_pwquality.so)(.*)$/# \1\2         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n\1 retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1 maxrepeat=3 gecoscheck         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/pam.d/common-password
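
To test the rules without changing a real password, the pwscore utility (from the separate libpwquality-tools package, an extra install) scores a candidate password against the system policy and explains rejections; the output below is illustrative:

sudo apt install libpwquality-tools
echo "password1" | pwscore
# Password quality check failed:
#  The password fails the dictionary check - it is based on a dictionary word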

Automatic Security Updates and Alerts

Why

It is important to keep a server updated with the latest critical security patches and updates. Otherwise you're at risk of known security vulnerabilities that bad-actors could use to gain unauthorized access to your server.

Unless you plan on checking your server every day, you'll want a way to automatically update the system and/or get emails about available updates.

You don't want to do all updates because with every update there is a risk of something breaking. It is important to do the critical updates but everything else can wait until you have time to do it manually.

Why Not

Automatic and unattended updates may break your system and you may not be near your server to fix it. This would be especially problematic if it broke your SSH access.

Notes

  • Each distribution manages packages and updates differently. So far I only have steps for Debian based systems.
  • Your server will need a way to send e-mails for this to work

Goals

  • Automatic, unattended, updates of critical security patches
  • Automatic emails of remaining pending updates

Debian Based Systems

How It Works

On Debian based systems you can use:

  • unattended-upgrades to automatically do system updates you want (i.e. critical security updates)
  • apt-listchanges to get details about package changes before they are installed/upgraded
  • apticron to get emails for pending package updates

We will use unattended-upgrades to apply critical security patches. We can also apply stable updates since they've already been thoroughly tested by the Debian community.

References

Steps

Install unattended-upgrades, apt-listchanges, and apticron:

sudo apt install unattended-upgrades apt-listchanges apticron

Now we need to configure unattended-upgrades to automatically apply the updates. This is typically done by editing the files /etc/apt/apt.conf.d/20auto-upgrades and /etc/apt/apt.conf.d/50unattended-upgrades that were created by the packages. However, because these files may get overwritten by a future update, we'll create a new file instead. Create the file /etc/apt/apt.conf.d/51myunattended-upgrades and add this:

// Enable the update/upgrade script (0=disable)
APT::Periodic::Enable "1";

// Do "apt-get update" automatically every n-days (0=disable)
APT::Periodic::Update-Package-Lists "1";

// Do "apt-get upgrade --download-only" every n-days (0=disable)
APT::Periodic::Download-Upgradeable-Packages "1";

// Do "apt-get autoclean" every n-days (0=disable)
APT::Periodic::AutocleanInterval "7";

// Send report mail to root
//     0:  no report             (or null string)
//     1:  progress report       (actually any string)
//     2:  + command outputs     (remove -qq, remove 2>/dev/null, add -d)
//     3:  + trace on
APT::Periodic::Verbose "2";
APT::Periodic::Unattended-Upgrade "1";

// Automatically upgrade packages from these
Unattended-Upgrade::Origins-Pattern {
      "o=Debian,a=stable";
      "o=Debian,a=stable-updates";
      "origin=Debian,codename=${distro_codename},label=Debian-Security";
};

// You can specify your own packages to NOT automatically upgrade here
Unattended-Upgrade::Package-Blacklist {
};

// Run dpkg --force-confold --configure -a if an unclean dpkg state is detected, to ensure that updates get installed even when the system was interrupted during a previous run
Unattended-Upgrade::AutoFixInterruptedDpkg "true";

// Perform the upgrade when the machine is running because we won't be shutting our server down often
Unattended-Upgrade::InstallOnShutdown "false";

// Send an email to this address with information about the packages upgraded.
Unattended-Upgrade::Mail "root";

// Always send an e-mail
Unattended-Upgrade::MailOnlyOnError "false";

// Remove all unused dependencies after the upgrade has finished
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Remove any new unused dependencies after the upgrade has finished
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";

// Automatically reboot WITHOUT CONFIRMATION if the file /var/run/reboot-required is found after the upgrade.
Unattended-Upgrade::Automatic-Reboot "true";

// Automatically reboot even if users are logged in.
Unattended-Upgrade::Automatic-Reboot-WithUsers "true";

Run a dry-run of unattended-upgrades to make sure your configuration file is okay:

sudo unattended-upgrade -d --dry-run

If everything is okay, you can let it run whenever it's scheduled to or force a run with unattended-upgrade -d.
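
You can also review what unattended-upgrades has done after the fact; on Debian based systems it keeps its own logs under /var/log/unattended-upgrades/ (file names may vary by version):

ls /var/log/unattended-upgrades/
# unattended-upgrades.log  unattended-upgrades-dpkg.log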

Configure apt-listchanges to your liking:

sudo dpkg-reconfigure apt-listchanges

For apticron, the default settings are good enough but you can check them in /etc/apticron/apticron.conf if you want to change them. For example, my configuration looks like this:

EMAIL="root"
NOTIFY_NO_UPDATES="1"

More Secure Random Entropy Pool (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install rng-tools.

On Debian based systems:

sudo apt install rng-tools

Now we need to set the hardware device used to generate random numbers by adding this to /etc/default/rng-tools:

HRNGDEVICE=/dev/urandom

For the lazy:

echo "HRNGDEVICE=/dev/urandom" | sudo tee -a /etc/default/rng-tools

Restart the service:

sudo systemctl stop rng-tools.service
sudo systemctl start rng-tools.service

Test randomness:
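
This part of the guide is still a work in progress, but one common sanity check is rngtest, which ships with rng-tools and runs FIPS 140-2 tests on blocks read from the pool; 1000 blocks is just an illustrative amount:

cat /dev/random | rngtest -c 1000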

The Network

Firewall With UFW (Uncomplicated Firewall)

Why

Call me paranoid, and you don't have to agree, but I want to deny all traffic in and out of my server except what I explicitly allow. Why would my server be sending traffic out that I don't know about? And why would external traffic be trying to access my server if I don't know who or what it is? When it comes to good security, my opinion is to reject/deny by default, and allow by exception.

Of course, if you disagree, that is totally fine and you can configure UFW to suit your needs.

Either way, ensuring that only traffic we explicitly allow gets through is the job of a firewall.

How It Works

The Linux kernel provides capabilities to monitor and control network traffic. These capabilities are exposed to the end-user through firewall utilities. On Linux, the most common firewall is iptables. However, iptables is rather complicated and confusing (IMHO). This is where UFW comes in. Think of UFW as a front-end to iptables. It simplifies the process of managing the iptables rules that tell the Linux kernel what to do with network traffic.

UFW works by letting you configure rules that:

  • allow or deny
  • input or output traffic
  • to or from ports

You can create rules by explicitly specifying the ports or with application configurations that specify the ports.

Goals

  • all network traffic, input and output, blocked except those we explicitly allow

Notes

  • As you install other programs, you'll need to enable the necessary ports/applications.

References

Steps

Install ufw.

On Debian based systems:

sudo apt install ufw

Deny all outgoing traffic:

sudo ufw default deny outgoing comment 'deny all outgoing traffic'
Default outgoing policy changed to 'deny'
(be sure to update your rules accordingly)

If you are not as paranoid as me, and don't want to deny all outgoing traffic, you can allow it instead:

sudo ufw default allow outgoing comment 'allow all outgoing traffic'

Deny all incoming traffic:

sudo ufw default deny incoming comment 'deny all incoming traffic'

Obviously we want SSH connections in:

sudo ufw limit in ssh comment 'allow SSH connections in'
Rules updated
Rules updated (v6)

Allow additional traffic as per your needs. Some common use-cases:

# allow traffic out on port 53 -- DNS
sudo ufw allow out 53 comment 'allow DNS calls out'

# allow traffic out on port 123 -- NTP
sudo ufw allow out 123 comment 'allow NTP out'

# allow traffic out for HTTP, HTTPS, or FTP
# apt might need these depending on which sources you're using
sudo ufw allow out http comment 'allow HTTP traffic out'
sudo ufw allow out https comment 'allow HTTPS traffic out'
sudo ufw allow out ftp comment 'allow FTP traffic out'

# allow whois
sudo ufw allow out whois comment 'allow whois'

# allow traffic out on ports 67/68 -- the DHCP client
# you only need this if you're using DHCP
sudo ufw allow out 67 comment 'allow the DHCP client to update'
sudo ufw allow out 68 comment 'allow the DHCP client to update'

Start ufw:

sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup

If you want to see a status:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     LIMIT       Anywhere                   # allow SSH connections in
22/tcp (v6)                LIMIT       Anywhere (v6)              # allow SSH connections in

53                         ALLOW OUT   Anywhere                   # allow DNS calls out
123                        ALLOW OUT   Anywhere                   # allow NTP out
80/tcp                     ALLOW OUT   Anywhere                   # allow HTTP traffic out
443/tcp                    ALLOW OUT   Anywhere                   # allow HTTPS traffic out
21/tcp                     ALLOW OUT   Anywhere                   # allow FTP traffic out
Mail submission            ALLOW OUT   Anywhere                   # allow mail out
43/tcp                     ALLOW OUT   Anywhere                   # allow whois
53 (v6)                    ALLOW OUT   Anywhere (v6)              # allow DNS calls out
123 (v6)                   ALLOW OUT   Anywhere (v6)              # allow NTP out
80/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow HTTP traffic out
443/tcp (v6)               ALLOW OUT   Anywhere (v6)              # allow HTTPS traffic out
21/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow FTP traffic out
Mail submission (v6)       ALLOW OUT   Anywhere (v6)              # allow mail out
43/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow whois

or

sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), deny (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     LIMIT IN    Anywhere                   # allow SSH connections in
22/tcp (v6)                LIMIT IN    Anywhere (v6)              # allow SSH connections in

53                         ALLOW OUT   Anywhere                   # allow DNS calls out
123                        ALLOW OUT   Anywhere                   # allow NTP out
80/tcp                     ALLOW OUT   Anywhere                   # allow HTTP traffic out
443/tcp                    ALLOW OUT   Anywhere                   # allow HTTPS traffic out
21/tcp                     ALLOW OUT   Anywhere                   # allow FTP traffic out
587/tcp (Mail submission)  ALLOW OUT   Anywhere                   # allow mail out
43/tcp                     ALLOW OUT   Anywhere                   # allow whois
53 (v6)                    ALLOW OUT   Anywhere (v6)              # allow DNS calls out
123 (v6)                   ALLOW OUT   Anywhere (v6)              # allow NTP out
80/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow HTTP traffic out
443/tcp (v6)               ALLOW OUT   Anywhere (v6)              # allow HTTPS traffic out
21/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow FTP traffic out
587/tcp (Mail submission (v6)) ALLOW OUT   Anywhere (v6)              # allow mail out
43/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow whois

Default Applications

ufw ships with some default applications. You can see them with:

sudo ufw app list
Available applications:
  AIM
  Bonjour
  CIFS
  DNS
  Deluge
  IMAP
  IMAPS
  IPP
  KTorrent
  Kerberos Admin
  Kerberos Full
  Kerberos KDC
  Kerberos Password
  LDAP
  LDAPS
  LPD
  MSN
  MSN SSL
  Mail submission
  NFS
  OpenSSH
  POP3
  POP3S
  PeopleNearby
  SMTP
  SSH
  Socks
  Telnet
  Transmission
  Transparent Proxy
  VNC
  WWW
  WWW Cache
  WWW Full
  WWW Secure
  XMPP
  Yahoo
  qBittorrent
  svnserve

To get details about the app, like which ports it includes, type:

sudo ufw app info [app name]
sudo ufw app info DNS
Profile: DNS
Title: Internet Domain Name Server
Description: Internet Domain Name Server

Port:
  53

Custom Application

If you don't want to create rules by explicitly providing the port number(s), you can create your own application configurations. To do this, create a file in /etc/ufw/applications.d.

For example, here is what you would use for Plex:

cat /etc/ufw/applications.d/plexmediaserver
[PlexMediaServer]
title=Plex Media Server
description=This opens up PlexMediaServer for http (32400), upnp, and autodiscovery.
ports=32469/tcp|32413/udp|1900/udp|32400/tcp|32412/udp|32410/udp|32414/udp|32400/udp

Then you can enable it like any other app:

sudo ufw allow plexmediaserver

iptables Intrusion Detection And Prevention with PSAD

Why

Even if you have a firewall to guard your doors, it is possible to try brute-forcing your way in through any of the guarded doors. We want to monitor all network activity to detect potential intrusion attempts, such as repeated attempts to get in, and block them.

How It Works

I can't explain it any better than user FINESEC from https://serverfault.com/ did at: https://serverfault.com/a/447604/289829.

Fail2BAN scans log files of various applications such as apache, ssh or ftp and automatically bans IPs that show the malicious signs such as automated login attempts. PSAD on the other hand scans iptables and ip6tables log messages (typically /var/log/messages) to detect and optionally block scans and other types of suspect traffic such as DDoS or OS fingerprinting attempts. It's ok to use both programs at the same time because they operate on different level.

And, since we're already using UFW, we'll follow the awesome instructions by netson at https://gist.github.com/netson/c45b2dc4e835761fbccc to make PSAD work with UFW.

References

Steps

Install psad.

On Debian based systems:

sudo apt install psad

Make a backup of psad's configuration file /etc/psad/psad.conf:

sudo cp --archive /etc/psad/psad.conf /etc/psad/psad.conf-COPY-$(date +"%Y%m%d%H%M%S")

Review and update configuration options in /etc/psad/psad.conf. Pay special attention to these:

Setting                | Set To
EMAIL_ADDRESSES        | your email address(es)
HOSTNAME               | your server's hostname
ENABLE_PSADWATCHD      | ENABLE_PSADWATCHD Y;
ENABLE_AUTO_IDS        | ENABLE_AUTO_IDS Y;
ENABLE_AUTO_IDS_EMAILS | ENABLE_AUTO_IDS_EMAILS Y;
EXPECT_TCP_OPTIONS     | EXPECT_TCP_OPTIONS Y;

Check psad's documentation at http://www.cipherdyne.org/psad/docs/config.html for more details on the configuration file.

Now we need to make some changes to ufw so it works with psad, by telling ufw to log all traffic so psad can analyze it. Do this by editing the two files listed below and adding the lines shown after them at the end of each file, but before the COMMIT line.

Make backups:

sudo cp --archive /etc/ufw/before.rules /etc/ufw/before.rules-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/ufw/before6.rules /etc/ufw/before6.rules-COPY-$(date +"%Y%m%d%H%M%S")

Edit the files:

  • /etc/ufw/before.rules
  • /etc/ufw/before6.rules
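
The lines to add to each file, per the netson gist referenced above, tell iptables to log all INPUT and FORWARD traffic so psad has something to analyze:

# log all traffic so psad can analyze
-A INPUT -j LOG --log-tcp-options
-A FORWARD -j LOG --log-tcp-options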

Now we need to reload/restart ufw and psad for the changes to take effect:

sudo ufw reload

sudo psad -R
sudo psad --sig-update
sudo psad -H

Analyze iptables rules for errors:

sudo psad --fw-analyze
[+] Parsing INPUT chain rules.
[+] Parsing INPUT chain rules.
[+] Firewall config looks good.
[+] Completed check of firewall ruleset.
[+] Results in /var/log/psad/fw_check
[+] Exiting.

Note: If there were any issues you will get an e-mail with the error.

Check the status of psad:

sudo psad --Status
[-] psad: pid file /var/run/psad/psadwatchd.pid does not exist for psadwatchd on vm
[+] psad_fw_read (pid: 3444)  %CPU: 0.0  %MEM: 2.2
    Running since: Sat Feb 16 01:03:09 2019

[+] psad (pid: 3435)  %CPU: 0.2  %MEM: 2.7
    Running since: Sat Feb 16 01:03:09 2019
    Command line arguments: [none specified]
    Alert email address(es): root@localhost

[+] Version: psad v2.4.3

[+] Top 50 signature matches:
        [NONE]

[+] Top 25 attackers:
        [NONE]

[+] Top 20 scanned ports:
        [NONE]

[+] iptables log prefix counters:
        [NONE]

    Total protocol packet counters:

[+] IP Status Detail:
        [NONE]

    Total scan sources: 0
    Total scan destinations: 0

[+] These results are available in: /var/log/psad/status.out

Application Intrusion Detection And Prevention With Fail2Ban

Why

UFW tells your server what doors to board up so nobody can see them, and what doors to allow authorized users through. PSAD monitors network activity to detect and prevent potential intrusions -- repeated attempts to get in.

But what about the applications/services your server is running, like SSH and Apache, where your firewall is configured to allow access in? Even though access may be allowed, that doesn't mean all access attempts are valid and harmless. What if someone tries to brute-force their way into a web-app you're running on your server? This is where Fail2ban comes in.

How It Works

Fail2ban monitors the logs of your applications (like SSH and Apache) to detect and prevent potential intrusions. It will monitor network traffic/logs and prevent intrusions by blocking suspicious activity (e.g. multiple successive failed connections in a short time-span).

Goals

  • network monitoring for suspicious activity with automatic banning of offending IPs

Notes

  • As of right now, the only thing running on this server is SSH so we'll want Fail2ban to monitor SSH and ban as necessary.
  • As you install other programs, you'll need to create/configure the appropriate jails and enable them.

References

Steps

Install fail2ban.

On Debian based systems:

sudo apt install fail2ban

We don't want to edit /etc/fail2ban/fail2ban.conf or /etc/fail2ban/jail.conf because a future update may overwrite those so we'll create a local copy instead. Create the file /etc/fail2ban/jail.local and add this to it after replacing [LAN SEGMENT] and [your email] with the appropriate values:

[DEFAULT]
# the IP address range we want to ignore
ignoreip = 127.0.0.1/8 [LAN SEGMENT]

# who to send e-mail to
destemail = [your e-mail]

# who is the email from
sender = [your e-mail]

# since we're using exim4 to send emails
mta = mail

# get email alerts
action = %(action_mwl)s

Note: Your server will need to be able to send e-mails so Fail2ban can let you know of suspicious activity and when it banned an IP.

We need to create a jail for SSH that tells fail2ban to look at SSH logs and use ufw to ban/unban IPs as needed. Create a jail for SSH by creating the file /etc/fail2ban/jail.d/ssh.local and adding this to it:

[sshd]
enabled = true
banaction = ufw
port = ssh
filter = sshd
logpath = %(sshd_log)s
maxretry = 5

For the lazy:

cat << EOF | sudo tee /etc/fail2ban/jail.d/ssh.local
[sshd]
enabled = true
banaction = ufw
port = ssh
filter = sshd
logpath = %(sshd_log)s
maxretry = 5
EOF

In the above we tell fail2ban to use ufw as the banaction. Fail2ban ships with an action configuration file for ufw; you can see it in /etc/fail2ban/action.d/ufw.conf.

Enable fail2ban:

sudo fail2ban-client start
sudo fail2ban-client reload
sudo fail2ban-client add sshd # This may fail on some systems if the sshd jail was added by default

To check the status:

sudo fail2ban-client status
Status
|- Number of jail:      1
`- Jail list:   sshd
sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     0
|  `- File list:        /var/log/auth.log
`- Actions
   |- Currently banned: 0
   |- Total banned:     0
   `- Banned IP list:

Custom Jails

I have not needed to create a custom jail yet. Once I do, and I figure out how, I will update this guide. Or, if you know how please help contribute.

Unban an IP

To unban an IP use this command:

fail2ban-client set [jail] unbanip [IP]

[jail] is the name of the jail that has the banned IP and [IP] is the IP address you want to unban. For example, to unban 192.168.1.100 from SSH you would do:

fail2ban-client set sshd unbanip 192.168.1.100

The Auditing

File/Folder Integrity Monitoring With AIDE (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install AIDE.

On Debian based systems:

sudo apt install aide

Make a backup of AIDE's defaults file:

sudo cp -p /etc/default/aide /etc/default/aide-COPY-$(date +"%Y%m%d%H%M%S")

Go through /etc/default/aide and set AIDE's defaults per your requirements. If you want AIDE to run daily and e-mail you, be sure to set CRON_DAILY_RUN to yes.

Make a backup of AIDE's configuration files:

sudo cp -pr /etc/aide /etc/aide-COPY-$(date +"%Y%m%d%H%M%S")

On Debian based systems:

  • AIDE's configuration files are in /etc/aide/aide.conf.d/.
  • You'll want to go through AIDE's documentation and the configuration files to set them per your requirements.
  • If you want new settings, to monitor a new folder for example, you'll want to add them to /etc/aide/aide.conf or /etc/aide/aide.conf.d/.

Create a new database, and install it.

On Debian based systems:

sudo aideinit
Running aide --init...
Start timestamp: 2019-04-01 21:23:37 -0400 (AIDE 0.16)
AIDE initialized database at /var/lib/aide/aide.db.new
Verbose level: 6

Number of entries:      25973

---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/var/lib/aide/aide.db.new
  RMD160   : moyQ1YskQQbidX+Lusv3g2wf1gQ=
  TIGER    : 7WoOgCrXzSpDrlO6I3PyXPj1gRiaMSeo
  SHA256   : gVx8Fp7r3800WF2aeXl+/KHCzfGsNi7O
             g16VTPpIfYQ=
  SHA512   : GYfa0DJwWgMLl4Goo5VFVOhu4BphXCo3
             rZnk49PYztwu50XjaAvsVuTjJY5uIYrG
             tV+jt3ELvwFzGefq4ZBNMg==
  CRC32    : /cusZw==
  HAVAL    : E/i5ceF3YTjwenBfyxHEsy9Kzu35VTf7
             CPGQSW4tl14=
  GOST     : n5Ityzxey9/1jIs7LMc08SULF1sLBFUc
             aMv7Oby604A=


End timestamp: 2019-04-01 21:24:45 -0400 (run time: 1m 8s)

Test everything works with no changes.

On Debian based systems:

sudo aide.wrapper --check
Start timestamp: 2019-04-01 21:24:45 -0400 (AIDE 0.16)
AIDE found NO differences between database and filesystem. Looks okay!!
Verbose level: 6

Number of entries:      25973

---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/var/lib/aide/aide.db
  RMD160   : moyQ1YskQQbidX+Lusv3g2wf1gQ=
  TIGER    : 7WoOgCrXzSpDrlO6I3PyXPj1gRiaMSeo
  SHA256   : gVx8Fp7r3800WF2aeXl+/KHCzfGsNi7O
             g16VTPpIfYQ=
  SHA512   : GYfa0DJwWgMLl4Goo5VFVOhu4BphXCo3
             rZnk49PYztwu50XjaAvsVuTjJY5uIYrG
             tV+jt3ELvwFzGefq4ZBNMg==
  CRC32    : /cusZw==
  HAVAL    : E/i5ceF3YTjwenBfyxHEsy9Kzu35VTf7
             CPGQSW4tl14=
  GOST     : n5Ityzxey9/1jIs7LMc08SULF1sLBFUc
             aMv7Oby604A=


End timestamp: 2019-04-01 21:26:03 -0400 (run time: 1m 18s)

Test everything works after making some changes.

On Debian based systems:

sudo touch /etc/test.sh
sudo touch /root/test.sh

sudo aide.wrapper --check

sudo rm /etc/test.sh
sudo rm /root/test.sh

sudo aideinit -y -f
Start timestamp: 2019-04-01 21:37:37 -0400 (AIDE 0.16)
AIDE found differences between database and filesystem!!
Verbose level: 6

Summary:
  Total number of entries:      25972
  Added entries:                2
  Removed entries:              0
  Changed entries:              1

---------------------------------------------------
Added entries:
---------------------------------------------------

f++++++++++++++++: /etc/test.sh
f++++++++++++++++: /root/test.sh

---------------------------------------------------
Changed entries:
---------------------------------------------------

d =.... mc.. .. .: /root

---------------------------------------------------
Detailed information about changes:
---------------------------------------------------

Directory: /root
  Mtime    : 2019-04-01 21:35:07 -0400        | 2019-04-01 21:37:36 -0400
  Ctime    : 2019-04-01 21:35:07 -0400        | 2019-04-01 21:37:36 -0400


---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/var/lib/aide/aide.db
  RMD160   : qF9WmKaf2PptjKnhcr9z4ueCPTY=
  TIGER    : zMo7MvvYJcq1hzvTQLPMW7ALeFiyEqv+
  SHA256   : LSLLVjjV6r8vlSxlbAbbEsPcQUB48SgP
             pdVqEn6ZNbQ=
  SHA512   : Qc4U7+ZAWCcitapGhJ1IrXCLGCf1IKZl
             02KYL1gaZ0Fm4dc7xLqjiquWDMSEbwzW
             oz49NCquqGz5jpMIUy7UxA==
  CRC32    : z8ChEA==
  HAVAL    : YapzS+/cdDwLj3kHJEq8fufLp3DPKZDg
             U12KCSkrO7Y=
  GOST     : 74sLV4HkTig+GJhokvxZQm7CJD/NR0mG
             6jV7zdt5AXQ=


End timestamp: 2019-04-01 21:38:50 -0400 (run time: 1m 13s)

That's it. If you set CRON_DAILY_RUN to yes in /etc/default/aide then cron will execute /etc/cron.daily/aide every day and e-mail you the output.

Updating The Database

Every time you make changes to files/folders that AIDE monitors, you will need to update the database to capture those changes. To do that on Debian based systems:

sudo aideinit -y -f

Anti-Virus Scanning With ClamAV (WIP)

Why

WIP

How It Works

  • ClamAV is a virus scanner
  • ClamAV-Freshclam is a service that keeps the virus definitions updated
  • ClamAV-Daemon keeps the clamd process running to make scanning faster

Goals

WIP

Notes

  • These instructions do not tell you how to enable the ClamAV daemon service to ensure clamd is running all the time. clamd is really only needed if you're running a mail server; it does not provide real-time monitoring of files. Instead, you'd want to scan files manually or on a schedule.

References

Steps

Install ClamAV.

On Debian based systems:

sudo apt install clamav clamav-freshclam clamav-daemon

Make a backup of clamav-freshclam's configuration file /etc/clamav/freshclam.conf:

sudo cp --archive /etc/clamav/freshclam.conf /etc/clamav/freshclam.conf-COPY-$(date +"%Y%m%d%H%M%S")

clamav-freshclam's default settings are probably good enough but if you want to change them, you can either edit the file /etc/clamav/freshclam.conf or use dpkg-reconfigure:

sudo dpkg-reconfigure clamav-freshclam

Note: The default settings will update the definitions 24 times in a day. To change the interval, check the Checks setting in /etc/clamav/freshclam.conf or use dpkg-reconfigure.

Start the clamav-freshclam service:

sudo service clamav-freshclam start

You can make sure clamav-freshclam is running:

sudo service clamav-freshclam status
● clamav-freshclam.service - ClamAV virus database updater
   Loaded: loaded (/lib/systemd/system/clamav-freshclam.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-16 22:57:07 EDT; 2min 13s ago
     Docs: man:freshclam(1)
           man:freshclam.conf(5)
           https://www.clamav.net/documents
 Main PID: 1288 (freshclam)
   CGroup: /system.slice/clamav-freshclam.service
           └─1288 /usr/bin/freshclam -d --foreground=true

Mar 16 22:57:08 host freshclam[1288]: Sat Mar 16 22:57:08 2019 -> ^Local version: 0.100.2 Recommended version: 0.101.1
Mar 16 22:57:08 host freshclam[1288]: Sat Mar 16 22:57:08 2019 -> DON'T PANIC! Read https://www.clamav.net/documents/upgrading-clamav
Mar 16 22:57:15 host freshclam[1288]: Sat Mar 16 22:57:15 2019 -> Downloading main.cvd [100%]
Mar 16 22:57:38 host freshclam[1288]: Sat Mar 16 22:57:38 2019 -> main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Mar 16 22:57:40 host freshclam[1288]: Sat Mar 16 22:57:40 2019 -> Downloading daily.cvd [100%]
Mar 16 22:58:13 host freshclam[1288]: Sat Mar 16 22:58:13 2019 -> daily.cvd updated (version: 25390, sigs: 1520006, f-level: 63, builder: raynman)
Mar 16 22:58:14 host freshclam[1288]: Sat Mar 16 22:58:14 2019 -> Downloading bytecode.cvd [100%]
Mar 16 22:58:16 host freshclam[1288]: Sat Mar 16 22:58:16 2019 -> bytecode.cvd updated (version: 328, sigs: 94, f-level: 63, builder: neo)
Mar 16 22:58:24 host freshclam[1288]: Sat Mar 16 22:58:24 2019 -> Database updated (6086349 signatures) from db.local.clamav.net (IP: 104.16.219.84)
Mar 16 22:58:24 host freshclam[1288]: Sat Mar 16 22:58:24 2019 -> ^Clamd was NOT notified: Can't connect to clamd through /var/run/clamav/clamd.ctl: No such file or directory

Note: Don't worry about that Local version line. Check https://serverfault.com/questions/741299/is-there-a-way-to-keep-clamav-updated-on-debian-8 for more details.

Make a backup of clamav-daemon's configuration file /etc/clamav/clamd.conf:

sudo cp --archive /etc/clamav/clamd.conf /etc/clamav/clamd.conf-COPY-$(date +"%Y%m%d%H%M%S")

You can change clamav-daemon's settings by editing the file /etc/clamav/clamd.conf or using dpkg-reconfigure:

sudo dpkg-reconfigure clamav-daemon

Scanning Files/Folders

  • To scan files/folders use the clamscan program.
  • clamscan runs as the user it is executed as so it needs read permissions to the files/folders it is scanning.
  • Using clamscan as root is dangerous because if a file is in fact a virus there is a risk that it could leverage the root privileges.
  • To scan a file: clamscan /path/to/file.
  • To scan a directory: clamscan -r /path/to/folder.
  • You can use the -i switch to only print infected files.
  • Check clamscan's man pages for other switches/options.
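
For example, to recursively scan a home directory and print only infected files (the path is illustrative):

clamscan -r -i /home/user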

Rootkit Detection With Rkhunter (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install Rkhunter.

On Debian based systems:

sudo apt install rkhunter

Make a backup of rkhunter's defaults file:

sudo cp -p /etc/default/rkhunter /etc/default/rkhunter-COPY-$(date +"%Y%m%d%H%M%S")

rkhunter's configuration file is /etc/rkhunter.conf. Instead of making changes to it, create and use the file /etc/rkhunter.conf.local instead:

sudo cp -p /etc/rkhunter.conf /etc/rkhunter.conf.local

Go through the configuration file /etc/rkhunter.conf.local and set to your requirements. My recommendations:

Setting                        | Note
UPDATE_MIRRORS=1               |
MIRRORS_MODE=0                 |
MAIL-ON-WARNING=root           |
COPY_LOG_ON_ERROR=1            | to save a copy of the log if there is an error
PKGMGR=...                     | set to the appropriate value per the documentation
PHALANX2_DIRTEST=1             | read the documentation for why
WEB_CMD=""                     | this is to address an issue with the Debian package that disables the ability for rkhunter to self-update
USE_LOCKING=1                  | to prevent issues with rkhunter running multiple times
SHOW_SUMMARY_WARNINGS_NUMBER=1 | to see the actual number of warnings found

You want rkhunter to run every day and e-mail you the result. You can write your own script or check https://www.tecmint.com/install-rootkit-hunter-scan-for-rootkits-backdoors-in-linux/ for a sample cron script you can use.

On Debian based systems, rkhunter comes with cron scripts. To enable them check /etc/default/rkhunter or use dpkg-reconfigure and say Yes to all of the questions:

sudo dpkg-reconfigure rkhunter

After you've finished with all of the changes, make sure all the settings are valid:

sudo rkhunter -C

Update rkhunter and its database:

sudo rkhunter --versioncheck
sudo rkhunter --update
sudo rkhunter --propupd

If you want to do a manual scan and see the output:

sudo rkhunter --check

Rootkit Detection With chkrootkit (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install chkrootkit.

On Debian based systems:

sudo apt install chkrootkit

Do a manual scan:

sudo chkrootkit
ROOTDIR is `/'
Checking `amd'...                                           not found
Checking `basename'...                                      not infected
Checking `biff'...                                          not found
Checking `chfn'...                                          not infected
Checking `chsh'...                                          not infected
...
Checking `scalper'...                                       not infected
Checking `slapper'...                                       not infected
Checking `z2'...                                            chklastlog: nothing deleted
Checking `chkutmp'...                                       chkutmp: nothing deleted
Checking `OSX_RSPLUG'...                                    not infected

Make a backup of chkrootkit's configuration file /etc/chkrootkit.conf:

sudo cp --archive /etc/chkrootkit.conf /etc/chkrootkit.conf-COPY-$(date +"%Y%m%d%H%M%S")

You want chkrootkit to run every day and e-mail you the result.

On Debian based systems, chkrootkit comes with cron scripts. To enable them check /etc/chkrootkit.conf or use dpkg-reconfigure and say Yes to the first question:

sudo dpkg-reconfigure chkrootkit

logwatch - system log analyzer and reporter

Why

Your server will be generating a lot of logs that may contain important information. Unless you plan on checking your server every day, you'll want a way to get an e-mail summary of your server's logs. To accomplish this we'll use logwatch.

How It Works

logwatch scans system log files and summarizes them. You can run it directly from the command line or schedule it to run on a recurring schedule. logwatch uses service files to know how to read/summarize a log file. You can see all of the stock service files in /usr/share/logwatch/scripts/services.

logwatch's configuration file /usr/share/logwatch/default.conf/logwatch.conf specifies default options. You can override them via command line arguments.

Goals

  • Logwatch configured to send a daily e-mail summary of all of the server's status and logs

Notes

References

Steps

Install logwatch.

On Debian based systems:

sudo apt install logwatch

To see a sample of what logwatch collects you can run it directly:

sudo /usr/sbin/logwatch --output stdout --format text --range yesterday --service all

 ################### Logwatch 7.4.3 (12/07/16) ####################
        Processing Initiated: Mon Mar  4 00:05:50 2019
        Date Range Processed: yesterday
                              ( 2019-Mar-03 )
                              Period is day.
        Detail Level of Output: 5
        Type of Output/Format: stdout / text
        Logfiles for Host: host
 ##################################################################

 --------------------- Cron Begin ------------------------
...
...
 ---------------------- Disk Space End -------------------------


 ###################### Logwatch End #########################

Go through logwatch's self-documented configuration file /usr/share/logwatch/default.conf/logwatch.conf before continuing. There is no need to change anything here but pay special attention to the Output, Format, MailTo, Range, and Service as those are the ones we'll be using. For our purposes, instead of specifying our options in the configuration file, we will pass them as command line arguments in the daily cron job that executes logwatch. That way, if the configuration file is ever modified (e.g. during an update), our options will still be there.

Make a backup of logwatch's daily cron file /etc/cron.daily/00logwatch and unset the execute bit:

sudo cp --archive /etc/cron.daily/00logwatch /etc/cron.daily/00logwatch-COPY-$(date +"%Y%m%d%H%M%S")
sudo chmod -x /etc/cron.daily/00logwatch-COPY*

By default, logwatch outputs to stdout. Since the goal is to get a daily e-mail, we need to change the output type that logwatch uses to send e-mail instead. We could do this through the configuration file above, but that would apply to every time it is run -- even when we run it manually and want to see the output to the screen. Instead, we'll change the cron job that executes logwatch to send e-mail. This way, when run manually, we'll still get output to stdout and when run by cron, it'll send an e-mail. We'll also make sure it checks for all services, and change the output format to html so it's easier to read regardless of what the configuration file says. In the file /etc/cron.daily/00logwatch find the execute line and change it to:

/usr/sbin/logwatch --output mail --format html --mailto root --range yesterday --service all

So the file looks like this:

#!/bin/bash

#Check if removed-but-not-purged
test -x /usr/share/logwatch/scripts/logwatch.pl || exit 0

#execute
/usr/sbin/logwatch --output mail --format html --mailto root --range yesterday --service all

#Note: It's possible to force the recipient in above command
#Just pass --mailto address@a.com instead of --output mail

For the lazy:

sudo sed -i -r -e "s,^($(sudo which logwatch).*?),# \1         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n$(sudo which logwatch) --output mail --format html --mailto root --range yesterday --service all         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")," /etc/cron.daily/00logwatch

You can test the cron job by executing it:

sudo /etc/cron.daily/00logwatch

Note: If logwatch fails to deliver mail due to the e-mail having long lines please check https://blog.dhampir.no/content/exim4-line-length-in-debian-stretch-mail-delivery-failed-returning-message-to-sender as documented in issue #29. If you followed Gmail and Exim4 As MTA With Implicit TLS then we already took care of this in step #7.

ss - Seeing Ports Your Server Is Listening On

Why

Ports are how applications, services, and processes communicate with each other -- either locally within your server or with other devices on the network. When you have an application or service (like SSH or Apache) running on your server, they listen for requests on specific ports.

Obviously we don't want your server listening on ports we don't know about. We'll use ss to see all the ports that services are listening on. This will help us track down and stop rogue, potentially dangerous, services.

Goals

  • find out what non-localhost ports are open and listening for connections

References

Steps

To see all the ports listening for traffic:

sudo ss -lntup
Netid  State      Recv-Q Send-Q     Local Address:Port     Peer Address:Port
udp    UNCONN     0      0                      *:68                  *:*        users:(("dhclient",pid=389,fd=6))
tcp    LISTEN     0      128                    *:22                  *:*        users:(("sshd",pid=4390,fd=3))
tcp    LISTEN     0      128                   :::22                 :::*        users:(("sshd",pid=4390,fd=4))

Switch Explanations:

  • l = display listening sockets
  • n = do not try to resolve service names
  • t = display TCP sockets
  • u = display UDP sockets
  • p = show process information

If you see anything suspicious, like a port you're not aware of or a process you don't know, investigate and remediate as necessary.
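
One way to dig into a suspicious entry (a sketch; <PID> and the path are placeholders from your own ss output) is to resolve the process ID to its binary and then to the package that owns it:

# Resolve the executable behind a suspicious PID
sudo ls -l /proc/<PID>/exe

# On Debian based systems, find which package owns that executable
dpkg -S /path/to/executable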

Lynis - Linux Security Auditing

Why

From https://cisofy.com/lynis/:

Lynis is a battle-tested security tool for systems running Linux, macOS, or Unix-based operating system. It performs an extensive health scan of your systems to support system hardening and compliance testing.

Goals

  • Lynis installed

Notes

References

Steps

Install lynis. https://cisofy.com/lynis/#installation has detailed instructions on how to install it for your distribution.

On Debian based systems, using CISOFY's community software repository:

sudo apt install apt-transport-https ca-certificates host
wget -O - https://packages.cisofy.com/keys/cisofy-software-public.key | sudo apt-key add -
echo "deb https://packages.cisofy.com/community/lynis/deb/ stable main" | sudo tee /etc/apt/sources.list.d/cisofy-lynis.list
sudo apt update
sudo apt install lynis host

Update it:

sudo lynis update info

Run a security audit:

sudo lynis audit system

This will scan your server, report its audit findings, and at the end it will give you suggestions. Spend some time going through the output and address gaps as necessary.
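
Lynis also writes its findings to a machine-readable report, so you can pull out just the warnings and suggestions after a run. A minimal sketch, assuming the default report location /var/log/lynis-report.dat (your installation may use a different path):

sudo grep -E "^(warning|suggestion)" /var/log/lynis-report.dat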

OSSEC - Host Intrusion Detection

Why

From https://github.com/ossec/ossec-hids

OSSEC is a full platform to monitor and control your systems. It mixes together all the aspects of HIDS (host-based intrusion detection), log monitoring and SIM/SIEM together in a simple, powerful and open source solution.

Goals

  • OSSEC-HIDS installed

References

Steps

Install OSSEC-HIDS from source:

sudo apt install libz-dev libssl-dev libpcre2-dev build-essential
wget https://github.com/ossec/ossec-hids/archive/3.6.0.tar.gz
tar xzf 3.6.0.tar.gz
cd ossec-hids-3.6.0/
sudo ./install.sh

Useful commands:

Agent information

 sudo /var/ossec/bin/agent_control -i <AGENT_ID>

AGENT_ID is 000 by default; to confirm, list the agents with sudo /var/ossec/bin/agent_control -l.

Run integrity/rootkit checking

By default, OSSEC runs a rootkit check every 2 hours.

 sudo /var/ossec/bin/agent_control -u <AGENT_ID> -r 

Alerts

  • All:
    sudo tail -f /var/ossec/logs/alerts/alerts.log
  • Integrity check:
    sudo grep -A4 -i integrity /var/ossec/logs/alerts/alerts.log
  • Rootkit check:
    sudo grep -A4 "rootcheck," /var/ossec/logs/alerts/alerts.log

The Danger Zone

Proceed At Your Own Risk

This section covers things that are high risk because there is a possibility they can make your system unusable, or that are considered unnecessary by many because the risks outweigh any rewards.

!! PROCEED AT YOUR OWN RISK !!

!! PROCEED AT YOUR OWN RISK !!

Linux Kernel sysctl Hardening

!! PROCEED AT YOUR OWN RISK !!

Why

The kernel is the brains of a Linux system. Securing it just makes sense.

Why Not

Changing kernel settings with sysctl is risky and could break your server. If you don't know what you are doing, don't have the time to debug issues, or just don't want to take the risks, I would advise against following these steps.

Disclaimer

I am not as knowledgeable about hardening/securing a Linux kernel as I'd like. As much as I hate to admit it, I do not know what all of these settings do. My understanding is that most of them are general kernel hardening and performance, and the others are to protect against spoofing and DOS attacks.

In fact, since I am not 100% sure exactly what each setting does, I took recommended settings from numerous sites (all linked in the references below) and combined them to figure out what should be set. I figure if multiple reputable sites mention the same setting, it's probably safe.

If you have a better understanding of what these settings do, or have any other feedback/advice on them, please let me know.

I won't provide For the lazy code in this section.

Notes

  • Documentation on all the sysctl settings/keys is severely lacking. The documentation I can find seems to reference the 2.2 version kernel. I could not find anything newer. If you know where I can, please let me know.
  • The reference sites listed below have more comments on what each setting does.

References

Steps

The sysctl settings can be found in the linux-kernel-sysctl-hardening.md file in this repo.

Before you make a kernel sysctl change permanent, you can test it with the sysctl command:

sudo sysctl -w [key=value]

Example:

sudo sysctl -w kernel.ctrl-alt-del=0

Note: There are no spaces in key=value, including before and after the equals sign.

Once you have tested a setting, and made sure it works without breaking your server, you can make it permanent by adding the values to /etc/sysctl.conf. For example:

$ sudo cat /etc/sysctl.conf
kernel.ctrl-alt-del = 0
fs.file-max = 65535
...
kernel.sysrq = 0

After updating the file you can reload the settings or reboot. To reload:

sudo sysctl -p

Note: If sysctl has trouble writing any settings then sysctl -w or sysctl -p will write an error to stderr. You can use this to quickly find invalid settings in your /etc/sysctl.conf file:

sudo sysctl -p >/dev/null
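
You can also read a key back to confirm a change took effect. For example, using the same illustrative key from above:

# Read the current value, change it, and read it back
sysctl kernel.ctrl-alt-del
sudo sysctl -w kernel.ctrl-alt-del=0
sysctl kernel.ctrl-alt-del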

Password Protect GRUB

!! PROCEED AT YOUR OWN RISK !!

Why

If a bad actor has physical access to your server, they could use GRUB to gain unauthorized access to your system.

Why Not

If you forget the password, you'll have to go through some work to recover the password.

Goals

  • auto boot the default Debian install and require a password for anything else

Notes

  • This will only protect GRUB and anything behind it like your operating systems. Check your motherboard's documentation for password protecting your BIOS to prevent a bad actor from circumventing GRUB.

References

Steps

Create a Password-Based Key Derivation Function 2 (PBKDF2) hash of your password:

grub-mkpasswd-pbkdf2 -c 100000

The below output is from using password as the password:

Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.100000.2812C233DFC899EFC3D5991D8CA74068C99D6D786A54F603E9A1EFE7BAEDDB6AA89672F92589FAF98DB9364143E7A1156C9936328971A02A483A84C3D028C4FF.C255442F9C98E1F3C500C373FE195DCF16C56EEBDC55ABDD332DD36A92865FA8FC4C90433757D743776AB186BD3AE5580F63EF445472CC1D151FA03906D08A6D

Copy everything after PBKDF2 hash of your password is, starting from and including grub.pbkdf2.sha512... to the end. You'll need this in the next step.

The update-grub program uses scripts to generate configuration files it will use for GRUB's settings. Create the file /etc/grub.d/01_password and add the below code after replacing [hash] with the hash you copied from the first step. This tells update-grub to use this username and password for GRUB.

#!/bin/sh
set -e

cat << EOF
set superusers="grub"
password_pbkdf2 grub [hash]
EOF

For example:

#!/bin/sh
set -e

cat << EOF
set superusers="grub"
password_pbkdf2 grub grub.pbkdf2.sha512.100000.2812C233DFC899EFC3D5991D8CA74068C99D6D786A54F603E9A1EFE7BAEDDB6AA89672F92589FAF98DB9364143E7A1156C9936328971A02A483A84C3D028C4FF.C255442F9C98E1F3C500C373FE195DCF16C56EEBDC55ABDD332DD36A92865FA8FC4C90433757D743776AB186BD3AE5580F63EF445472CC1D151FA03906D08A6D
EOF

Set the file's execute bit so update-grub includes it when it updates GRUB's configuration:

sudo chmod a+x /etc/grub.d/01_password

Make a backup of GRUB's configuration file /etc/grub.d/10_linux that we'll be modifying and unset the execute bit so update-grub doesn't try to run it:

sudo cp --archive /etc/grub.d/10_linux /etc/grub.d/10_linux-COPY-$(date +"%Y%m%d%H%M%S")
sudo chmod a-x /etc/grub.d/10_linux-COPY*

To make the default Debian install unrestricted (without the password) while keeping everything else restricted (with the password) modify /etc/grub.d/10_linux and add --unrestricted to the CLASS variable.

For the lazy:

sudo sed -i -r -e "/^CLASS=/ a CLASS=\"\${CLASS} --unrestricted\"         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" /etc/grub.d/10_linux

Update GRUB with update-grub:

sudo update-grub
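
To confirm the password and the unrestricted default entry made it into the generated configuration, you can grep the output file (the path below assumes a standard BIOS Debian install; it may differ on UEFI systems):

sudo grep -E "set superusers|password_pbkdf2|--unrestricted" /boot/grub/grub.cfg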


 

Disable Root Login

!! PROCEED AT YOUR OWN RISK !!

Why

If you have sudo configured properly, then the root account will mostly never need to log in directly -- either at the terminal or remotely.

Why Not

Be warned, this can cause issues with some configurations!

If your installation uses sulogin (like Debian) to drop to a root console during boot failures, then locking the root account will prevent sulogin from opening the root shell and you will get this error:

Cannot open access to console, the root account is locked.

See sulogin(8) man page for more details.

Press Enter to continue.

To work around this, you can use the --force option for sulogin. Some distributions already include this, or some other, workaround.

An alternative to locking the root account is to set a long/complicated root password and store it in a secured, non-digital format. That way you have it when/if you need it.

Goals

  • locked root account that nobody can use to log in as root

Notes

  • Some distributions disable root login by default (e.g. Ubuntu) so you may not need to do this step. Check with your distribution's documentation.

References

Steps

Lock the root account:

sudo passwd -l root
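
You can verify the account is locked with passwd's status flag; an L in the second field of the output means the password is locked:

sudo passwd -S root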

Change Default umask

!! PROCEED AT YOUR OWN RISK !!

Why

umask controls the default permissions of files/folders when they are created. Insecure file/folder permissions give other accounts potentially unauthorized access to your data. This may include the ability to make configuration changes.

  • For non-root accounts, there is no need for other accounts to get any access to the account's files/folders by default.
  • For the root account, there is no need for the file/folder primary group or other accounts to have any access to root's files/folders by default.

When and if other accounts need access to a file/folder, you want to explicitly grant it using a combination of file/folder permissions and primary group.

Why Not

Changing the default umask can create unexpected problems. For example, if you set umask to 0077 for root, then non-root accounts will not have access to application configuration files/folders in /etc/ which could break applications that do not run with root privileges.

How It Works

In order to explain how umask works I'd have to explain how Linux file/folder permissions work. As that is a rather complicated question, I will defer you to the references below for further reading.

Goals

  • set default umask for non-root accounts to 0027
  • set default umask for the root account to 0077

Notes

  • umask is a Bash built-in which means a user can change their own umask setting.

References

Steps

Make a backup of files we'll be editing:

sudo cp --archive /etc/profile /etc/profile-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/bash.bashrc /etc/bash.bashrc-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/login.defs /etc/login.defs-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /root/.bashrc /root/.bashrc-COPY-$(date +"%Y%m%d%H%M%S")

Set default umask for non-root accounts to 0027 by adding this line to /etc/profile and /etc/bash.bashrc:

umask 0027

For the lazy:

echo -e "\numask 0027         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/profile /etc/bash.bashrc

We also need to add this line to /etc/login.defs:

UMASK 0027

For the lazy:

echo -e "\nUMASK 0027         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/login.defs

Set default umask for the root account to 0077 by adding this line to /root/.bashrc:

umask 0077

For the lazy:

echo -e "\numask 0077         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /root/.bashrc
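
To verify, open a new login shell and check both the reported umask and the permissions of a freshly created file. A quick sketch:

# Expect 0027 for non-root accounts (0077 for root)
umask

# Expect -rw-r----- for non-root accounts (-rw------- for root)
touch /tmp/umask-test && ls -l /tmp/umask-test
rm /tmp/umask-test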

 

Orphaned Software

!! PROCEED AT YOUR OWN RISK !!

Why

As you use your system, and you install and uninstall software, you'll eventually end up with orphaned, or unused software/packages/libraries. You don't need to remove them, but if you don't need them, why keep them? When security is a priority, anything not explicitly needed is a potential security threat. You want to keep your server as trimmed and lean as possible.

Notes

  • Each distribution manages software/packages/libraries differently so how you find and remove orphaned packages will be different. So far I only have steps for Debian based systems.

Debian Based Systems

On Debian based systems, you can use deborphan to find orphaned packages.

Why Not

Keep in mind, deborphan finds packages that have no package dependencies. That does not mean they are not used. You could very well have a package you use every day that has no dependencies that you wouldn't want to remove. And, if deborphan gets anything wrong, then removing critical packages may break your system.

Steps

Install deborphan.

sudo apt install deborphan

Run deborphan as root to see a list of orphaned packages:

sudo deborphan
libxapian30
libpipeline1

Assuming you want to remove all of the packages deborphan finds, you can pass its output to apt to remove them:

sudo apt --autoremove purge $(deborphan)
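
If you'd rather review each candidate before purging anything, you can loop over deborphan's output and read the package descriptions first. A minimal sketch:

# Show a short description of each orphaned package before deciding
for pkg in $(sudo deborphan); do apt-cache show "$pkg" | head -n 5; echo; done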


 

The Miscellaneous

The Simple way with MSMTP


Why

This method is deliberately simplified: it only sets up outgoing e-mail through a Google mail account (or a similar provider). Truly simple! :)

``` bash
#!/bin/bash
## Run this script as root.
###### PLEASE .... EDIT IT...
USRMAIL="usernameemail"
DOMPROV="gmail.com"
PWDEMAIL="passwordStrong"  ## ATTENTION: DON'T USE special chars like SPACE, # and some others (not all). Feel free to test ;)
MAILPROV="smtp.gmail.com"
MAILPORT="587"
MYMAIL="$USRMAIL@$DOMPROV"
#######
apt install -y msmtp
ln -s /usr/bin/msmtp /usr/sbin/sendmail

# Write msmtp's configuration for the root account
cat <<EOF > /root/.msmtprc
defaults
account gmail
host $MAILPROV
port $MAILPORT
from $MYMAIL
timeout off
protocol smtp
auth on
user $MYMAIL
# The mail password is stored GPG-encrypted and decrypted on demand
passwordeval "gpg -q --for-your-eyes-only --no-tty -d /root/msmtp-mail.gpg"
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
tls_certcheck on
logfile /var/log/mail.log
syslog on
account default : gmail
EOF
chmod 0400 /root/.msmtprc

# Generate a GPG key (use $MYMAIL as the e-mail address when prompted),
# create a revocation certificate, then store the e-mail password
# encrypted with that key
gpg --full-generate-key
gpg --output revoke.asc --gen-revoke $MYMAIL
echo -n "$PWDEMAIL" | gpg -e -o /root/msmtp-mail.gpg --recipient $MYMAIL
echo "export GPG_TTY=\$(tty)" >> /root/.bashrc
chmod 400 /root/msmtp-mail.gpg

# Send a test message
echo "Hello there" | msmtp --debug $MYMAIL
echo "######################
## MSMTP Configured ##
######################"
```

DONE!! ;)

Gmail and Exim4 As MTA With Implicit TLS

Why

Unless you're planning on setting up your own mail server, you'll need a way to send e-mails from your server. This will be important for system alerts/messages.

You can use any Gmail account. I recommend you create one specific for this server. That way if your server is compromised, the bad-actor won't have any passwords for your primary account. Granted, if you have 2FA/MFA enabled and you use an app password, there isn't much a bad-actor can do with just the app password, but why take the risk?

There are many guides on-line that cover how to configure Gmail as MTA using STARTTLS including a previous version of this guide. With STARTTLS, an initial unencrypted connection is made and then upgraded to an encrypted TLS or SSL connection. Instead, with the approach outlined below, an encrypted TLS connection is made from the start.

Also, as discussed in issue #29 and here, exim4 will fail for messages with long lines. We'll fix this in this section too.

Goals

  • mail configured to send e-mails from your server using Gmail
  • long line support for exim4

References

Steps

Install exim4. You will also need openssl and ca-certificates.

On Debian based systems:

sudo apt install exim4 openssl ca-certificates

Configure exim4:

For Debian based systems:

sudo dpkg-reconfigure exim4-config

You'll be prompted with some questions:

  Prompt                                                      Answer
  General type of mail configuration                          mail sent by smarthost; no local mail
  System mail name                                            localhost
  IP-addresses to listen on for incoming SMTP connections     127.0.0.1; ::1
  Other destinations for which mail is accepted               (default)
  Visible domain name for local users                         localhost
  IP address or host name of the outgoing smarthost           smtp.gmail.com::465
  Keep number of DNS-queries minimal (Dial-on-Demand)?        No
  Split configuration into small files?                       No

Make a backup of /etc/exim4/passwd.client:

sudo cp --archive /etc/exim4/passwd.client /etc/exim4/passwd.client-COPY-$(date +"%Y%m%d%H%M%S")

Add lines like these to /etc/exim4/passwd.client:

smtp.gmail.com:yourAccount@gmail.com:yourPassword
*.google.com:yourAccount@gmail.com:yourPassword

Notes:

  • Replace yourAccount@gmail.com and yourPassword with your details. If you have 2FA/MFA enabled on your Gmail then you'll need to create and use an app password here.
  • Always check host smtp.gmail.com for the most up-to-date domains to list.

This file has your Gmail password so we need to lock it down:

sudo chown root:Debian-exim /etc/exim4/passwd.client
sudo chmod 640 /etc/exim4/passwd.client

The next step is to create a TLS certificate that exim4 will use to make the encrypted connection to smtp.gmail.com. You can use your own certificate, like one from Let's Encrypt, or create one yourself using openssl. We will use a script that comes with exim4 that calls openssl to make our certificate:

sudo bash /usr/share/doc/exim4-base/examples/exim-gencert
[*] Creating a self signed SSL certificate for Exim!
    This may be sufficient to establish encrypted connections but for
    secure identification you need to buy a real certificate!

    Please enter the hostname of your MTA at the Common Name (CN) prompt!

Generating a RSA private key
..........................................+++++
................................................+++++
writing new private key to '/etc/exim4/exim.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Code (2 letters) [US]:[redacted]
State or Province Name (full name) []:[redacted]
Locality Name (eg, city) []:[redacted]
Organization Name (eg, company; recommended) []:[redacted]
Organizational Unit Name (eg, section) []:[redacted]
Server name (eg. ssl.domain.tld; required!!!) []:localhost
Email Address []:[redacted]
[*] Done generating self signed certificates for exim!
    Refer to the documentation and example configuration files
    over at /usr/share/doc/exim4-base/ for an idea on how to enable TLS
    support in your mail transfer agent.

Instruct exim4 to use TLS and port 465, and fix exim4's long lines issue, by creating the file /etc/exim4/exim4.conf.localmacros and adding:

MAIN_TLS_ENABLE = 1
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *
TLS_ON_CONNECT_PORTS = 465
REQUIRE_PROTOCOL = smtps
IGNORE_SMTP_LINE_LENGTH_LIMIT = true

For the lazy:

cat << EOF | sudo tee /etc/exim4/exim4.conf.localmacros
MAIN_TLS_ENABLE = 1
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *
TLS_ON_CONNECT_PORTS = 465
REQUIRE_PROTOCOL = smtps
IGNORE_SMTP_LINE_LENGTH_LIMIT = true
EOF

Make a backup of exim4's configuration file /etc/exim4/exim4.conf.template:

sudo cp --archive /etc/exim4/exim4.conf.template /etc/exim4/exim4.conf.template-COPY-$(date +"%Y%m%d%H%M%S")

Add the below to /etc/exim4/exim4.conf.template after the .ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS ... .endif block:

.ifdef REQUIRE_PROTOCOL
  protocol = REQUIRE_PROTOCOL
.endif

So it looks like this:

.ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS
  hosts_require_tls = REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS
.endif
.ifdef REQUIRE_PROTOCOL
  protocol = REQUIRE_PROTOCOL
.endif
.ifdef REMOTE_SMTP_HEADERS_REWRITE
  headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
.endif

For the lazy:

sudo sed -i -r -e '/^.ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS$/I { :a; n; /^.endif$/!ba; a\# added by '"$(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")"'\n.ifdef REQUIRE_PROTOCOL\n    protocol = REQUIRE_PROTOCOL\n.endif\n# end add' -e '}' /etc/exim4/exim4.conf.template

Add the below to /etc/exim4/exim4.conf.template inside the .ifdef MAIN_TLS_ENABLE block:

.ifdef TLS_ON_CONNECT_PORTS
  tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif

So it looks like this:

.ifdef MAIN_TLS_ENABLE
.ifdef TLS_ON_CONNECT_PORTS
  tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif

For the lazy:

sudo sed -i -r -e "/\.ifdef MAIN_TLS_ENABLE/ a # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n.ifdef TLS_ON_CONNECT_PORTS\n    tls_on_connect_ports = TLS_ON_CONNECT_PORTS\n.endif\n# end add" /etc/exim4/exim4.conf.template

Update exim4 configuration to use TLS and then restart the service:

sudo update-exim4.conf
sudo service exim4 restart

If you're using UFW, you'll need to allow outbound traffic on port 465. To do this we'll create a custom UFW application profile and then enable it. Create the file /etc/ufw/applications.d/smtptls and add this:

[SMTPTLS]
title=SMTP through TLS
description=This opens up the TLS port 465 for use with SMTP to send e-mails.
ports=465/tcp

Then allow it:

sudo ufw allow out smtptls comment 'open TLS port 465 for use with SMTP to send e-mails'

For the lazy:

cat << EOF | sudo tee /etc/ufw/applications.d/smtptls
[SMTPTLS]
title=SMTP through TLS
description=This opens up the TLS port 465 for use with SMTP to send e-mails.
ports=465/tcp
EOF

sudo ufw allow out smtptls comment 'open TLS port 465 for use with SMTP to send e-mails'

Add some mail aliases so we can send e-mails to local accounts by adding lines like this to /etc/aliases:

You'll need to add all the local accounts that exist on your server.

user1: user1@gmail.com
user2: user2@gmail.com
...

Test your setup:

echo "test" | mail -s "Test" email@gmail.com
sudo tail /var/log/exim4/mainlog
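
If the test message doesn't arrive, besides the log you can inspect exim4's queue for stuck messages using exim4's standard queue options:

# List messages still sitting in the queue
sudo exim4 -bp

# Force a delivery attempt for everything in the queue, with verbose output
sudo exim4 -qff -v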

Separate iptables Log File

Why

There will come a time when you'll need to look through your iptables logs. Having all the iptables logs go to their own file will make it a lot easier to find what you're looking for.

References

Steps

The first step is to tell your firewall to prefix all log entries with some unique string. If you're using iptables directly, you would do something like --log-prefix "[IPTABLES] " for all the rules. We took care of this in step 4 of installing psad.

After you've added a prefix to the firewall logs, we need to tell rsyslog to send those lines to its own file. Do this by creating the file /etc/rsyslog.d/10-iptables.conf and adding this:

:msg, contains, "[IPTABLES] " /var/log/iptables.log
& stop

If you're expecting a lot of data being logged by your firewall, prefix the filename with a - "to omit syncing the file after every logging". For example:

:msg, contains, "[IPTABLES] " -/var/log/iptables.log
& stop

Note: Remember to change the prefix to whatever you use.

For the lazy:

cat << EOF | sudo tee /etc/rsyslog.d/10-iptables.conf
:msg, contains, "[IPTABLES] " /var/log/iptables.log
& stop
EOF
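
You can test the rsyslog rule without waiting for real firewall traffic by injecting a matching message with logger (this assumes the [IPTABLES] prefix used above; adjust if yours differs):

logger "[IPTABLES] rsyslog routing test"
sudo tail -n 1 /var/log/iptables.log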

Since we're logging firewall messages to a different file, we need to tell psad where the new file is. Edit /etc/psad/psad.conf and set IPT_SYSLOG_FILE to the path of the log file. For example:

IPT_SYSLOG_FILE /var/log/iptables.log;

Note: Remember to change the log file path if you used something different.

For the lazy:

sudo sed -i -r -e "s/^(IPT_SYSLOG_FILE\s+)([^;]+)(;)$/# \1\2\3       # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n\1\/var\/log\/iptables.log\3       # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/psad/psad.conf

Restart psad and rsyslog to activate the changes (or reboot):

sudo psad -R
sudo psad --sig-update
sudo psad -H
sudo service rsyslog restart

The last thing we have to do is tell logrotate to rotate the new log file so it doesn't get too big and fill up our disk. Create the file /etc/logrotate.d/iptables and add this:

/var/log/iptables.log
{
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}

For the lazy:

cat << EOF | sudo tee /etc/logrotate.d/iptables
/var/log/iptables.log
{
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
EOF
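
You can dry-run the new rotation rule to catch syntax errors before the nightly run; logrotate's -d flag prints what would happen without changing anything:

sudo logrotate -d /etc/logrotate.d/iptables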

(Table of Contents)

Left Over

Contacting Me

For any questions, comments, concerns, feedback, or issues, submit a new issue.

(Table of Contents)

Helpful Links

(Table of Contents)

Acknowledgments

 

Download Details: 
Author: imthenachoman
Source Code: https://github.com/imthenachoman/How-To-Secure-A-Linux-Server 
License: CC-BY-SA-4.0 License
#linux #security 
