A few weeks ago one of our readers reached out on our support channel to tell us that our site wouldn’t work for them no matter what.
It simply wouldn’t load on their browser (Chrome).
After a little back and forth, we realized that our web server lacked IPv6 support: we weren't listening for requests made over IPv6. If you don't know already, IPv6 stands for Internet Protocol version 6, and it is intended to replace IPv4, the protocol the web as we know it has run on for the last two decades.
Google publishes stats on IPv6 adoption (as of October 2018), and the numbers are rising steadily. Over twenty-five percent of the Internet is now using IPv6, and from the graph it appears that well over half will be on board within the next few years. More importantly, a percentage of those who are on IPv6 already are exclusively so and cannot see your content if the website isn't configured to serve on the new protocol. (Updated per tweet.)
In this quick post we will configure our web app/site for the new protocol.
This is how I set it up on Bubblin.
The first step is to add an AAAA record on your DNS manager. We needed a public IPv6 address, so I requested one from our hosting provider (Linode).
Once they responded, I went ahead and added our public IPv6 address on the Remote Access Panel, like so:
I added the ugly-looking records with the IPv6 option (bottom three), as screenshotted above. Since changes to DNS take some time to propagate, we'll leave the DNS manager here and focus on configuring our app server (nginx) for IPv6 next.
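While the DNS changes propagate, you can poll for the new record with dig. This is just a quick sanity check (substitute your own domain); it should print the IPv6 address you added once the record is live:
$ dig AAAA bubblin.io +short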
Now, Bubblin is delivered strictly over https, so we permanently redirect all our traffic from http to https.
We use Let's Encrypt and Certbot to secure Bubblin with industry-grade SSL.
Shown below is an excerpt from our nginx.conf.erb on production:
…
# $ sudo vi /etc/nginx/sites-available/bubblin_production
# Add listen [::]:80 ipv6only=on; for requests via the insecure protocol (http).
server {
    listen 80;
    listen [::]:80 ipv6only=on;
    server_name <%= fetch(:nginx_server_name) %> www.<%= fetch(:nginx_server_name) %>;
    return 301 https://$host$request_uri;
}
# Add listen [::]:443 to listen for requests over IPv6 on https.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.<%= fetch(:nginx_server_name) %>;
    # Other SSL related stuff here.
    # Permanently redirect www to the bare domain.
    return 301 https://<%= fetch(:nginx_server_name) %>$request_uri;
}
# Add listen [::]:443 ssl http2; to the final server block.
server {
    # Plenty of nginx config here.
    listen 443 ssl http2; # managed by Certbot
    listen [::]:443 ssl http2;
    # Add the HSTS header with preload.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
}
Notice the listen [::]:80 ipv6only=on; directive inside the server block and the HSTS directive at the bottom.
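Once the config is live, a quick way to confirm the HSTS header is being sent is to inspect the response headers with curl (against your own domain):
$ curl -sI https://bubblin.io | grep -i strict-transport-security
strict-transport-security: max-age=31536000; includeSubDomains; preload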
To test your nginx configuration:
$ sudo nginx -t
# success
$ sudo nginx -s reload
Assuming your DNS has propagated by the time nginx was configured (it can take up to 24 hours), it is time to test whether the website is available over IPv6:
$ curl https://bubblin.io -6
The HTML page from your site should be served correctly.
That’s all folks.
Hi, I’m Marvin Danig, CEO and Cofounder of Bubblin.
You might want to follow and connect with me on Twitter or GitHub.
P.S.: Reading more books on the web will help your attention span.
#web-service
Technology is hard. As technologists, I think we like it that way. It’s built‑in job security, right? Well, unfortunately, the modern application world has become unproductively hard. We need to make it easier.
That’s why I like describing the current developer paradox as the need to run safely with scissors.
Running with scissors is a simple metaphor for what is the admittedly difficult ask we make of software engineers. Developers need to run. Time to market and feature velocity are critical to the success of digital businesses. As a result, we don’t want to encumber developers with processes or technology choices that slow them down. Instead we empower them to pick tools and stacks that let them deliver code to customers as quickly as possible.
But there’s a catch. In the world of fast releases, multiple daily (or hourly or minutely!) changes, and fail‑fast development, we risk introducing application downtime into digital experiences – that risk is the metaphorical scissors that make it dangerous to run fast. On some level we know it’s wrong to make developers run with scissors. But the speed upside trumps the downtime downside.
That frames the dilemma of our era: we need our developers to run with scissors, but we don’t want anybody to get hurt. Is there a solution?
At NGINX, the answer is “yes”. I’m excited to announce eight new or significantly enhanced solutions built to unleash developer speed without sacrificing the governance, visibility, and control infrastructure teams require.
As my colleague Gus Robertson eloquently points out in his recent blog post, The Essence of Sprint Is Speed, self‑service is an important part of developer empowerment. He talks about developers as the engines of digital transformation. And if they're not presented with easy-to-use, capable tools, they take matters into their own hands. The result is shadow IT and significant infrastructure risk.
Self‑service turns this on its head. It provides infrastructure teams with a way to release the application delivery and security technologies that developers need for A/B, canary, blue‑green, and circuit‑breaker patterns. But it does so within guardrails that provide the consistency, reliability, and security your apps need to keep running in production.
#blog #news #opinion #red hat #nginx controller #nginx app protect #nginx sprint 2020 #nginx ingress controller #nginx service mesh #f5 dns cloud services #nginx analytics cloud service
In this video, we will use Python 3 with the requests and requests-html libraries to download PDF files from Springer's website.
Recently, I came across a list of 408 free books available for download from Springer's website.
So I created this script, in which I used requests and requests-html to download the files.
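The gist of such a script looks something like the sketch below. It is a minimal outline rather than the exact script from the video: the book URL is a hypothetical placeholder, and the real script loops over all 408 titles.
import requests
from requests_html import HTMLSession  # pip install requests-html

def download_pdfs(page_url):
    # Fetch and parse the book's landing page, collecting any links that end in .pdf.
    session = HTMLSession()
    page = session.get(page_url)
    pdf_links = [link for link in page.html.absolute_links if link.endswith(".pdf")]
    for link in pdf_links:
        filename = link.rsplit("/", 1)[-1]
        # Stream the download so large files are not held entirely in memory.
        with requests.get(link, stream=True) as response:
            response.raise_for_status()
            with open(filename, "wb") as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)

# Placeholder URL for one book; the real script iterates over the whole list.
download_pdfs("https://link.springer.com/book/10.1007/978-1-4842-4470-8")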
#request-html #requests #requests-python #webscrapping #springer
In this article, I will be implementing a simple Flask application. I will be setting up a Gunicorn application server and configuring Nginx to act as a reverse proxy in front of it, as sketched below.
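In outline, the setup looks like this minimal sketch; it assumes a Flask app object named app in a module app.py, and the server name and ports are placeholders:
$ pip install flask gunicorn
$ gunicorn --workers 3 --bind 127.0.0.1:8000 app:app

# nginx server block acting as the reverse proxy in front of Gunicorn
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}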
#flask #serving #applications #gunicorn #nginx #reverse
Performing web requests in parallel improves performance dramatically. The proposed Python implementation uses Queue and Thread to create a simple method that saves a lot of time.
I have recently posted several articles using the Open Trip Planner as a source for the analysis of public transport. Trip routings were obtained from OTP through its REST API. OTP was running on the local machine, but it still took a lot of time to make all the required requests. For simplicity, the implementation shown in those articles is sequential; in other cases I use a parallel implementation. This article shows a parallel implementation for performing a large number of web requests.
Though I have some experience, the tutorials I found were quite difficult to master. This article contains my lessons learned and can be used as a guide for performing parallel web requests.
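The core of the approach is a shared Queue of URLs drained by a pool of Threads. Below is a minimal sketch of that pattern; the URLs are placeholders, whereas the original articles issued REST calls against a local OTP server.
import queue
import threading
import requests

def worker(url_queue, results):
    # Each thread pulls URLs from the shared queue until it is empty.
    while True:
        try:
            url = url_queue.get_nowait()
        except queue.Empty:
            return
        try:
            response = requests.get(url, timeout=10)
            results[url] = response.status_code
        except requests.RequestException as exc:
            results[url] = exc  # record the failure instead of killing the thread

def fetch_parallel(urls, num_threads=8):
    url_queue = queue.Queue()
    for url in urls:
        url_queue.put(url)
    results = {}
    threads = [threading.Thread(target=worker, args=(url_queue, results))
               for _ in range(num_threads)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    return results

# Placeholder workload; swap in your own list of request URLs.
urls = ["https://httpbin.org/get?i=%d" % i for i in range(20)]
print(fetch_parallel(urls))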
#python #multithreading #parallel web requests in python #requests #web requests #parallel
Configure NGINX and NGINX Plus to serve static content, with type-specific root directories, checks for file existence, and performance optimizations.
This section describes how to configure NGINX and NGINX Plus to serve static content, how to define which paths are searched to find requested files, how to set up index files, and how to tune NGINX and NGINX Plus, as well as the kernel, for optimal performance.
The root directive specifies the root directory that will be used to search for a file. To obtain the path of a requested file, NGINX appends the request URI to the path specified by the root directive. The directive can be placed on any level within the http {}, server {}, or location {} contexts. In the example below, the root directive is defined for a virtual server. It applies to all location {} blocks where the root directive is not included to explicitly redefine the root:
server {
    root /www/data;

    location / {
    }

    location /images/ {
    }

    location ~ \.(mp3|mp4) {
        root /www/media;
    }
}
Here, NGINX searches for a URI that starts with /images/ in the /www/data/images/ directory in the file system. But if the URI ends with the .mp3 or .mp4 extension, NGINX instead searches for the file in the /www/media/ directory because it is defined in the matching location block.
If a request ends with a slash, NGINX treats it as a request for a directory and tries to find an index file in the directory. The index directive defines the index file's name (the default value is index.html). To continue with the example, if the request URI is /images/some/path/, NGINX delivers the file /www/data/images/some/path/index.html if it exists. If it does not, NGINX returns HTTP code 404 (Not Found) by default. To configure NGINX to return an automatically generated directory listing instead, include the on parameter to the autoindex directive:
location /images/ {
    autoindex on;
}
You can list more than one filename in the index directive. NGINX searches for files in the specified order and returns the first one it finds.
location / {
    index index.$geo.html index.htm index.html;
}
The $geo variable used here is a custom variable set through the geo directive. The value of the variable depends on the client's IP address.
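As an illustration, a minimal geo block could look like the following (the address ranges and suffix values are made up):
geo $geo {
    default        default;
    192.168.1.0/24 uk;
    10.1.0.0/16    us;
}
With index index.$geo.html, a client whose address falls in 10.1.0.0/16 is then served index.us.html if it exists, with index.htm and index.html as fallbacks.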
To return the index file, NGINX checks for its existence and then makes an internal redirect to the URI obtained by appending the name of the index file to the base URI. The internal redirect results in a new search of a location and can end up in another location as in the following example:
location / {
    root /data;
    index index.html index.php;
}

location ~ \.php {
    fastcgi_pass localhost:8000;
    #...
}
Here, if the URI in a request is /path/, and /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is mapped to the second location. As a result, the request is proxied.
The try_files directive can be used to check whether the specified file or directory exists; NGINX makes an internal redirect if it does, or returns a specified status code if it doesn't. For example, to check the existence of a file corresponding to the request URI, use the try_files directive and the $uri variable as follows:
server {
    root /www/data;

    location /images/ {
        try_files $uri /images/default.gif;
    }
}
The file is specified in the form of the URI, which is processed using the root or alias directives set in the context of the current location or virtual server. In this case, if the file corresponding to the original URI doesn't exist, NGINX makes an internal redirect to the URI specified by the last parameter, returning /www/data/images/default.gif.
The last parameter can also be a status code (directly preceded by the equals sign) or the name of a location. In the following example, a 404 error is returned if none of the parameters to the try_files directive resolve to an existing file or directory.
location / {
    try_files $uri $uri/ $uri.html =404;
}
In the next example, if neither the original URI nor the URI with the appended trailing slash resolves to an existing file or directory, the request is redirected to the named location, which passes it to a proxied server.
location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://backend.example.com;
}
For more information, watch the Content Caching webinar on‑demand to learn how to dramatically improve the performance of a website, and get a deep‑dive into NGINX’s caching capabilities.
Loading speed is a crucial factor in serving any content. Making minor optimizations to your NGINX configuration may boost productivity and help reach optimal performance.
sendfile
By default, NGINX handles file transmission itself and copies the file into the buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and enables direct copying of data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can use the sendfile_max_chunk directive to limit the amount of data transferred in a single sendfile() call (in this example, to 1 MB):
location /mp3 {
    sendfile on;
    sendfile_max_chunk 1m;
    #...
}
tcp_nopush
Use the tcp_nopush directive together with the sendfile on; directive. This enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by sendfile().
location /mp3 {
    sendfile on;
    tcp_nopush on;
    #...
}
tcp_nodelay
The tcp_nodelay directive allows overriding Nagle's algorithm, originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a 200 ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the tcp_nodelay directive is set to on, which means that Nagle's algorithm is disabled. Use this directive only for keepalive connections:
location /mp3 {
    tcp_nodelay on;
    keepalive_timeout 65;
    #...
}
One of the important factors is how fast NGINX can handle incoming connections. The general rule is when a connection is established, it is put into the “listen” queue of a listen socket. Under normal load, either the queue is small or there is no queue at all. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.
To display the current listen queue, run this command:
netstat -Lan
The output might be like the following, which shows that in the listen queue on port 80 there are 10 unaccepted connections against the configured maximum of 128 queued connections. This situation is normal.
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
0/0/128         *.12345
10/0/128        *.80
0/0/128         *.8080
In contrast, in the following output the number of unaccepted connections (192) exceeds the limit of 128. This is quite common when a web site experiences heavy traffic. To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX in both your operating system and the NGINX configuration.
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
0/0/128         *.12345
192/0/128       *.80
0/0/128         *.8080
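Note that netstat -Lan in this form is BSD syntax. On Linux, a comparable view of the listen queues is available with ss:
ss -ltn
For listening sockets, the Recv-Q column shows the current queue length and Send-Q shows the configured backlog.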
Increase the value of the net.core.somaxconn kernel parameter from its default value (128) to a value high enough for a large burst of traffic. In this example, it's increased to 4096.
For FreeBSD, run the command:
sudo sysctl kern.ipc.somaxconn=4096
For Linux:
1. Run the command:
sudo sysctl -w net.core.somaxconn=4096
2. Use a text editor to add the following line to /etc/sysctl.conf:
net.core.somaxconn = 4096
If you set the somaxconn kernel parameter to a value greater than 512, change the backlog parameter of the NGINX listen directive to match:
server {
listen 80 backlog=4096;
# ...
}
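To confirm the new value took effect, you can read the parameter back (on Linux):
sysctl net.core.somaxconn
# net.core.somaxconn = 4096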
Source: https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/
#web #NGINX