In the Debian system, packages are normally installed through the apt package manager, which installs packages from the official Debian repository. The packages installed through apt are first downloaded to a cache directory and managed inside the location ‘/var/cache/apt/archives’. The reason to keep these files in the cache directory is that the next time you install a package that depends on an already-downloaded package, apt won’t download the same package again; instead, it will pick it up from this location. As time passes, though, a cached package loses its worth, and the time will come when it is no longer required on the system. Thus, it is a good practice to disable the apt cache on the Debian system, as this will help free up some space.
Follow this article’s detailed guidelines to disable apt cache in Debian.
An easy step-by-step instruction to disable apt cache in Debian is given below:
Step 1: First, you must create a 00clean-cache-dir file on the Debian system through nano editor:
sudo nano /etc/apt/apt.conf.d/00clean-cache-dir
Step 2: Within the file, you must add the following line:
DPkg::Post-Invoke {"/bin/rm -f /var/cache/apt/archives/*.deb || true";};
Step 3: Then save the clean-cache file using “CTRL+X”, press “Y” and hit Enter to exit.
Step 4: Then you have to create another file named “00disable-cache-files”, also inside /etc/apt/apt.conf.d/ so that apt actually reads it:
sudo nano /etc/apt/apt.conf.d/00disable-cache-files
Step 5: Within this file, add the following lines:
Dir::Cache::srcpkgcache "";
Dir::Cache::pkgcache "";
Step 6: Save this file using Step 3.
This will disable the apt cache on the Debian system.
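To confirm that apt picked up the new settings, you can dump its effective configuration; this quick check (an addition, not part of the original guide) should show both cache values as empty strings:
apt-config dump | grep -i pkgcache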
Step 7: Now that the apt cache is disabled, it’s better to empty the ‘/var/cache/apt/archives’ directory on Debian, clearing its contents rather than deleting the directory itself (apt expects the directory to exist), using the following command:
sudo rm -rf /var/cache/apt/archives/*
Step 8 (Optional): Alternatively, you can first preview what apt would delete, without removing anything, using a dry run:
sudo apt clean --dry-run
Step 9 (Optional): Further, you can remove the cached files and directories through the following command:
sudo apt clean
Step 10 (Optional): Finally, you can clean the system of cached package files that can no longer be downloaded, using the following command.
sudo apt autoclean
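If you’d like to verify how much space these steps reclaim, you can check the size of the archives directory before and after cleaning; this check is an addition, not part of the original guide:
sudo du -sh /var/cache/apt/archives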
The apt cache on Debian can be disabled easily by creating a clean-cache file inside the /etc/apt/apt.conf.d/ location, then creating another, disable-cache file in the same directory. Save both of these files to disable the apt cache on Debian. It’s also better to remove the existing cache files and directories through the “rm -rf” command or the apt clean commands; these steps are optional but good practice when you run them on the terminal.
Original article source at: https://linuxhint.com/
Ristretto is a fast, concurrent cache library built with a focus on performance and correctness.
The motivation to build Ristretto comes from the need for a contention-free cache in Dgraph.
Use Discuss Issues for reporting issues about this repository.
Just figure out your ideal Config values and you're off and running. Ristretto is production-ready. See Projects using Ristretto.
package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
cache, err := ristretto.NewCache(&ristretto.Config{
NumCounters: 1e7, // number of keys to track frequency of (10M).
MaxCost: 1 << 30, // maximum cost of cache (1GB).
BufferItems: 64, // number of keys per Get buffer.
})
if err != nil {
panic(err)
}
// set a value with a cost of 1
cache.Set("key", "value", 1)
// wait for value to pass through buffers
cache.Wait()
value, found := cache.Get("key")
if !found {
panic("missing value")
}
fmt.Println(value)
cache.Del("key")
}
The Config struct is passed to NewCache when creating Ristretto instances (see the example above).
NumCounters int64
NumCounters is the number of 4-bit access counters to keep for admission and eviction. We've seen good performance in setting this to 10x the number of items you expect to keep in the cache when full.
For example, if you expect each item to have a cost of 1 and MaxCost is 100, set NumCounters to 1,000. Or, if you use variable cost values but expect the cache to hold around 10,000 items when full, set NumCounters to 100,000. The important thing is the number of unique items in the full cache, not necessarily the MaxCost value.
MaxCost int64
MaxCost is how eviction decisions are made. For example, if MaxCost is 100 and a new item with a cost of 1 increases total cache cost to 101, 1 item will be evicted.
MaxCost can also be used to denote the max size in bytes. For example, if MaxCost is 1,000,000 (1MB) and the cache is full with 1,000 1KB items, a new item (that's accepted) would cause 5 1KB items to be evicted.
MaxCost could be anything as long as it matches how you're using the cost values when calling Set.
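As a hedged sketch of byte-based sizing (the numbers are illustrative, and this continues the NewCache setup from the example above):

cache, err := ristretto.NewCache(&ristretto.Config{
	NumCounters: 1e5,     // 10x the ~10,000 unique items expected at capacity
	MaxCost:     1 << 20, // 1MB budget, since the costs below are byte sizes
	BufferItems: 64,
})
if err != nil {
	panic(err)
}
b := []byte("some value")
cache.Set("key", b, int64(len(b))) // cost = size of the entry in bytes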
BufferItems int64
BufferItems is the size of the Get buffers. The best value we've found for this is 64.
If for some reason you see Get performance decreasing with lots of contention (you shouldn't), try increasing this value in increments of 64. This is a fine-tuning mechanism and you probably won't have to touch this.
Metrics bool
Metrics is true when you want real-time logging of a variety of stats. The reason this is a Config flag is because there's a 10% throughput performance overhead.
OnEvict func(hashes [2]uint64, value interface{}, cost int64)
OnEvict is called for every eviction.
KeyToHash func(key interface{}) [2]uint64
KeyToHash is the hashing algorithm used for every key. If this is nil, Ristretto has a variety of defaults depending on the underlying interface type.
Note that if you want 128bit hashes you should use the full [2]uint64; otherwise just fill the uint64 at the 0 position and it will behave like any 64bit hash.
Cost func(value interface{}) int64
Cost is an optional function you can pass to the Config in order to evaluate item cost at runtime, and only for the Set calls that aren't dropped (this is useful if calculating item cost is particularly expensive and you don't want to waste time on items that will be dropped anyways).
To signal to Ristretto that you'd like to use this Cost function, set the Cost field to a non-nil function and call Set with a cost of 0.
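As a minimal sketch (the string-based cost computation here is an assumption for illustration):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
	// A non-nil Cost function lets Ristretto compute item cost at
	// runtime, and only for Set calls that aren't dropped.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,
		MaxCost:     1 << 30,
		BufferItems: 64,
		Cost: func(value interface{}) int64 {
			// assumption: all values are strings; cost = byte length
			return int64(len(value.(string)))
		},
	})
	if err != nil {
		panic(err)
	}
	cache.Set("key", "value", 0) // cost 0 defers to the Cost function
	cache.Wait()
	fmt.Println(cache.Get("key"))
}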
The benchmarks can be found in https://github.com/dgraph-io/benchmarks/tree/master/cachebench/ristretto.
Search: This trace is described as "disk read accesses initiated by a large commercial search engine in response to various web search requests."
Database: This trace is described as "a database server running at a commercial site running an ERP application on top of a commercial database."
Looping: This trace demonstrates a looping access pattern.
CODASYL: This trace is described as "references to a CODASYL database for a one hour period."
All throughput benchmarks were run on an Intel Core i7-8700K (3.7GHz) with 16GB of RAM.
Below is a list of known projects that use Ristretto:
We go into detail in the Ristretto blog post, but in short: our throughput performance can be attributed to a mix of batching and eventual consistency. Our hit ratio performance is mostly due to an excellent admission policy and SampledLFU eviction policy.
As for "shortcuts," the only thing Ristretto does that could be construed as one is dropping some Set calls. That means a Set call for a new item (updates are guaranteed) isn't guaranteed to make it into the cache. The new item could be dropped at two points: when passing through the Set buffer or when passing through the admission policy. However, this doesn't affect hit ratios much at all as we expect the most popular items to be Set multiple times and eventually make it in the cache.
No, it's just like any other Go library that you can import into your project and use in a single process.
Author: Dgraph-io
Source Code: https://github.com/dgraph-io/ristretto
License: Apache-2.0 license
Cached network image
A flutter library to show images from the internet and keep them in the cache directory.
The CachedNetworkImage can be used directly or through the ImageProvider. Both CachedNetworkImage and CachedNetworkImageProvider have minimal support for web; web support currently doesn't include caching.
With a placeholder:
CachedNetworkImage(
imageUrl: "http://via.placeholder.com/350x150",
placeholder: (context, url) => CircularProgressIndicator(),
errorWidget: (context, url, error) => Icon(Icons.error),
),
Or with a progress indicator:
CachedNetworkImage(
imageUrl: "http://via.placeholder.com/350x150",
progressIndicatorBuilder: (context, url, downloadProgress) =>
CircularProgressIndicator(value: downloadProgress.progress),
errorWidget: (context, url, error) => Icon(Icons.error),
),
You can also use the image provider directly:
Image(image: CachedNetworkImageProvider(url))
When you want to have both the placeholder functionality and want to get the ImageProvider to use in another widget, you can provide an imageBuilder:
CachedNetworkImage(
imageUrl: "http://via.placeholder.com/200x150",
imageBuilder: (context, imageProvider) => Container(
decoration: BoxDecoration(
image: DecorationImage(
image: imageProvider,
fit: BoxFit.cover,
colorFilter:
ColorFilter.mode(Colors.red, BlendMode.colorBurn)),
),
),
placeholder: (context, url) => CircularProgressIndicator(),
errorWidget: (context, url, error) => Icon(Icons.error),
),
The cached network images are stored and retrieved using flutter_cache_manager.
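Because the files go through flutter_cache_manager, a cached copy can also be evicted for a single URL. The sketch below assumes the static evictFromCache helper available in recent versions of the package:

import 'package:cached_network_image/cached_network_image.dart';

/// Removes the cached copy of [url] so the next load fetches a fresh file.
Future<void> refreshImage(String url) async {
  await CachedNetworkImage.evictFromCache(url);
}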
Does it really crash though? The debugger might pause, as the Dart VM doesn't recognize it as a caught exception; the console might print errors; even your crash reporting tool might report it (I know, that really sucks). However, does it really crash? Probably everything is just running fine. If your app really does crash, feel free to report an issue, but do so with a small example so we can reproduce the crash.
See for example this or this answer on previously posted issues.
Run this command:
With Flutter:
$ flutter pub add cached_network_image
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
cached_network_image: ^3.2.3
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:cached_network_image/cached_network_image.dart';
import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/material.dart';
import 'package:baseflow_plugin_template/baseflow_plugin_template.dart';
import 'package:flutter_blurhash/flutter_blurhash.dart';
void main() {
CachedNetworkImage.logLevel = CacheManagerLogLevel.debug;
runApp(BaseflowPluginExample(
pluginName: 'CachedNetworkImage',
githubURL: 'https://github.com/Baseflow/flutter_cache_manager',
pubDevURL: 'https://pub.dev/packages/flutter_cache_manager',
pages: [
BasicContent.createPage(),
ListContent.createPage(),
GridContent.createPage(),
],
));
}
/// Demonstrates a [StatelessWidget] containing [CachedNetworkImage]
class BasicContent extends StatelessWidget {
const BasicContent({Key? key}) : super(key: key);
static ExamplePage createPage() {
return ExamplePage(Icons.image, (context) => const BasicContent());
}
@override
Widget build(BuildContext context) {
return SingleChildScrollView(
child: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
_blurHashImage(),
_sizedContainer(
const Image(
image: CachedNetworkImageProvider(
'https://via.placeholder.com/350x150',
),
),
),
_sizedContainer(
CachedNetworkImage(
progressIndicatorBuilder: (context, url, progress) => Center(
child: CircularProgressIndicator(
value: progress.progress,
),
),
imageUrl:
'https://images.unsplash.com/photo-1532264523420-881a47db012d?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9',
),
),
_sizedContainer(
CachedNetworkImage(
placeholder: (context, url) =>
const CircularProgressIndicator(),
imageUrl: 'https://via.placeholder.com/200x150',
),
),
_sizedContainer(
CachedNetworkImage(
imageUrl: 'https://via.placeholder.com/300x150',
imageBuilder: (context, imageProvider) => Container(
decoration: BoxDecoration(
image: DecorationImage(
image: imageProvider,
fit: BoxFit.cover,
colorFilter: const ColorFilter.mode(
Colors.red,
BlendMode.colorBurn,
),
),
),
),
placeholder: (context, url) =>
const CircularProgressIndicator(),
errorWidget: (context, url, error) => const Icon(Icons.error),
),
),
CachedNetworkImage(
imageUrl: 'https://via.placeholder.com/300x300',
placeholder: (context, url) => const CircleAvatar(
backgroundColor: Colors.amber,
radius: 150,
),
imageBuilder: (context, image) => CircleAvatar(
backgroundImage: image,
radius: 150,
),
),
_sizedContainer(
CachedNetworkImage(
imageUrl: 'https://notAvalid.uri',
placeholder: (context, url) =>
const CircularProgressIndicator(),
errorWidget: (context, url, error) => const Icon(Icons.error),
),
),
_sizedContainer(
CachedNetworkImage(
imageUrl: 'not a uri at all',
placeholder: (context, url) =>
const CircularProgressIndicator(),
errorWidget: (context, url, error) => const Icon(Icons.error),
),
),
_sizedContainer(
CachedNetworkImage(
maxHeightDiskCache: 10,
imageUrl: 'https://via.placeholder.com/350x200',
placeholder: (context, url) =>
const CircularProgressIndicator(),
errorWidget: (context, url, error) => const Icon(Icons.error),
fadeOutDuration: const Duration(seconds: 1),
fadeInDuration: const Duration(seconds: 3),
),
),
],
),
),
);
}
Widget _blurHashImage() {
return SizedBox(
width: double.infinity,
child: CachedNetworkImage(
placeholder: (context, url) => const AspectRatio(
aspectRatio: 1.6,
child: BlurHash(hash: 'LEHV6nWB2yk8pyo0adR*.7kCMdnj'),
),
imageUrl: 'https://blurha.sh/assets/images/img1.jpg',
fit: BoxFit.cover,
),
);
}
Widget _sizedContainer(Widget child) {
return SizedBox(
width: 300.0,
height: 150.0,
child: Center(child: child),
);
}
}
/// Demonstrates a [ListView] containing [CachedNetworkImage]
class ListContent extends StatelessWidget {
const ListContent({Key? key}) : super(key: key);
static ExamplePage createPage() {
return ExamplePage(Icons.list, (context) => const ListContent());
}
@override
Widget build(BuildContext context) {
return ListView.builder(
itemBuilder: (BuildContext context, int index) => Card(
child: Column(
children: <Widget>[
CachedNetworkImage(
imageUrl: 'https://loremflickr.com/320/240/music?lock=$index',
placeholder: (BuildContext context, String url) => Container(
width: 320,
height: 240,
color: Colors.purple,
),
),
],
),
),
itemCount: 250,
);
}
}
/// Demonstrates a [GridView] containing [CachedNetworkImage]
class GridContent extends StatelessWidget {
const GridContent({Key? key}) : super(key: key);
static ExamplePage createPage() {
return ExamplePage(Icons.grid_on, (context) => const GridContent());
}
@override
Widget build(BuildContext context) {
return GridView.builder(
itemCount: 250,
gridDelegate:
const SliverGridDelegateWithFixedCrossAxisCount(crossAxisCount: 3),
itemBuilder: (BuildContext context, int index) => CachedNetworkImage(
imageUrl: 'https://loremflickr.com/100/100/music?lock=$index',
placeholder: _loader,
errorWidget: _error,
),
);
}
Widget _loader(BuildContext context, String url) {
return const Center(
child: CircularProgressIndicator(),
);
}
Widget _error(BuildContext context, String url, dynamic error) {
return const Center(child: Icon(Icons.error));
}
}
Download Details:
Author: Baseflow
Source Code: https://github.com/Baseflow/flutter_cached_network_image
next-boost adds a cache layer to your SSR (Server-Side Rendering) applications. It was built originally for Next.js and should work with any Node.js http.Server based application.
next-boost achieves great performance by rendering webpages on worker_threads while serving the cached pages on the main thread.
If you are familiar with Next.js, next-boost can be considered an implementation of Incremental Static Regeneration that works with getServerSideProps. It's not meant to be used with getStaticProps, for which Next.js does the caching for you.
$ yarn add @next-boost/next-boost
$ yarn add @next-boost/redis-cache # using load-balancer and cluster
$ yarn add @next-boost/hybrid-disk-cache # simple site with disk cache
Features:
- Drop-in replacement for next start, with SSR rendered on worker_threads.
- See .next-boost.redis.js for a sample Redis config (load-balancer/cluster setups) and .next-boost.hdc.js for a sample disk-cache config (simple sites).
Using the next-boost cli with Next.js: after installing the package, just change the start script from next start to next-boost. All of next start's command line arguments, like -p for specifying the port, are compatible.
"scripts": {
...
"start": "next-boost", // previously `next start`
...
},
There's an example under examples/nodejs, which works with a plain http.Server.
To use it with express.js and next.js, please check examples/with-express.
By using worker_threads, the CPU-heavy SSR rendering will not block the main process from serving the cache.
Here is a comparison using ApacheBench on a blog post fetched from a database. The HTML is prerendered and the db operation takes around 10~20ms. The page takes around 200ms for Next.js to render.
$ /usr/local/bin/ab -n 200 -c 8 http://127.0.0.1:3000/blog/posts/2020/3/postname
Not a scientific benchmark, but the improvements are visibly huge.
with next start (data fetched with getServerSideProps):
Document Length: 76424 bytes
Concurrency Level: 8
Time taken for tests: 41.855 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 15325600 bytes
HTML transferred: 15284800 bytes
Requests per second: 4.78 [#/sec] (mean)
Time per request: 1674.185 [ms] (mean)
Time per request: 209.273 [ms] (mean, across all concurrent requests)
Transfer rate: 357.58 [Kbytes/sec] received
with the drop-in next-boost cli:
Document Length: 78557 bytes
Concurrency Level: 8
Time taken for tests: 0.149 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 15747600 bytes
HTML transferred: 15711400 bytes
Requests per second: 1340.48 [#/sec] (mean)
Time per request: 5.968 [ms] (mean)
Time per request: 0.746 [ms] (mean, across all concurrent requests)
Transfer rate: 103073.16 [Kbytes/sec] received
It even outperforms Next.js's statically generated pages (getStaticProps), handling 2~2.5x the requests per second in my environment.
next-boost implements a server-side cache in the manner of stale-while-revalidate. When an expired (stale) page is accessed, the cache will be served and, at the same time, a background process will fetch the latest version (revalidate) of that page and save it to the cache.
The following config will cache URIs matching ^/blog.*. Only pages matching the rules will be handled by next-boost, and there are no exclude rules.
module.exports = {
rules: [{ regex: '^/blog.*', ttl: 300 }],
}
There are 2 parameters to control the behavior of the cache:
- ttl (time-to-live): after ttl seconds, the cache will be revalidated; a cached page's ttl is updated when the page is revalidated.
- tbd (time-before-deletion): when a page is not hit again within ttl + tbd seconds, it will be completely removed from the cache.
The config above only caches pages whose URLs start with /blog.
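Putting both parameters together, a config like the following (the values here are illustrative) revalidates /blog pages every 300 seconds and drops them after a further 3600 seconds without hits:
// in .next-boost.js
module.exports = {
  cache: { ttl: 300, tbd: 3600 },
  rules: [{ regex: '^/blog.*', ttl: 300 }],
}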
By sending a GET with the header x-next-boost:update to the URL, the cache will be revalidated. And if the page doesn't exist anymore, the cache will be deleted.
$ curl -H x-next-boost:update https://the_server_name.com/path_a
If you want to delete multiple pages at once, you can run SQL on the cache directly:
sqlite3 /cache_path/cache.db "update cache set ttl=0 where key like '%/url/a%';"
This will force all URLs containing /url/a to be revalidated the next time they are accessed.
Deleting cache_path will remove all the caches.
By default, each page with a different URL will be cached separately. But in some cases you would like /path_a?utm_source=twitter to be served with the same contents as /path_a. paramFilter is for filtering the query parameters.
// in .next-boost.js
{
...
paramFilter: (p) => p !== 'utm_source'
}
By default, the URL will be used as the key for cached pages. If you want to serve pages from different domains or by different user-agents, you can use this function to customize the cache key.
Note: the cacheKey function must return a string, otherwise your server will crash.
// in .next-boost.js
{
...
cacheKey: (req) => (req.headers.host || '') + ':' + req.url
}
Alternatively, you can provide a function instead of an array of rules inside your config.
// in .next-boost.js
{
...
rules: (req) => {
if (req.url.startsWith('/blog')) {
return 300
}
}
}
The function should return a valid ttl for the request. If the function returns 0 or a falsy value, the request will not be cached.
The power that comes from this method is that you can decide more dynamically whether or not a request is cached.
For example, you can automatically ignore all requests from authenticated users based on the header:
// in .next-boost.js
{
...
rules: (req) => {
if (req.headers.authorization) {
return false
}
return 10 // cache all other requests for 10 seconds
}
}
You can also implement more complex rules more easily than through regex. For example, you may want a different ttl for each pagination page.
// in .next-boost.js
{
...
rules: (req) => {
const [, p1] = req.url.split('?', 2)
const params = new URLSearchParams(p1)
return {
1: 5000,
2: 4000,
3: 3000,
4: 2000
}[params.get('page')] || 1000
}
}
While you would otherwise need to write a complex regex rule, or potentially several rules, it is easy to do through JS logic. And if you prefer writing regex but wish to leverage JS logic, you can always regex-match inside a rules handler.
If available, .next-boost.js at the project root will be used. If you use next-boost programmatically, the filename can be changed in the options you pass to CachedHandler.
Tip: if you are using the next-boost cli with Next.js, you may want to use the config file.
And here's an example .next-boost.sample.js in the repo.
interface HandlerConfig {
filename?: string
quiet?: boolean
cache?: {
ttl?: number
tbd?: number
path?: string
}
rules?: Array<URLCacheRule> | URLCacheRuleResolver
paramFilter?: ParamFilter
cacheKey?: CacheKeyBuilder
}
interface URLCacheRule {
regex: string
ttl: number
}
type URLCacheRuleResolver = (req: IncomingMessage) => number
type ParamFilter = (param: string) => boolean
type CacheKeyBuilder = (req: IncomingMessage) => string
Logging is enabled by default. If you use next-boost programmatically, you can disable logs by passing the quiet boolean flag as an option to CachedHandler.
...
const cached = await CachedHandler(args, { quiet: true });
...
There's also a --quiet flag if you are using the command line.
Limitations: the cache of next-boost is limited; until a URL has been hit on every backend server, it can still miss the cache, so use a reverse proxy with cache support (nginx, varnish etc.) for that. Only GET and HEAD requests are cached. worker_threads is used, and it is a Node.js 12+ feature.
next-boost works as an in-place replacement for next start by using Next.js's custom server feature.
On the linked page above, you can see the following notice:
Before deciding to use a custom server please keep in mind that it should only be used when the integrated router of Next.js can't meet your app requirements. A custom server will remove important performance optimizations, like serverless functions and Automatic Static Optimization.
next-boost is meant to be used on cloud VPSs or containers, so serverless functions are not an issue here. As for Automatic Static Optimization, because we are not doing any app.render here, it still works, as well as always.
Here's the article about when not to use SQLite. And for next-boost's main purpose, super fast SSR on low-cost VPSs, as far as I know it is the best choice.
Author: Next-boost
Source Code: https://github.com/next-boost/next-boost
License: MIT
In this quick tip, we’ll talk about what caching is and how we can use it in PHP.
It’s crucial to focus on performance when developing PHP apps. Web apps can have thousands or even millions of users, which can lead to slow performance and availability issues. Caching is invaluable in this respect, as it can help avoid performance pitfalls.
Caching is a way to store frequently accessed data in a temporary storage location to reduce the number of times the data needs to be retrieved from its original storage location. This can greatly improve the performance of a website or application, as accessing data from cache is generally much faster than accessing it from its source.
PHP provides several ways to implement caching. Let’s have a look at each of them.
Output buffering is a technique in PHP that allows us to store the output of a PHP script in a buffer, rather than sending it directly to the browser. This allows us to modify the output or perform other actions on it before it’s displayed to the user.
To start an output buffer, we can use the ob_start() function. This function will turn output buffering on and begin capturing all output sent by the script. The output can then be stored in a variable using the ob_get_contents() function. Finally, the output buffer can be ended and the output sent to the browser using the ob_end_flush() function, or it can be discarded using the ob_end_clean() function.
Here’s an example of how output buffering works:
<?php
ob_start(); // Start the output buffer
echo 'This output will be stored in the buffer';
$output = ob_get_contents(); // Get the contents of the output buffer
ob_end_clean(); // End the output buffer and discard the contents
echo 'This output will be sent to the browser';
In this particular example, the string 'This output will be sent to the browser' will be echoed only once, since we're discarding the contents of the output buffer that contains the first echo instruction.
Output buffering can be used as cache, as it allows us to store the output of a PHP script in memory, rather than generating it every time the script is accessed.
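For instance, a common pattern (not covered in the text above; the cache file path and TTL here are illustrative) is to combine output buffering with a file on disk, so the expensive page generation runs only when the cached copy is stale:
<?php
$cacheFile = '/tmp/page_cache.html'; // hypothetical cache location
$ttl = 300;                          // cache lifetime in seconds

// Serve the cached copy if it is still fresh
if (file_exists($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
    readfile($cacheFile);
    exit;
}

ob_start(); // capture everything echoed below
echo 'Expensive page content generated at ' . date('H:i:s');
$output = ob_get_contents();
file_put_contents($cacheFile, $output); // store for later requests
ob_end_flush(); // send the freshly generated page to the browser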
Caching Functions
PHP provides several functions specifically for caching data, including apc_store(), memcache_set(), and xcache_set(). These functions can be used to store data in memory, which can be accessed much faster than data stored on a hard drive.
The apc_store() function is part of the Alternative PHP Cache (APC) extension, which provides an opcode cache for PHP. (Opcode cache is a performance optimization technique for PHP that caches the compiled bytecode of PHP scripts in memory, rather than re-parsing and re-compiling the source code on each request.) It stores a value in the APC cache with a specified key and expiration time.
Here’s an example of how to use the apc_store()
function to cache a value in memory:
<?php
$value = 'This is the value to cache';
// Store the value in cache for one hour
apc_store('cache_key', $value, 3600);
To retrieve the cached value, we can use the apc_fetch() function:
<?php
$cachedValue = apc_fetch('cache_key');
if ($cachedValue) {
// Use the cached value
echo $cachedValue;
} else {
// Generate the value and store it in cache
$value = 'This is the value to cache';
apc_store('cache_key', $value, 3600);
echo $value;
}
More information on apc_store() can be found here.
The memcache_set() function is part of the Memcache extension, which allows you to use a Memcache server as a cache for PHP. It stores a value in the Memcache server with a specified key and expiration time. More information on memcache_set() can be found here.
The xcache_set() function is part of the XCache extension, which provides a PHP opcode cache and data cache. It stores a value in the XCache cache with a specified key and expiration time. More information on xcache_set() can be found here.
Another option for caching in PHP is to use a database to store cached data. This may seem counterintuitive, as the primary goal of caching is to reduce the number of database accesses and improve performance. However, there are some cases where caching data in a database might be useful.
One such case is if you need to cache large amounts of data that might not fit in memory. Additionally, caching data in a database can be useful if you need to access the cached data from multiple servers, as it allows for easy sharing of cached data between servers.
To cache data in a database, you can use a table with at least two columns: one for the cache key, and one for the cached data. You can then use a SELECT query to check if the cache key exists in the table, and an INSERT or UPDATE query to store the data in the table.
Here’s an example of how to cache data in a MySQL database:
<?php
$db = new mysqli('localhost', 'username', 'password', 'database');
$cacheKey = 'cache_key';
$cachedValue = 'This is the value to cache';
// Check if the cache key exists in the table
$result = $db->query("SELECT * FROM cache WHERE cache_key = '$cacheKey'");
if ($result->num_rows > 0) {
// Update the cached value
$db->query("UPDATE cache SET value = '$cachedValue' WHERE cache_key = '$cacheKey'");
} else {
// Insert a new cache row
$db->query("INSERT INTO cache (cache_key, value) VALUES ('$cacheKey', '$cachedValue')");
}
// Retrieve the cached value
$result = $db->query("SELECT * FROM cache WHERE cache_key = '$cacheKey'");
$row = $result->fetch_assoc();
$cachedValue = $row['value'];
echo $cachedValue;
This example demonstrates how to check if a cache key exists in the cache table, and if it does, how to update the cached value. If the cache key doesn’t exist, a new row is inserted into the table with the cache key and value. The cached value is then retrieved from the table and displayed to the user.
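Note that the example above interpolates values straight into the SQL string, which is vulnerable to SQL injection. In real code you'd use prepared statements; a minimal sketch of the same lookup (same assumed table layout) looks like this:
<?php
$db = new mysqli('localhost', 'username', 'password', 'database');
$cacheKey = 'cache_key';

// Parameterized query: the key is bound, never concatenated
$stmt = $db->prepare('SELECT value FROM cache WHERE cache_key = ?');
$stmt->bind_param('s', $cacheKey);
$stmt->execute();
$result = $stmt->get_result();

if ($row = $result->fetch_assoc()) {
    echo $row['value'];
}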
Caching is a very powerful technique for improving the performance of a PHP website or application. PHP provides several options for implementing caching, including output buffering, caching functions, and caching with a database. By storing frequently accessed data in a temporary location, we can reduce the number of times the data needs to be retrieved from its source and improve the overall speed and performance of a site.
Original article source at: https://www.sitepoint.com/
Usage example with PDFView to display cached pdf files.
import 'package:flutter/material.dart';
import 'package:flutter_pdfview/flutter_pdfview.dart';
import 'package:network_file_cached/network_file_cached.dart';
void main() async {
await NetworkFileCached.init(
expired: const Duration(minutes: 5),
);
runApp(
const MaterialApp(
home: FileCache(),
),
);
}
class FileCache extends StatefulWidget {
const FileCache({super.key});
@override
State<FileCache> createState() => _FileCacheState();
}
class _FileCacheState extends State<FileCache> {
String uri = 'https://s.id/1zvyC';
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Plugin example apps'),
),
body: FutureBuilder(
future: NetworkFileCached.downloadFile(uri),
builder: (context, snapshoot) {
if (snapshoot.hasData) {
return PDFView(
pdfData: snapshoot.data?.readAsBytesSync(),
);
}
if (snapshoot.hasError) {
return Text(snapshoot.error.toString());
}
return const Text('LOADING...');
}));
}
}
Run this command:
With Flutter:
$ flutter pub add network_file_cached
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
network_file_cached: ^0.0.1
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:network_file_cached/network_file_cached.dart';
Download Details:
Author: jabardigitalservice
Source Code: https://github.com/jabardigitalservice/flutter-packages/tree/main/network_file_cached
A simple but flexible cache, written in Swift for iOS 13+ and watchOS 6 apps.
Breaking Changes
Carlos 1.0.0 has been migrated from the PiedPiper dependency to Combine, hence its minimum supported platform versions are equal to Combine's minimum supported platform versions. See the releases page for more information.
Carlos is a small set of classes and functions to realize custom, flexible and powerful cache layers in your application.
With a Functional Programming vocabulary, Carlos makes for a monoidal cache system. You can check the best explanation of how that is realized here or in this video, thanks to @bkase for the slides.
By default, Carlos ships with an in-memory cache, a disk cache, a simple network fetcher and a NSUserDefaults cache (the disk cache is inspired by HanekeSwift).
With Carlos you can combine these components into custom cache pipelines without having to worry about details such as threading; Carlos can take care of that for you.
Add Carlos to your project through Xcode or add the following line to your package dependencies:
.package("https://github.com/spring-media/Carlos", from: "1.0.0")
Carlos is available through CocoaPods. To install it, simply add the following line to your Podfile:
pod "Carlos", :git => "https://github.com/spring-media/Carlos"
Carthage is also supported.
To run the example project, clone the repo.
let cache = MemoryCacheLevel<String, NSData>().compose(DiskCacheLevel())
This line will generate a cache that takes String keys and returns NSData values. Setting a value for a given key on this cache will set it for both the levels. Getting a value for a given key on this cache will first try getting it on the memory level, and if it cannot find one, will ask the disk level. In case both levels don't have a value, the request will fail. In case the disk level can fetch a value, this will also be set on the memory level so that the next fetch will be faster.
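As an illustrative sketch (not from the original README, and assuming set returns a Combine publisher in Carlos 1.0, as get does), setting a value on the composed cache and fetching it back might look like this:

import Carlos
import Combine
import Foundation

let composed = MemoryCacheLevel<String, NSData>().compose(DiskCacheLevel())
let data = "hello".data(using: .utf8)! as NSData

// set propagates to both levels; the subsequent get is served from memory
let cancellable = composed.set(data, forKey: "greeting")
    .flatMap { _ in composed.get("greeting") }
    .sink(
        receiveCompletion: { _ in },
        receiveValue: { value in print("Fetched \(value.length) bytes") }
    )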
Carlos comes with a CacheProvider class so that standard caches are easily accessible.
- CacheProvider.dataCache() to create a cache that takes URL keys and returns NSData values
- CacheProvider.imageCache() to create a cache that takes URL keys and returns UIImage values
- CacheProvider.JSONCache() to create a cache that takes URL keys and returns AnyObject values (that should then be safely cast to arrays or dictionaries depending on your application)
The above methods always create new instances (so calling CacheProvider.imageCache() twice doesn't return the same instance, even though the disk level will be effectively shared because it will use the same folder on disk, but this is a side-effect and should not be relied upon) and you should take care of retaining the result in your application layer. If you want to always get the same instance, you can use the following accessors instead:
- CacheProvider.sharedDataCache to retrieve a shared instance of a data cache
- CacheProvider.sharedImageCache to retrieve a shared instance of an image cache
- CacheProvider.sharedJSONCache to retrieve a shared instance of a JSON cache
To fetch a value from a cache, use the get method.
cache.get("key")
.sink(
receiveCompletion: { completion in
if case let .failure(error) = completion {
print("An error occurred :( \(error)")
}
},
receiveValue: { value in
print("I found \(value)!")
}
)
A request can also be canceled with the cancel() method, and you can be notified of this event by calling onCancel on a given request:
let cancellable = cache.get(key)
.handleEvents(receiveCancel: {
print("Looks like somebody canceled this request!")
})
.sink(...)
[... somewhere else]
cancellable.cancel()
This cache is not very useful, though. It will never actively fetch values, just store them for later use. Let's try to make it more interesting:
let cache = MemoryCacheLevel()
.compose(DiskCacheLevel())
.compose(NetworkFetcher())
This will create a cache level that takes URL keys and stores NSData values (the type is inferred from the NetworkFetcher hard-requirement of URL keys and NSData values, while MemoryCacheLevel and DiskCacheLevel are much more flexible as described later).
Key transformations are meant to make it possible to plug cache levels in whatever cache you're building.
Let's see how they work:
// Define your custom ErrorType values
enum URLTransformationError: Error {
case invalidURLString
}
let transformedCache = NetworkFetcher().transformKeys(
OneWayTransformationBox(
transform: {
Future { promise in
if let url = URL(string: $0) {
promise(.success(url))
} else {
promise(.failure(URLTransformationError.invalidURLString))
}
}.eraseToAnyPublisher()
}
)
)
With the line above, we're saying that all the keys coming into the NetworkFetcher level have to be transformed to URL values first. We can now plug this cache into a previously defined cache level that takes String keys:
let cache = MemoryCacheLevel<String, NSData>().compose(transformedCache)
If this doesn't look very safe (one could always pass string garbage as a key and it won't magically translate to a URL, thus causing the NetworkFetcher to silently fail), we can still use a domain specific structure as a key, assuming it contains both String and URL values:
struct Image {
let identifier: String
let URL: Foundation.URL
}
let imageToString = OneWayTransformationBox(transform: { (image: Image) -> AnyPublisher<String, Error> in
Just(image.identifier).eraseToAnyPublisher()
})
let imageToURL = OneWayTransformationBox(transform: { (image: Image) -> AnyPublisher<URL, Error> in
Just(image.URL).eraseToAnyPublisher()
})
let memoryLevel = MemoryCacheLevel<String, NSData>().transformKeys(imageToString)
let diskLevel = DiskCacheLevel<String, NSData>().transformKeys(imageToString)
let networkLevel = NetworkFetcher().transformKeys(imageToURL)
let cache = memoryLevel.compose(diskLevel).compose(networkLevel)
Now we can perform safe requests like this:
let image = Image(identifier: "550e8400-e29b-41d4-a716-446655440000", URL: URL(string: "http://goo.gl/KcGz8T")!)
cache.get(image).sink { value in
print("Found \(value)!")
}
Since Carlos 0.5 you can also apply conditions to OneWayTransformers used for key transformations. Just call the conditioned function on the transformer and pass your condition. The condition can also be asynchronous and has to return an AnyPublisher<Bool, Error>, having the chance to return a specific error for the failure of the transformation.
let transformer = OneWayTransformationBox<String, URL>(transform: { key in
Future { promise in
if let value = URL(string: key) {
promise(.success(value))
} else {
promise(.failure(MyError.stringIsNotURL))
}
}.eraseToAnyPublisher()
}).conditioned { key in
Just(key)
.filter { $0.range(of: "http") != nil }
.eraseToAnyPublisher()
}
let cache = CacheProvider.imageCache().transformKeys(transformer)
That's not all, though.
What if our disk cache only stores Data, but we want our memory cache to conveniently store UIImage instances instead?
Value transformers let you have a cache that (let's say) stores Data and mutate it to a cache that stores UIImage values. Let's see how:
let dataTransformer = TwoWayTransformationBox(transform: { (image: UIImage) -> AnyPublisher<Data, Error> in
Just(image.pngData()!).eraseToAnyPublisher()
}, inverseTransform: { (data: Data) -> AnyPublisher<UIImage, Error> in
Just(UIImage(data: data)!).eraseToAnyPublisher()
})
let memoryLevel = MemoryCacheLevel<String, UIImage>().transformKeys(imageToString).transformValues(dataTransformer)
This memory level can now replace the one we had before, with the difference that it will internally store UIImage values!
Keep in mind that, as with key transformations, if your transformation closure fails (either the forward transformation or the inverse transformation), the cache level will be skipped, as if the fetch would fail. The same considerations apply for set calls.
Carlos comes with some value transformers out of the box, for example:
- JSONTransformer to serialize NSData instances into JSON
- ImageTransformer to serialize NSData instances into UIImage values (not available on the Mac OS X framework)
- StringTransformer to serialize NSData instances into String values with a given encoding
Carlos also extends some system formatters (DateFormatter, NumberFormatter, MKDistanceFormatter) so that you can use customized instances depending on your needs.
As of Carlos 0.4, it's possible to transform values coming out of Fetcher instances with just a OneWayTransformer (as opposed to the TwoWayTransformer required for normal CacheLevel instances; this is because the Fetcher protocol doesn't require set). This means you can easily chain Fetchers that get a JSON from the internet and transform their output to a model object (for example a struct) into a complex cache pipeline without having to create a dummy inverse transformation just to satisfy the requirements of the TwoWayTransformer protocol.
As of Carlos 0.5, all transformers natively support asynchronous computation, so you can have expensive transformations in your custom transformers without blocking other operations. In fact, the ImageTransformer that comes out of the box processes image transformations on a background queue.
As of Carlos 0.5 you can also apply conditions to TwoWayTransformers used for value transformations. Just call the conditioned function on the transformer and pass your conditions (one for the forward transformation, one for the inverse transformation). The conditions can also be asynchronous and have to return an AnyPublisher<Bool, Error>, having the chance to return a specific error for the failure of the transformation.
let transformer = JSONTransformer().conditioned({ input in
Just(myCondition).eraseToAnyPublisher()
}, inverseCondition: { input in
Just(myCondition).eraseToAnyPublisher()
})
let cache = CacheProvider.dataCache().transformValues(transformer)
In some cases your cache level could return the right value, but in a sub-optimal format. For example, you would like to sanitize the output you're getting from the Cache as a whole, independently of the exact layer that returned it.
For these cases, the postProcess function introduced with Carlos 0.4 can come in handy. The function is available as a protocol extension of the CacheLevel protocol.
The postProcess function takes a CacheLevel and a OneWayTransformer with TypeIn == TypeOut as parameters and outputs a decorated BasicCache with the post-processing step embedded in.
// Let's create a simple "to uppercase" transformer
let transformer = OneWayTransformationBox<NSString, String>(transform: { Just($0.uppercased() as String).eraseToAnyPublisher() })
// Our memory cache
let memoryCache = MemoryCacheLevel<String, NSString>()
// Our decorated cache
let transformedCache = memoryCache.postProcess(transformer)
// Lowercase value set on the memory layer
memoryCache.set("test String", forKey: "key")
// We get the lowercase value from the undecorated memory layer
memoryCache.get("key").sink { value in
let x = value
}
// We get the uppercase value from the decorated cache, though
transformedCache.get("key").sink { value in
let x = value
}
Since Carlos 0.5 you can also apply conditions to OneWayTransformers used for post-processing transformations. Just call the conditioned function on the transformer and pass your condition. The condition can also be asynchronous and has to return an AnyPublisher<Bool, Error>, having the chance to return a specific error for the failure of the transformation. Keep in mind that the condition will actually take the output of the cache as the input, not the key used to fetch this value! If you want to apply conditions based on the key, use conditionedPostProcess instead, but keep in mind this doesn't support using OneWayTransformer instances yet.
let processor = OneWayTransformationBox<NSData, NSData>(transform: { value in
Future { promise in
if let value = String(data: value as Data, encoding: .utf8)?.uppercased().data(using: .utf8) as NSData? {
promise(.success(value))
} else {
promise(.failure(FetchError.conditionNotSatisfied))
}
}
}).conditioned { value in
Just(value.length < 1000).eraseToAnyPublisher()
}
let cache = CacheProvider.dataCache().postProcess(processor)
Extending the case for simple output post-processing, you can also apply conditional transformations based on the key used to fetch the value.
For these cases, the conditionedPostProcess
function introduced with Carlos 0.6
could come in handy. The function is available as a protocol extension of the CacheLevel
protocol.
The conditionedPostProcess
function takes a CacheLevel
and a conditioned transformer conforming to ConditionedOneWayTransformer
as parameters and outputs a decorated CacheLevel
with the conditional post-processing step embedded in.
// Our memory cache
let memoryCache = MemoryCacheLevel<String, NSString>()
// Our decorated cache
let transformedCache = memoryCache.conditionedPostProcess(ConditionedOneWayTransformationBox(conditionalTransformClosure: { (key, value) in
if key == "some sentinel value" {
return Just(value.uppercased as NSString).eraseToAnyPublisher()
} else {
return Just(value).eraseToAnyPublisher()
}
}))
// Lowercase value set on the memory layer
memoryCache.set("test String", forKey: "some sentinel value")
// We get the lowercase value from the undecorated memory layer
memoryCache.get("some sentinel value").sink { value in
let x = value
}
// We get the uppercase value from the decorated cache, though
transformedCache.get("some sentinel value").sink { value in
let x = value
}
Extending the case for simple value transformation, you can also apply conditional transformations based on the key used to fetch or set the value.
For these cases, the conditionedValueTransformation
function introduced with Carlos 0.6
could come in handy. The function is available as a protocol extension of the CacheLevel
protocol.
The conditionedValueTransformation
function takes a CacheLevel
and a conditioned transformer conforming to ConditionedTwoWayTransformer
as parameters and outputs a decorated CacheLevel
with a modified OutputType
(equal to the transformer's TypeOut
, as in the normal value transformation case) with the conditional value transformation step embedded in.
// Our memory cache
let memoryCache = MemoryCacheLevel<String, NSString>()
// Our decorated cache
let transformedCache = memoryCache.conditionedValueTransformation(ConditionedTwoWayTransformationBox(conditionalTransformClosure: { (key, value) in
if key == "some sentinel value" {
return Just(1).eraseToAnyPublisher()
} else {
return Just(0).eraseToAnyPublisher()
}
}, conditionalInverseTransformClosure: { (key, value) in
if value > 0 {
return Just("Positive").eraseToAnyPublisher()
} else {
return Just("Null or negative").eraseToAnyPublisher()
}
}))
// Value set on the memory layer
memoryCache.set("test String", forKey: "some sentinel value")
// We get the same value from the undecorated memory layer
memoryCache.get("some sentinel value").sink { value in
let x = value
}
// We get 1 from the decorated cache, though
transformedCache.get("some sentinel value").sink { value in
let x = value
}
// We set "Positive" on the decorated cache
transformedCache.set(5, forKey: "test")
As of Carlos 0.4
, it's possible to compose multiple OneWayTransformer
objects. This way, one can create several transformer modules to build a small library and then combine them as is most convenient for the application.
You can compose the transformers in the same way you do with normal CacheLevel
s: with the compose
protocol extension:
let firstTransformer = ImageTransformer() // NSData -> UIImage
let secondTransformer = ImageTransformer().invert() // Trivial UIImage -> NSData
let identityTransformer = firstTransformer.compose(secondTransformer)
The same approach can be applied to TwoWayTransformer
objects (that by the way are already OneWayTransformer
as well).
Many transformer modules will be provided by default with Carlos
.
When you have a working cache, but some of your levels are expensive (say a Network fetcher or a database fetcher), you may want to pool requests in a way that multiple requests for the same key, coming together before one of them completes, are grouped so that when one completes all of the other complete as well without having to actually perform the expensive operation multiple times.
This functionality comes with Carlos
.
let cache = (memoryLevel.compose(diskLevel).compose(networkLevel)).pooled()
Keep in mind that the key must conform to the Hashable
protocol for the pooled
function to work:
extension Image: Hashable {
var hashValue: Int {
return identifier.hashValue
}
}
extension Image: Equatable {}
func ==(lhs: Image, rhs: Image) -> Bool {
return lhs.identifier == rhs.identifier && lhs.URL == rhs.URL
}
Now we can execute multiple fetches for the same Image
value and be sure that only one network request will be started.
Since Carlos 0.7
you can pass a list of keys to your CacheLevel
through batchGetSome
. This returns an AnyPublisher that succeeds when all the requests for the specified keys complete, whether or not they individually succeed. You will only get the successful values in the success callback, though.
Since Carlos 0.9
you can transform your CacheLevel
into one that takes a list of keys through allBatch
. Calling get
on such a CacheLevel
returns an AnyPublisher
that succeeds only when the requests for all of the specified keys succeed, and fails as soon as one of the requests for the specified keys fails. If you cancel the AnyPublisher
returned by this CacheLevel
, all of the pending requests are canceled, too.
An example of the usage:
let cache = MemoryCacheLevel<String, Int>()
for iter in 0..<99 {
cache.set(iter, forKey: "key_\(iter)")
}
let keysToBatch = (0..<100).map { "key_\($0)" }
cache.batchGetSome(keysToBatch).sink(
    receiveCompletion: { completion in
        if case let .failure(error) = completion {
            print("Failed because \(error)")
        }
    },
    receiveValue: { values in
        print("Got \(values.count) values in total")
    }
)
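For comparison, a sketch of the same keys requested through allBatch(), which fails as a whole on the missing key:
cache.allBatch().get(keysToBatch).sink(
    receiveCompletion: { completion in
        if case let .failure(error) = completion {
            print("Failed because \(error)") // valueNotInCache for the missing key_99
        }
    },
    receiveValue: { values in
        print("Got \(values.count) values in total") // never reached in this example
    }
)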
In this case the allBatch().get
call would fail because there are only 99 keys set and the last request will make the whole batch fail, with a valueNotInCache
error. The batchGetSome().get
will succeed instead, printing Got 99 values in total
.
Since allBatch
returns a new CacheLevel
instance, it can be composed or transformed just like any other cache:
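The snippet for this example appears to have been lost from this copy of the article. As a hedged reconstruction of the idea, it could look something like this (limitConcurrentRequests(_:) is a placeholder name, not a confirmed Carlos API, so check the repository for the exact call):
// Sketch only: allBatch() turns a String-keyed cache into a [String]-keyed one,
// which can then be composed or transformed like any other CacheLevel.
// limitConcurrentRequests(_:) is a placeholder, not a confirmed Carlos API.
let cache = MemoryCacheLevel<String, Int>()
    .allBatch()
    .limitConcurrentRequests(3)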
In this case cache
is a cache that takes a sequence of String
keys and returns an AnyPublisher
of a list of Int
values, but is limited to 3 concurrent requests (see the next paragraph for more information on limiting concurrent requests).
Sometimes we may have levels that should only be queried under some conditions. Let's say we have a DatabaseLevel
that should only be triggered when users enable a given setting in the app that actually starts storing data in the database. We may want to avoid accessing the database if the setting is disabled in the first place.
let conditionedCache = cache.conditioned { key in
Just(appSettingIsEnabled).eraseToAnyPublisher()
}
The closure gets the key the cache was asked to fetch and has to return an AnyPublisher<Bool, Error>
object indicating whether the request can proceed or should skip the level, with the possibility to fail with a specific Error
to communicate the error to the caller.
At runtime, if the variable appSettingIsEnabled
is false
, the get
request will skip the level (or fail if this was the only or last level in the cache). If true
, the get
request will be executed.
If you have a complex scenario where, depending on the key or some other external condition, either one or another cache should be used, then the switchLevels
function could turn useful.
Usage:
let lane1 = MemoryCacheLevel<URL, NSData>() // The two lanes have to be equivalent (same key type, same value type).
let lane2 = CacheProvider.dataCache() // Keep in mind that you can always use key transformation or value transformations if two lanes don't match by default
let switched = switchLevels(lane1, lane2) { key in
if key.scheme == "http" {
return .cacheA
} else {
return .cacheB // The example is just meant to show how to return different lanes
}
}
Now depending on the scheme of the key URL, either the first lane or the second will be used.
If we store big objects in memory in our cache levels, we may want to be notified of memory warning events. This is where the listenToMemoryWarnings
and unsubscribeToMemoryWarnings
functions come in handy:
let token = cache.listenToMemoryWarnings()
and later
unsubscribeToMemoryWarnings(token)
With the first call, the cache level and all its composing levels will get a call to onMemoryWarning
when a memory warning comes.
With the second call, the behavior will stop.
Keep in mind that this functionality is not yet supported by the WatchOS 2 framework CarlosWatch.framework
.
In case you need to store the result of multiple Carlos
composition calls in a property, it may be troublesome to set the type of the property to BasicCache
as some calls return different types (e.g. PoolCache
). In this case, you can normalize
the cache level before assigning it to the property and it will be converted to a BasicCache
value.
import Carlos
class CacheManager {
let cache: BasicCache<URL, NSData>
init(injectedCache: BasicCache<URL, NSData>) {
self.cache = injectedCache
}
}
[...]
let manager = CacheManager(injectedCache: CacheProvider.dataCache().pooled()) // This won't compile
let manager = CacheManager(injectedCache: CacheProvider.dataCache().pooled().normalize()) // This will
As a tip, always use normalize
if you need to assign the result of multiple composition calls to a property. The call is a no-op if the value is already a BasicCache
, so there will be no performance loss in that case.
Creating custom levels is easy and encouraged (after all, there are multiple cache libraries already available if you only need memory, disk and network functionalities!).
Let's see how to do it:
class MyLevel: CacheLevel {
typealias KeyType = Int
typealias OutputType = Float
func get(_ key: KeyType) -> AnyPublisher<OutputType, Error> {
    Future { promise in
        // Perform the fetch and call promise with either a success or a failure
    }.eraseToAnyPublisher()
}
func set(_ value: OutputType, forKey key: KeyType) -> AnyPublisher<Void, Error> {
    Future { promise in
        // Store the value (db, memory, file, etc.) and call promise on completion
    }.eraseToAnyPublisher()
}
func clear() {
// Clear the stored values
}
func onMemoryWarning() {
// A memory warning event came. React appropriately
}
}
The above class conforms to the CacheLevel
protocol. First thing we need is to declare what key types we accept and what output types we return. In this example case, we have Int
keys and Float
output values.
There are 4 required methods to implement: get
, set
, clear
and onMemoryWarning
. This sample cache can now be pipelined to a list of other caches, transforming its keys or values if needed as we saw in the earlier paragraphs.
With Carlos 0.4
, the Fetcher
protocol was introduced to make it easier for users of the library to create custom fetchers that can be used as read-only levels in the cache. An example of a "Fetcher
in disguise" that has always been included in Carlos
is NetworkFetcher
: you can only use it to read from the network, not to write (set
, clear
and onMemoryWarning
were no-ops).
This is how easy it is now to implement your custom fetcher:
class CustomFetcher: Fetcher {
typealias KeyType = String
typealias OutputType = String
func get(_ key: KeyType) -> AnyPublisher<OutputType, Error> {
    return Just("Found a hardcoded value :)").setFailureType(to: Error.self).eraseToAnyPublisher()
}
}
You still need to declare what KeyType
and OutputType
your CacheLevel
deals with, of course, but then you're only required to implement get
. Less boilerplate for you!
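As a usage sketch building on the CustomFetcher above, the fetcher can sit at the end of a pipeline like any other level, so its values get cached in front of it:
// Reads hit the memory level first and fall back to the fetcher on a miss
let cachedFetcher = MemoryCacheLevel<String, String>().compose(CustomFetcher())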
Carlos comes with 3 cache levels out of the box:
MemoryCacheLevel
DiskCacheLevel
NetworkFetcher
and, as of the 0.5 release, a UserDefaultsCacheLevel.
MemoryCacheLevel is a volatile cache that internally stores its values in an NSCache
instance. The capacity can be specified through the initializer, and it supports clearing under memory pressure (if the level is subscribed to memory warning notifications). It accepts keys of any given type that conforms to the StringConvertible
protocol and can store values of any given type that conforms to the ExpensiveObject
protocol. Data
, NSData
, String
, NSString, UIImage
, URL
already conform to the latter protocol out of the box, while String
, NSString
and URL
conform to the StringConvertible
protocol. This cache level is thread-safe.
DiskCacheLevel is a persistent cache that asynchronously stores its values on disk. The capacity can be specified through the initializer, so that the disk size will never get too big. It accepts keys of any given type that conforms to the StringConvertible
protocol and can store values of any given type that conforms to the NSCoding
protocol. This cache level is thread-safe, and currently the only CacheLevel
that can fail when calling set
, with a DiskCacheLevelError.diskArchiveWriteFailed
error.
NetworkFetcher is a cache level that asynchronously fetches values over the network. It accepts URL
keys and returns NSData
values. This cache level is thread-safe.
NSUserDefaultsCacheLevel is a persistent cache that stores its values on a UserDefaults
persistent domain with a specific name. It accepts keys of any given type that conforms to the StringConvertible
protocol and can store values of any given type that conforms to the NSCoding
protocol. It has an internal soft cache used to avoid hitting the persistent storage too often, and can be cleared without affecting other values saved on the standardUserDefaults
or on other persistent domains. This cache level is thread-safe.
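As a quick sketch, a typical three-level pipeline built from these levels could look as follows (the DiskCacheLevel initializer is assumed here to have usable defaults; check its parameters for capacity and path options):
let memoryLevel = MemoryCacheLevel<URL, NSData>() // volatile, NSCache-backed
let diskLevel = DiskCacheLevel<URL, NSData>() // assumption: default initializer
let networkLevel = NetworkFetcher() // URL keys, NSData values
let cache = memoryLevel.compose(diskLevel).compose(networkLevel).pooled()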
When we decided how to handle logging in Carlos, we went for the most flexible approach that didn't require us to code a complete logging framework: the ability to plug in your own logging library. If you want the output of Carlos to be printed only when it exceeds a given level, if you want to silence it completely for release builds, if you want to route it to a file, or whatever else: just assign your logging handling closure to Carlos.Logger.output
:
Carlos.Logger.output = { message, level in
myLibrary.log(message) //Plug here your logging library
}
Carlos
is thoroughly tested so that the features it's designed to provide are safe to refactor and as bug-free as possible.
We use Quick and Nimble instead of XCTest
in order to have a good BDD test layout.
As of today, there are around 1000 tests for Carlos
(see the folder Tests
), and overall the test codebase is double the size of the production codebase.
Carlos
is under development and here you can see all the open issues. They are assigned to milestones so that you can have an idea of when a given feature will be shipped.
If you want to contribute to this repo, please open an issue or a pull request.
Using Carlos? Please let us know through a pull request, and we'll be happy to mention your app!
Vittorio Monaco, vittorio.monaco@weltn24.de, @vittoriom on Github, @Vittorio_Monaco on Twitter
Esad Hajdarevic, @esad
Carlos
internally uses:
The DiskCacheLevel class is inspired by Haneke. The source code has been heavily modified, but adapting the original file has proven valuable for Carlos
development.
Author: Spring-media
Source Code: https://github.com/spring-media/Carlos
License: MIT license
1676342160
Sometimes our application frequently calls the same method and fetches the data from the database. The output of these requests is the same at all times. It doesn't get changed or updated in the database. In this case, we can use caching to reduce the database calls and retrieve the data directly from the cache memory.
There are 3 types of cache available.
In-Memory cache means storing the cache data in the server's memory.
It is easier and quicker to use than other caching mechanisms.
It is suited for small and mid-sized applications.
If the cache is not configured properly, it can consume the server's resources.
Scalability issues: it is suitable for a single server; if we have many servers, the cache can't be shared across all of them.
Original article source at: https://www.c-sharpcorner.com/
1675909210
Learn how to clear your browser's cache. This tutorial will show 8 ways to clear the cache on all popular browsers, including Chrome, Firefox, Edge, and Safari.
While your browser cache usually helps websites load faster, it can sometimes prevent you from seeing the most up-to-date version of a webpage. In some cases, an old or corrupted cache can even cause a webpage to load improperly or prevent it from loading at all! Fortunately, it's easy to clear your web cache on any platform, whether you're using a computer, phone, or tablet. This tutorial will teach you the easiest ways to clear the cache on all popular browsers, including Chrome, Firefox, Edge, and Safari.
1
Open Google Chrome
. Its app icon resembles a red, yellow, green, and blue sphere.
2
Click ⋮. It's in the top-right corner of the screen. A drop-down menu will appear.
3
Select More tools. This option is near the bottom of the drop-down menu. Selecting it prompts a pop-out menu to appear.
4
Click Clear browsing data…. It's in the pop-out menu. Doing so opens a window with data-clearing options.
5
Select a time range. Click the "Time range" box, then click All time in the drop-down menu to ensure that all cached images and files are cleared.
6
Check the "Cached images and files" box. It's in the middle of the window.
7
Click CLEAR DATA. This blue button is in the bottom-right corner of the window. Doing so clears Google Chrome's cache.
1
Open Google Chrome
. Tap the Chrome app icon, which resembles a red, yellow, green, and blue sphere icon.
2
Tap ⋮. It's in the top-right corner of the screen. A drop-down menu will appear.
3
Tap History. This option is in the drop-down menu.
4
Tap Clear Browsing Data…. It's in the lower-left corner of the screen.
5
Tap Cached Images and Files to check it. You should see a blue checkmark appear next to it.
6
Tap Clear Browsing Data. It's at the bottom of the screen.
7
Tap Clear Browsing Data when prompted. Doing so will clear the cache for Chrome.
1
Open Firefox. Its app icon resembles an orange fox wrapped around a blue globe.
2
Click ☰. It's the three horizontal lines in the top-right corner of the window. A drop-down menu will appear.
3
Click Options. It's the option with a gear icon.
4
Click Privacy & Security. It's in the left panel.
5
Click the Clear History button. It's under the "History" header in the right panel.
6
Select a time range. Click the "Time range to clear" drop-down box, then click Everything in the drop-down menu.
7
Choose what to delete. You'll definitely want to check the "Cache" checkbox, but everything else is optional.
Click OK. It's at the bottom of the window. Doing so will clear your Firefox browser's cache.
1
Open Firefox. Tap the Firefox app icon, which resembles an orange fox wrapped around a blue globe.
2
Tap the three-dot menu. It's at the bottom-right corner of the screen. A menu will expand.
3
Tap Settings on the menu.
4
Tap Delete browsing data. It's under the "Privacy and security" header.
5
Choose what to delete. To delete just the cache, check the box next to "Cached images and files" and remove the other checkmarks.
6
Tap Delete browsing data. A confirmation message will appear.
7
Tap Delete to confirm. Your cache is now removed.
1
Open Firefox. Tap the Firefox app icon, which resembles an orange fox wrapped around a blue globe.
2
Tap the menu ☰. It's the three horizontal lines at the bottom-right corner.
3
Tap Settings. It's at the bottom of the menu.
4
Tap Data Management. It's under the "Privacy" header.
5
Choose what to delete. If you just want to delete the cache, make sure the "Cache" switch is blue and the others are white or gray.
6
Tap Clear Private Data. It's at the bottom of the screen.
7
Tap OK when prompted. Doing so will clear the cached files from your Firefox browser.
1
Open Microsoft Edge. It's the blue-and-green "e" icon in the Start menu.
2
Press Control+⇧ Shift+Delete. This brings up the "Clear browsing data" window.
3
Choose a length of time. Select how much data to clear from the "Time range" menu at the top.
4
Choose what to clear. If you just want to clear the cache, check the box next to "Cached images and files" and remove the other checkmarks.
5
Click the blue Clear now button. Doing so will clear your Edge cache.
1
Open Safari. It's the blue compass icon on the Dock at the bottom of your desktop.
2
Enable the Develop menu. If you already see a "Develop" menu in the menu bar at the top of the screen, you can skip this step. Otherwise, open Safari's Preferences, click the Advanced tab, and check "Show Develop menu in menu bar".
3
Open the Develop menu. Now that you've enabled it, it's in the menu bar at the top of the screen.
4
Click Empty Caches. This deletes your cache from your Mac.
1
Open your iPhone's Settings
. Tap the grey app with gears on it. This opens your iPhone's Settings page.
2
Scroll down and tap Safari. It's about a third of the way down the Settings page.
3
Scroll down and tap Clear History and Website Data. You'll find this near the bottom of the Safari page.
4
Tap Clear History and Data when prompted. Doing so will clear all of your iPhone's Safari data, including the cached files and pages.
Original article source at https://www.wikihow.com
#cache #browser
1672326480
amphp/redis
provides non-blocking access to Redis instances. All I/O operations are handled by the Amp concurrency framework, so you should be familiar with the basics of it.
This package can be installed as a Composer dependency.
composer require amphp/redis
<?php
require __DIR__ . '/vendor/autoload.php';
use Amp\Redis\Config;
use Amp\Redis\Redis;
use Amp\Redis\RemoteExecutor;
Amp\Loop::run(static function () {
$redis = new Redis(new RemoteExecutor(Config::fromUri('redis://')));
yield $redis->set('foo', '21');
$result = yield $redis->increment('foo', 21);
\var_dump($result); // int(42)
});
If you discover any security related issues, please email me@kelunik.com
instead of using the issue tracker.
Author: amphp
Source Code: https://github.com/amphp/redis
License: MIT license
1668847800
In an earlier article, we looked at an overview of caching in Django and took a dive into how to cache a Django view along with using different cache backends. This article looks closer at the low-level cache API in Django.
By the end of this article, you should be able to:
Caching in Django can be implemented on different levels (or parts of the site). You can cache the entire site or specific parts with various levels of granularity (listed in descending order of granularity): the per-site cache, the per-view cache, template fragment caching, and the low-level cache API.
For more on the different caching levels in Django, refer to the Caching in Django article.
If Django's per-site or per-view cache aren't granular enough for your application's needs, then you may want to leverage the low-level cache API to manage caching at the object level.
You may want to use the low-level cache API if you need to cache different objects for different durations, such as model objects that change at different intervals.
So, Django's low-level cache is good when you need more granularity and control over the cache. It can store any object that can be pickled safely. To use the low-level cache, you can use either the built-in django.core.cache.caches
or, if you just want to use the default cache defined in the settings.py file, django.core.cache.cache.
Clone down the base project from the django-low-level-cache repo on GitHub:
$ git clone -b base https://github.com/testdrivenio/django-low-level-cache
$ cd django-low-level-cache
Create (and activate) a virtual environment and install the requirements:
$ python3.9 -m venv venv
$ source venv/bin/activate
(venv)$ pip install -r requirements.txt
Apply the Django migrations, load some product data into the database, and then start the server:
(venv)$ python manage.py migrate
(venv)$ python manage.py seed_db
(venv)$ python manage.py runserver
Navigate to http://127.0.0.1:8000 in your browser to check that everything works as expected.
We'll be using Redis for the cache backend.
Download and install Redis.
If you’re on a Mac, we recommend installing Redis with Homebrew:
$ brew install redis
Once installed, in a new terminal window start the Redis server and make sure that it's running on its default port, 6379. The port number will be important when we tell Django how to communicate with Redis.
$ redis-server
For Django to use Redis as a cache backend, the django-redis dependency is required. It's already been installed, so you just need to add the custom backend to the settings.py file:
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': 'redis://127.0.0.1:6379/1',
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
}
}
}
Now, when you run the server again, Redis will be used as the cache backend:
(venv)$ python manage.py runserver
Turn to the code. The HomePageView
view in products/views.py simply lists all products in the database:
class HomePageView(View):
template_name = 'products/home.html'
def get(self, request):
product_objects = Product.objects.all()
context = {
'products': product_objects
}
return render(request, self.template_name, context)
Let's add support for the low-level cache API to the product objects.
First, add the import to the top of products/views.py:
from django.core.cache import cache
Then, add the code for caching the products to the view:
class HomePageView(View):
template_name = 'products/home.html'
def get(self, request):
product_objects = cache.get('product_objects') # NEW
if product_objects is None: # NEW
product_objects = Product.objects.all()
cache.set('product_objects', product_objects) # NEW
context = {
'products': product_objects
}
return render(request, self.template_name, context)
Here, we first checked to see if there's a cache object with the name product_objects in our default cache. If it's there, we returned it to the template; if not, we hit the database and set the cache with the queryset result under the key product_objects.
With the server running, navigate to http://127.0.0.1:8000 in your browser. Click on "Cache" in the right-hand menu of Django Debug Toolbar. You should see something similar to:
There were two cache calls: the first tried to get the cache object named product_objects, resulting in a cache miss since the object doesn't exist; the second set the cache object under that name with the queryset result.
There was also one SQL query. Overall, the page took about 313 milliseconds to load.
Refresh the page in your browser:
This time, you should see a cache hit, which gets the cache object named product_objects
. Also, there were no SQL queries, and the page took about 234 milliseconds to load.
Try adding a new product, updating an existing product, and deleting a product. You won't see any of the changes at http://127.0.0.1:8000 until you manually invalidate the cache, by pressing the "Invalidate cache" button.
Next let's look at how to automatically invalidate the cache. In the previous article, we looked at how to invalidate the cache after a period of time (TTL). In this article, we'll look at how to invalidate the cache when something in the model changes -- e.g., when a product is added to the products table or when an existing product is either updated or deleted.
For this task we could use database signals:
Django includes a “signal dispatcher” which helps decoupled applications get notified when actions occur elsewhere in the framework. In a nutshell, signals allow certain senders to notify a set of receivers that some action has taken place. They’re especially useful when many pieces of code may be interested in the same events.
To set up signals for handling cache invalidation, start by updating products/apps.py like so:
from django.apps import AppConfig
class ProductsConfig(AppConfig):
name = 'products'
def ready(self): # NEW
import products.signals # NEW
Next, create a file called signals.py in the "products" directory:
from django.core.cache import cache
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver
from .models import Product
@receiver(post_delete, sender=Product, dispatch_uid='post_deleted')
def object_post_delete_handler(sender, **kwargs):
cache.delete('product_objects')
@receiver(post_save, sender=Product, dispatch_uid='posts_updated')
def object_post_save_handler(sender, **kwargs):
cache.delete('product_objects')
Here, we used the receiver
decorator from django.dispatch
to decorate two functions that get called when a product is added or deleted, respectively. Let's look at the arguments:
The signal each function receives, fired on either save or delete.
The sender, the Product model in which to receive signals from.
A dispatch_uid to prevent duplicate signals.
So, when either a save or delete occurs against the Product
model, the delete
method on the cache object is called to remove the contents of the product_objects
cache.
To see this in action, either start or restart the server and navigate to http://127.0.0.1:8000 in your browser. Open the "Cache" tab in the Django Debug Toolbar. You should see one cache miss. Refresh, and you should have no cache misses and one cache hit. Close the Debug Toolbar page. Then, click the "New product" button to add a new product. You should be redirected back to the homepage after you click "Save". This time, you should see one cache miss, indicating that the signal worked. Also, your new product should be seen at the top of the product list.
What about an update?
The post_save
signal is triggered if you update an item like so:
product = Product.objects.get(id=1)
product.title = 'A new title'
product.save()
However, post_save
won't be triggered if you perform an update
on the model via a QuerySet
:
Product.objects.filter(id=1).update(title='A new title')
Take note of the ProductUpdateView
:
class ProductUpdateView(UpdateView):
model = Product
fields = ['title', 'price']
template_name = 'products/product_update.html'
# we overrode the post method for testing purposes
def post(self, request, *args, **kwargs):
self.object = self.get_object()
Product.objects.filter(id=self.object.id).update(
title=request.POST.get('title'),
price=request.POST.get('price')
)
return HttpResponseRedirect(reverse_lazy('home'))
So, since a QuerySet update bypasses post_save, let's override the queryset update() method to invalidate the cache ourselves. Start by creating a custom QuerySet
and a custom Manager
. At the top of products/models.py, add the following lines:
from django.core.cache import cache # NEW
from django.db import models
from django.db.models import QuerySet, Manager # NEW
from django.utils import timezone # NEW
Next, let's add the following code to products/models.py right above the Product
class:
class CustomQuerySet(QuerySet):
def update(self, **kwargs):
cache.delete('product_objects')
super(CustomQuerySet, self).update(updated=timezone.now(), **kwargs)
class CustomManager(Manager):
def get_queryset(self):
return CustomQuerySet(self.model, using=self._db)
Here, we created a custom Manager
, which has a single job: to return our custom QuerySet
. In our custom QuerySet
, we overrode the update()
method to first delete the cache key and then perform the QuerySet
update per usual.
For this to be used by our code, you also need to update Product
like so:
class Product(models.Model):
title = models.CharField(max_length=200, blank=False)
price = models.CharField(max_length=20, blank=False)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
objects = CustomManager() # NEW
class Meta:
ordering = ['-created']
Full file:
from django.core.cache import cache
from django.db import models
from django.db.models import QuerySet, Manager
from django.utils import timezone
class CustomQuerySet(QuerySet):
def update(self, **kwargs):
cache.delete('product_objects')
super(CustomQuerySet, self).update(updated=timezone.now(), **kwargs)
class CustomManager(Manager):
def get_queryset(self):
return CustomQuerySet(self.model, using=self._db)
class Product(models.Model):
title = models.CharField(max_length=200, blank=False)
price = models.CharField(max_length=20, blank=False)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
objects = CustomManager()
class Meta:
ordering = ['-created']
Test this out.
Rather than using database signals, you could use a third-party package called Django Lifecycle, which helps make invalidation of cache easier and more readable:
This project provides a @hook decorator as well as a base model and mixin to add lifecycle hooks to your Django models. Django's built-in approach to offering lifecycle hooks is Signals. However, my team often finds that Signals introduce unnecessary indirection and are at odds with Django's "fat models" approach.
To switch to using Django Lifecycle, kill the server, and then update products/apps.py like so:
from django.apps import AppConfig
class ProductsConfig(AppConfig):
name = 'products'
Next, add Django Lifecycle to requirements.txt:
Django==3.1.13
django-debug-toolbar==3.2.1
django-lifecycle==0.9.1 # NEW
django-redis==5.0.0
redis==3.5.3
Install the new requirements:
(venv)$ pip install -r requirements.txt
To use Lifecycle hooks, update products/models.py like so:
from django.core.cache import cache
from django.db import models
from django.db.models import QuerySet, Manager
from django_lifecycle import LifecycleModel, hook, AFTER_DELETE, AFTER_SAVE # NEW
from django.utils import timezone
class CustomQuerySet(QuerySet):
def update(self, **kwargs):
cache.delete('product_objects')
super(CustomQuerySet, self).update(updated=timezone.now(), **kwargs)
class CustomManager(Manager):
def get_queryset(self):
return CustomQuerySet(self.model, using=self._db)
class Product(LifecycleModel): # NEW
title = models.CharField(max_length=200, blank=False)
price = models.CharField(max_length=20, blank=False)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
objects = CustomManager()
class Meta:
ordering = ['-created']
@hook(AFTER_SAVE) # NEW
@hook(AFTER_DELETE) # NEW
def invalidate_cache(self): # NEW
cache.delete('product_objects') # NEW
In the code above, we:
Subclassed LifecycleModel rather than django.db.models.Model
Created an invalidate_cache method that deletes the product_object cache key
Used the @hook decorators to specify the events that we want to "hook" into
Test this out in your browser by adding a new product.
As with Django signals, the hooks won't trigger if we do an update via a QuerySet, like in the previously mentioned example:
Product.objects.filter(id=1).update(title="A new title")
In this case, we still need to create a custom Manager
and QuerySet
as we showed before.
Test out editing and deleting products as well.
Thus far, we've used the cache.get
, cache.set
, and cache.delete
methods to get, set, and delete (for invalidation) objects in the cache. Let's take a look at some more methods from django.core.cache.cache
.
Gets the specified key if present. If it's not present, it sets the key.
Syntax
cache.get_or_set(key, default, timeout=DEFAULT_TIMEOUT, version=None)
The timeout
parameter is used to set for how long (in seconds) the cache will be valid. Setting it to None
will cache the value forever. Omitting it will use the timeout, if any, that is set in settings.py in the CACHES setting.
Many of the cache methods also include a version
parameter. With this parameter you can set or access different versions of the same cache key.
Example
>>> from django.core.cache import cache
>>> cache.get_or_set('my_key', 'my new value')
'my new value'
We could have used this in our view instead of using the if statements:
# current implementation
product_objects = cache.get('product_objects')
if product_objects is None:
product_objects = Product.objects.all()
cache.set('product_objects', product_objects)
# with get_or_set
product_objects = cache.get_or_set('product_objects', Product.objects.all())
Used to set multiple keys at once by passing a dictionary of key-value pairs.
Syntax
cache.set_many(dict, timeout)
Example
>>> cache.set_many({'my_first_key': 1, 'my_second_key': 2, 'my_third_key': 3})
Used to get multiple cache objects at once. It returns a dictionary with the keys specified as parameters to the method, as long as they exist and haven't expired.
Syntax
cache.get_many(keys, version=None)
Example
>>> cache.get_many(['my_key', 'my_first_key', 'my_second_key', 'my_third_key'])
OrderedDict([('my_key', 'my new value'), ('my_first_key', 1), ('my_second_key', 2), ('my_third_key', 3)])
If you want to update the expiration for a certain key, you can use this method. The timeout value is set in the timeout parameter in seconds.
Syntax
cache.touch(key, timeout=DEFAULT_TIMEOUT, version=None)
Example
>>> cache.set('sample', 'just a sample', timeout=120)
>>> cache.touch('sample', timeout=180)
These two methods can be used to increment or decrement the value of a key that already exists. If they are used on a nonexistent cache key, they raise a ValueError.
If the delta parameter isn't specified, the value is increased or decreased by 1.
Syntax
cache.incr(key, delta=1, version=None)
cache.decr(key, delta=1, version=None)
Example
>>> cache.set('my_first_key', 1)
>>> cache.incr('my_first_key')
2
>>>
>>> cache.incr('my_first_key', 10)
12
To close the connection to your cache you use the close()
method.
Syntax
cache.close()
Example
cache.close()
To delete all the keys in the cache at once you can use this method. Just keep in mind that it will remove everything from the cache, not just the keys your application has set.
Syntax
cache.clear()
Example
cache.clear()
In this article, we looked at the low-level cache API in Django. We extended a demo project to use low-level caching and also invalidated the cache using Django's database signals and the Django Lifecycle hooks third-party package.
We also provided an overview of all the available methods in the Django low-level cache API together with examples of how to use them.
You can find the final code in the django-low-level-cache repo.
--
Django Caching Articles:
Original article source at: https://testdriven.io/blog/