Alias-Free Generative Adversarial Networks
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, Timo Aila
https://nvlabs.github.io/stylegan3

Abstract: We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.

For business inquiries, please contact researchinquiries@nvidia.com
For press and other inquiries, please contact Hector Marinez at hmarinez@nvidia.com

Release notes
This repository is an updated version of stylegan2-ada-pytorch, with several new features:

Alias-free generator architecture and training configurations (stylegan3-t, stylegan3-r).
Tools for interactive visualization (visualizer.py), spectral analysis (avg_spectra.py), and video generation (gen_video.py).
Equivariance metrics (eqt50k_int, eqt50k_frac, eqr50k).
General improvements: reduced memory usage, slightly faster training, bug fixes.
Compatibility:

Compatible with old network pickles created using stylegan2-ada and stylegan2-ada-pytorch.
Supports old StyleGAN2 training configurations, including ADA and transfer learning. See Training configurations for details.
Improved compatibility with Ampere GPUs and newer versions of PyTorch, CuDNN, etc.

Synthetic image detection
While new generator approaches enable new media synthesis capabilities, they may also present a new challenge for AI forensics algorithms for detection and attribution of synthetic media. In collaboration with digital forensic researchers participating in DARPA's SemaFor program, we curated a synthetic image dataset that allowed the researchers to test and validate the performance of their image detectors in advance of the public release. Please see here for more details.

Additional material
Result videos
Curated example images
StyleGAN3 pre-trained models for config T (translation equiv.) and config R (translation and rotation equiv.)
Access individual networks via https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/<MODEL>, where <MODEL> is one of:
stylegan3-t-ffhq-1024x1024.pkl, stylegan3-t-ffhqu-1024x1024.pkl, stylegan3-t-ffhqu-256x256.pkl
stylegan3-r-ffhq-1024x1024.pkl, stylegan3-r-ffhqu-1024x1024.pkl, stylegan3-r-ffhqu-256x256.pkl
stylegan3-t-metfaces-1024x1024.pkl, stylegan3-t-metfacesu-1024x1024.pkl
stylegan3-r-metfaces-1024x1024.pkl, stylegan3-r-metfacesu-1024x1024.pkl
stylegan3-t-afhqv2-512x512.pkl
stylegan3-r-afhqv2-512x512.pkl

StyleGAN2 pre-trained models compatible with this codebase
Access individual networks via https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/<MODEL>, where <MODEL> is one of:
stylegan2-ffhq-1024x1024.pkl, stylegan2-ffhq-512x512.pkl, stylegan2-ffhq-256x256.pkl
stylegan2-ffhqu-1024x1024.pkl, stylegan2-ffhqu-256x256.pkl
stylegan2-metfaces-1024x1024.pkl, stylegan2-metfacesu-1024x1024.pkl
stylegan2-afhqv2-512x512.pkl
stylegan2-afhqcat-512x512.pkl, stylegan2-afhqdog-512x512.pkl, stylegan2-afhqwild-512x512.pkl
stylegan2-brecahad-512x512.pkl, stylegan2-cifar10-32x32.pkl
stylegan2-celebahq-256x256.pkl, stylegan2-lsundog-256x256.pkl

Requirements
Linux and Windows are supported, but we recommend Linux for performance and compatibility reasons.
1–8 high-end NVIDIA GPUs with at least 12 GB of memory. We have done all testing and development using Tesla V100 and A100 GPUs.
64-bit Python 3.8 and PyTorch 1.9.0 (or later). See https://pytorch.org for PyTorch install instructions.
CUDA toolkit 11.1 or later. (Why is a separate CUDA toolkit installation required? See Troubleshooting).
Python libraries: see environment.yml for exact library dependencies. You can use the following commands with Miniconda3 to create and activate your StyleGAN3 Python environment:
conda env create -f environment.yml
conda activate stylegan3
Docker users:
Ensure you have correctly installed the NVIDIA container runtime.
Use the provided Dockerfile to build an image with the required library dependencies.
The code relies heavily on custom PyTorch extensions that are compiled on the fly using NVCC. On Windows, the compilation requires Microsoft Visual Studio. We recommend installing Visual Studio Community Edition and adding it into PATH using "C:\Program Files (x86)\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvars64.bat".

See Troubleshooting for help on common installation and run-time problems.
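
As a quick sanity check of the environment (a minimal sketch, not part of the official setup), you can confirm from Python that the installed PyTorch and CUDA stack match the requirements above:

import torch

print(torch.__version__)              # expect 1.9.0 or later
print(torch.version.cuda)             # CUDA version PyTorch was built against
print(torch.cuda.is_available())      # True if a usable GPU and driver are present
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g., a V100 or A100 device name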

Getting started
Pre-trained networks are stored as *.pkl files that can be referenced using local filenames or URLs:

# Generate an image using pre-trained AFHQv2 model ("Ours" in Figure 1, left).
python gen_images.py --outdir=out --trunc=1 --seeds=2 \
   --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl

# Render a 4x2 grid of interpolations for seeds 0 through 31.
python gen_video.py --output=lerp.mp4 --trunc=1 --seeds=0-31 --grid=4x2 \
   --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl
Outputs from the above commands are placed under out/*.png, controlled by --outdir. Downloaded network pickles are cached under $HOME/.cache/dnnlib, which can be overridden by setting the DNNLIB_CACHE_DIR environment variable. The default PyTorch extension build directory is $HOME/.cache/torch_extensions, which can be overridden by setting TORCH_EXTENSIONS_DIR.
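
If you prefer to set these overrides from Python rather than in the shell, a minimal sketch could look like the following (the two variable names come from the paragraph above; the cache location itself is just an example):

import os
from pathlib import Path

cache_root = Path.home() / 'stylegan3-cache'   # hypothetical persistent location
os.environ['DNNLIB_CACHE_DIR'] = str(cache_root / 'dnnlib')
os.environ['TORCH_EXTENSIONS_DIR'] = str(cache_root / 'torch_extensions')
# Set these before importing dnnlib or building the custom ops; exporting the same
# variables in the shell before running gen_images.py or train.py works equally well.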

Docker: You can run the above curated image example using Docker as follows:

# Build the stylegan3:latest image
docker build --tag stylegan3 .

# Run the gen_images.py script using Docker:
docker run --gpus all -it --rm --user $(id -u):$(id -g) \
   -v `pwd`:/scratch --workdir /scratch -e HOME=/scratch \
   stylegan3 \
   python gen_images.py --outdir=out --trunc=1 --seeds=2 \
        --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl
Note: The Docker image requires NVIDIA driver release r470 or later.

The docker run invocation may look daunting, so let's unpack its contents here:

--gpus all -it --rm --user $(id -u):$(id -g): with all GPUs enabled, run an interactive session with current user's UID/GID to avoid Docker writing files as root.
-v `pwd`:/scratch --workdir /scratch: mount current running dir (e.g., the top of this git repo on your host machine) to /scratch in the container and use that as the current working dir.
-e HOME=/scratch: let PyTorch and StyleGAN3 code know where to cache temporary files such as pre-trained models and custom PyTorch extension build results. Note: if you want more fine-grained control, you can instead set TORCH_EXTENSIONS_DIR (for custom extensions build dir) and DNNLIB_CACHE_DIR (for pre-trained model download cache). You want these cache dirs to reside on persistent volumes so that their contents are retained across multiple docker run invocations.

Interactive visualization
This release contains an interactive model visualization tool that can be used to explore various characteristics of a trained model. To start it, run:

python visualizer.py
Visualizer screenshot

Using networks from Python
You can use pre-trained networks in your own Python code as follows:

with open('ffhq.pkl', 'rb') as f:
   G = pickle.load(f)['G_ema'].cuda()  # torch.nn.Module
z = torch.randn([1, G.z_dim]).cuda()    # latent codes
c = None                                # class labels (not used in this example)
img = G(z, c)                           # NCHW, float32, dynamic range [-1, +1], no truncation
The above code requires torch_utils and dnnlib to be accessible via PYTHONPATH. It does not need source code for the networks themselves — their class definitions are loaded from the pickle via torch_utils.persistence.

The pickle contains three networks. 'G' and 'D' are instantaneous snapshots taken during training, and 'G_ema' represents a moving average of the generator weights over several training steps. The networks are regular instances of torch.nn.Module, with all of their parameters and buffers placed on the CPU at import and gradient computation disabled by default.
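
For example, a minimal sketch of inspecting the pickle and preparing 'G_ema' for GPU inference (assuming the file has been downloaded locally as ffhq.pkl) might look like this:

import pickle
import torch

with open('ffhq.pkl', 'rb') as f:
    data = pickle.load(f)
print(list(data.keys()))                       # expected to include 'G', 'D', 'G_ema'
G = data['G_ema']                              # parameters and buffers on the CPU by default
G = G.eval().requires_grad_(False).cuda()      # move to GPU; keep gradients disabled for inference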

The generator consists of two submodules, G.mapping and G.synthesis, that can be executed separately. They also support various additional options:

w = G.mapping(z, c, truncation_psi=0.5, truncation_cutoff=8)
img = G.synthesis(w, noise_mode='const', force_fp32=True)
Please refer to gen_images.py for a complete code example.
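
As a rough sketch of the remaining steps (not a substitute for gen_images.py, whose post-processing may differ in detail), the [-1, +1] NCHW output can be converted to an 8-bit PNG with PIL, continuing from the snippets above:

import torch
import PIL.Image

z = torch.randn([1, G.z_dim]).cuda()
w = G.mapping(z, None, truncation_psi=0.7)
img = G.synthesis(w, noise_mode='const')                  # NCHW, float32, range [-1, +1]
img = (img * 127.5 + 128).clamp(0, 255).to(torch.uint8)   # rescale to [0, 255]
img = img[0].permute(1, 2, 0).cpu().numpy()               # CHW -> HWC, move to host memory
PIL.Image.fromarray(img, 'RGB').save('sample.png')        # assumes a 3-channel RGB generator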

Preparing datasets
Datasets are stored as uncompressed ZIP archives containing uncompressed PNG files and a metadata file dataset.json for labels. Custom datasets can be created from a folder containing images; see python dataset_tool.py --help for more information. Alternatively, the folder can also be used directly as a dataset, without running it through dataset_tool.py first, but doing so may lead to suboptimal performance.
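
As a minimal sketch of what such an archive looks like from Python (the exact dataset.json schema is an assumption here and should be checked against dataset_tool.py; labels are typically stored as [filename, class index] pairs), you can peek inside a prepared dataset like this:

import json
import os
import zipfile

path = os.path.expanduser('~/datasets/afhqv2-512x512.zip')    # e.g., an archive created later in this section
with zipfile.ZipFile(path) as zf:
    names = zf.namelist()
    print(sum(n.endswith('.png') for n in names), 'images')
    if 'dataset.json' in names:                               # present only for labeled datasets
        labels = json.loads(zf.read('dataset.json'))['labels']
        print(labels[:3])                                     # e.g., [['00000/img00000000.png', 0], ...]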

FFHQ: Download the Flickr-Faces-HQ dataset as 1024x1024 images and create a zip archive using dataset_tool.py:

# Original 1024x1024 resolution.
python dataset_tool.py --source=/tmp/images1024x1024 --dest=~/datasets/ffhq-1024x1024.zip

# Scaled down 256x256 resolution.
python dataset_tool.py --source=/tmp/images1024x1024 --dest=~/datasets/ffhq-256x256.zip \
   --width=256 --height=256
See the FFHQ README for information on how to obtain the unaligned FFHQ dataset images. Use the same steps as above to create a ZIP archive for training and validation.

MetFaces: Download the MetFaces dataset and create a ZIP archive:

python dataset_tool.py --source=~/downloads/metfaces/images --dest=~/datasets/metfaces-1024x1024.zip
See the MetFaces README for information on how to obtain the unaligned MetFaces dataset images. Use the same steps as above to create a ZIP archive for training and validation.

AFHQv2: Download the AFHQv2 dataset and create a ZIP archive:

python dataset_tool.py --source=~/downloads/afhqv2 --dest=~/datasets/afhqv2-512x512.zip
Note that the above command creates a single combined dataset using all images of all three classes (cats, dogs, and wild animals), matching the setup used in the StyleGAN3 paper. Alternatively, you can also create a separate dataset for each class:

python dataset_tool.py --source=~/downloads/afhqv2/train/cat --dest=~/datasets/afhqv2cat-512x512.zip
python dataset_tool.py --source=~/downloads/afhqv2/train/dog --dest=~/datasets/afhqv2dog-512x512.zip
python dataset_tool.py --source=~/downloads/afhqv2/train/wild --dest=~/datasets/afhqv2wild-512x512.zip

Training
You can train new networks using train.py. For example:

# Train StyleGAN3-T for AFHQv2 using 8 GPUs.
python train.py --outdir=~/training-runs --cfg=stylegan3-t --data=~/datasets/afhqv2-512x512.zip \
   --gpus=8 --batch=32 --gamma=8.2 --mirror=1

# Fine-tune StyleGAN3-R for MetFaces-U using 8 GPUs, starting from the pre-trained FFHQ-U pickle.
python train.py --outdir=~/training-runs --cfg=stylegan3-r --data=~/datasets/metfacesu-1024x1024.zip \
   --gpus=8 --batch=32 --gamma=6.6 --mirror=1 --kimg=5000 --snap=5 \
   --resume=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhqu-1024x1024.pkl

# Train StyleGAN2 for FFHQ at 1024x1024 resolution using 8 GPUs.
python train.py --outdir=~/training-runs --cfg=stylegan2 --data=~/datasets/ffhq-1024x1024.zip \
   --gpus=8 --batch=32 --gamma=10 --mirror=1 --aug=noaug
Note that the result quality and training time depend heavily on the exact set of options. The most important ones (--gpus, --batch, and --gamma) must be specified explicitly, and they should be selected with care. See python train.py --help for the full list of options and Training configurations for general guidelines & recommendations, along with the expected training speed & memory usage in different scenarios.

The results of each training run are saved to a newly created directory, for example ~/training-runs/00000-stylegan3-t-afhqv2-512x512-gpus8-batch32-gamma8.2. The training loop exports network pickles (network-snapshot-<KIMG>.pkl) and random image grids (fakes<KIMG>.png) at regular intervals (controlled by --snap). For each exported pickle, it evaluates FID (controlled by --metrics) and logs the result in metric-fid50k_full.jsonl. It also records various statistics in training_stats.jsonl, as well as *.tfevents if TensorBoard is installed.
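
A small sketch for tracking FID over the course of a run (the JSONL field names below are assumptions based on typical output of this codebase; inspect one line of your own file to confirm them):

import json
import os

run_dir = os.path.expanduser('~/training-runs/00000-stylegan3-t-afhqv2-512x512-gpus8-batch32-gamma8.2')
with open(os.path.join(run_dir, 'metric-fid50k_full.jsonl')) as f:
    for line in f:
        entry = json.loads(line)
        # Assumed fields: 'snapshot_pkl' names the pickle, 'results' maps metric name to value.
        print(entry.get('snapshot_pkl'), entry.get('results', {}).get('fid50k_full'))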

Quality metrics
By default, train.py automatically computes FID for each network pickle exported during training. We recommend inspecting metric-fid50k_full.jsonl (or TensorBoard) at regular intervals to monitor the training progress. When desired, the automatic computation can be disabled with --metrics=none to speed up the training slightly.

Additional quality metrics can also be computed after the training:

# Previous training run: look up options automatically, save result to JSONL file.
python calc_metrics.py --metrics=eqt50k_int,eqr50k \
   --network=~/training-runs/00000-stylegan3-r-mydataset/network-snapshot-000000.pkl


Spectral analysis
The average 2D power spectra of real and generated images can be compared using avg_spectra.py:

# Calculate average spectrum for the training data.
python avg_spectra.py calc --source=~/datasets/ffhq-1024x1024.zip \
   --dest=tmp/training-data.npz --mean=112.684 --std=69.509

# Calculate average spectrum for a pre-trained generator.
python avg_spectra.py calc \
   --source=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhq-1024x1024.pkl \
   --dest=tmp/stylegan3-r.npz --mean=112.684 --std=69.509 --num=70000

# Display results.
python avg_spectra.py heatmap tmp/training-data.npz
python avg_spectra.py heatmap tmp/stylegan3-r.npz
python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz

Marlon Boyle

Autonomous Driving Network (ADN) On Its Way

When it comes to inspiration in the networking industry, nothing generates more buzz than the Autonomous Driving Network (ADN). You may have heard the term and wondered what it is about, and whether it has anything to do with autonomous driving vehicles. Your guess is right; the ADN concept is derived from, and inspired by, the rapid development of autonomous driving cars in recent years.

Driverless Car of the Future, the advertisement for “America’s Electric Light and Power Companies,” Saturday Evening Post, the 1950s.

The vision of autonomous driving has been around for more than 70 years, but engineers' repeated attempts to achieve it met with little success, and the concept remained fiction for a long time. In 2004, the US Defense Advanced Research Projects Agency (DARPA) organized the Grand Challenge for autonomous vehicles, with teams competing for a grand prize of $1 million. I remember watching the competing vehicles on TV; they behaved as if driven by a drunk and had a really tough time driving by themselves. I thought the autonomous driving vision still had a long way to go. To my surprise, the very next year, 2005, Stanford University's vehicle autonomously drove 131 miles in California's Mojave Desert without a scratch and took the Grand Challenge prize. How was that possible? Later I learned that the secret ingredient was the latest ML (Machine Learning) enabled AI (Artificial Intelligence) technology.

Since then, AI technologies have advanced rapidly and been implemented across all verticals. Around 2016, the concept of the Autonomous Driving Network started to emerge, combining AI and networking to achieve network operational autonomy. Automation is nothing new in the networking industry; network operations are continually being automated here and there. But ADN goes beyond automating mundane tasks; it reaches a whole new level. With the help of AI technologies and advances in other critical ingredients such as SDN (Software-Defined Networking), autonomous networking has a real chance of moving from vision to reality.

In this article, we will examine some critical components of ADN, its current landscape, and the factors that are important for ADN to succeed.

The Vision

At the current stage, different organizations use different terminologies to describe the ADN vision.

Despite the slightly different terminologies, the industry (e.g., TMF, ETSI, ITU-T, GSMA) is converging on common terms and a consensus around autonomous networks. The core vision includes business and network aspects: the autonomous network delivers the “hyper-loop” from business requirements all the way down to the network and device layers.

On the network layer, it contains the below critical aspects:

  • Intent-Driven: Understand the operator's business intent and automatically translate it into the necessary network operations. An operation can be a one-time action, such as disconnecting a connection service, or a continuous one, such as maintaining a specified SLA (Service Level Agreement) at all times.
  • Self-Discover: Automatically discover hardware/software changes in the network and propagate them to the necessary subsystems to maintain an always-in-sync state.
  • Self-Config/Self-Organize: Whenever network changes happen, automatically configure the corresponding hardware/software parameters so that the network stays at its pre-defined target states.
  • Self-Monitor: Constantly and automatically monitor network/service operational states and health conditions.
  • Auto-Detect: Detect network faults, abnormalities, and intrusions automatically.
  • Self-Diagnose: Automatically run an inference process to figure out the root causes of issues.
  • Self-Healing: Automatically take the necessary actions to address issues and bring the networks/services back to the desired state.
  • Self-Report: Automatically communicate with its environment and exchange the necessary information.
  • Automated common operational scenarios: Automatically perform operations such as network planning, customer and service onboarding, and network change management.

On top of those, these capabilities need to span multiple services, multiple domains, and the entire lifecycle (TMF, 2019).

No doubt, this is the most ambitious goal the networking industry has ever aimed at. It has been described as the “end state” and “ultimate goal” of networking evolution. And this is not just a vision on a slide deck; the networking industry is already on the move toward the goal.

David Wang, Huawei's Executive Director of the Board and President of Products & Solutions, said in his 2018 Ultra-Broadband Forum (UBBF) keynote speech (David W., 2018):

“In a fully connected and intelligent era, autonomous driving is becoming a reality. Industries like automotive, aerospace, and manufacturing are modernizing and renewing themselves by introducing autonomous technologies. However, the telecom sector is facing a major structural problem: Networks are growing year by year, but OPEX is growing faster than revenue. What’s more, it takes 100 times more effort for telecom operators to maintain their networks than OTT players. Therefore, it’s imperative that telecom operators build autonomous driving networks.”

Juniper CEO Rami Rahim said in his keynote at the company's virtual AI event (CRN, 2020):

“The goal now is a self-driving network. The call to action is to embrace the change. We can all benefit from putting more time into higher-layer activities, like keeping distributors out of the business. The future, I truly believe, is about getting the network out of the way. It is time for the infrastructure to take a back seat to the self-driving network.”

Is This Vision Achievable?

If you had asked me this question 15 years ago, my answer would have been “no chance,” as I could not imagine that an autonomous driving vehicle was possible then. But now the vision is no longer far-fetched, not only because of the rapid advancement of ML/AI technology but also because other key building blocks have made significant progress. To name a few:

  • software-defined networking (SDN) control
  • industry-standard models and open APIs
  • Real-time analytics/telemetry
  • big data processing
  • cross-domain orchestration
  • programmable infrastructure
  • cloud-native virtualized network functions (VNF)
  • DevOps agile development process
  • everything-as-service design paradigm
  • intelligent process automation
  • edge computing
  • cloud infrastructure
  • programming paradigms suitable for building autonomous systems, e.g., teleo-reactive programs: a set of reactive rules that continuously sense the environment and trigger actions whose continuous execution eventually leads the system to satisfy a goal (Nils Nilsson, 1996); a minimal sketch is shown after this list
  • open-source solutions
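
To make the teleo-reactive idea above concrete, here is a minimal, purely illustrative Python sketch (not tied to any particular networking product): the rules are ordered by priority, every cycle senses the current state, and the first rule whose condition holds determines the action.

# Teleo-reactive loop (illustrative toy example).
rules = [
    (lambda s: s['link_up'] and s['latency_ms'] <= s['sla_ms'], 'hold'),
    (lambda s: s['link_up'] and s['latency_ms'] > s['sla_ms'],  'reroute_traffic'),
    (lambda s: not s['link_up'],                                'restart_link'),
]

def step(state):
    # Return the action of the first rule whose condition is satisfied.
    for condition, action in rules:
        if condition(state):
            return action
    return 'no_op'

# One sense -> act cycle on a toy network state.
state = {'link_up': True, 'latency_ms': 42, 'sla_ms': 20}
print(step(state))   # -> 'reroute_traffic'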

#network-automation #autonomous-network #ai-in-network #self-driving-network #neural-networks

Games DApp

Matic Network in Blockchain Gaming

Matic Network is getting a lot of attention from blockchain game developers, in part because its competition has stepped away from the gaming scene. Matic is a general-purpose platform capable of supporting all types of DApps, and 60+ DApps have already been built on Matic Network.

As a result, Matic Network is gaining a lot of new gaming partners and has already been integrated into many gaming DApps.

Key reasons why DApps choose Matic Network

  • Near-instant blockchain transactions
  • Low transaction fees: less than 1/1000th of the fees on the Ethereum main chain
  • Seamless migration for existing Ethereum DApps
  • Access to, and assistance with, a wide range of developer tooling
  • Unparalleled technical support for developers

If you have an idea for your own gaming DApp, you could benefit from Matic Network's high-speed, low-fee infrastructure and our assistance in transforming your DApp from a great idea into a successful DApp business.

As a leading DApp game development company, GamesDApp helps you build DApp games on Matic Network and also has expertise in developing various popular games on the blockchain using smart contracts.

Hire Blockchain Game Developers >> https://www.gamesd.app/#contactus

#matic network #build dapp game on matic network #dapp game on matic network #matic network in blockchain gaming #matic network for game development

antonio marsh

Avail our world-class XOXO Network Smart Contract and earn huge profits

Automate all your daily functions by adopting the XOXO Network MLM Clone. The advantages include an unstoppable protocol, an Ethereum smart contract, a peer-to-peer payment system, no chance of hacking or scamming, seven global auto pools, an unlimited referral bonus, an initial investment of only 0.1 ETH, and a 100% secure system.

#xoxo network clone script #xoxo network mlm clone #xoxo network clone #smart contract mlm like xoxo network #xoxo network smart contract #xoxo smart contract clone script

jaz nive

Avail our world-class XOXO Network Smart Contract and earn huge profits

Interested in launching a crypto trading platform of your own? Now you can create a customized crypto platform like the XOXO Network MLM Clone with the assistance of leading professional experts in the industry. In this article, let us discuss the key facts about the XOXO Network clone script and its earning opportunities.

What is XOXO Smart Contract Clone Script?

The XOXO smart contract clone script is a decentralized, P2P, smart contract-based clone script built on the Ethereum blockchain to offer users an advanced trading experience.

How does the registration process work in XOXO Network Clone Script?

The main objective of the XOXO Network Clone Script is to make the registration process simple and quick. To join the network, the user pays the 0.1 ETH entry fee and downloads either Trust Wallet or MetaMask. After downloading the wallet, the user funds it with ETH and then follows the referral link sent by an existing user. Once the registration process is complete, the user can start referring others to the network, and will instantly receive rewards in the form of a commission whenever anyone joins the network through their referral link.

Specialized features in XOXO Network MLM Clone Script

The following are some of the features that have made the XOXO Network MLM Clone Script a favourite among users globally.

  • Low entry fee to join the network
  • Follows a streamlined protocol
  • Operates with no central authority
  • Immutable transactions
  • Completely transparent
  • Guaranteed results within a short period
  • High returns with low investment
  • Unlimited referrals
  • Safe wallets providing better security against threats

If you are looking for ways to boost your business viability, then investing in a smart contract MLM like XOXO Network is a preferred choice. Reach out to our well-experienced developer team to get a customized, profitable solution tailored to your business constraints.

#smart contract mlm like xoxo network #xoxo network clone #xoxo network clone platform #xoxo network mlm clone #xoxo network mlm clone development company