Various single-file cross-platform C/C++ headers implementing self-contained libraries.
Generally these headers do not have dependencies and are intended to be included directly into your source (check each header for specific documentation at the top of the file). Each header has a LIBNAME_IMPLEMENTATION symbol; add this to a single translation unit in your code and include the header right after in order to define library symbols. Just include the header as normal otherwise.
Some headers also have example code or demos. In this repo just look for the corresponding examples or tests folders. The example folders are particularly useful for figuring out how to use a particular header.
Here's a link to the discord chat for cute_headers. Feel free to pop in and ask questions, make suggestions, or have a discussion. If anyone has used cute_headers it would be great to hear your experience! https://discord.gg/2DFHRmX
Another easy way to get a hold of me is on twitter @randypgaul.
- What's the point of making a single file? Why are the implementation and static functions in the headers?
Including these headers is like including a normal header. However, to define the implementation each header looks something like this:
// Do this ONCE in a .c/.cpp file
#define LIBNAME_IMPLEMENTATION
#include "libname.h"
// Everywhere else, just include like a typical header
#include "libname.h"
This will turn the file into a header + .c file combo, one time. The point of this is: A) handling the header or sending it to people is easy; no zip files or anything, just copy and paste a single file; B) build scripts are a pain in the ass, and these single-file libs can be integrated into any project without modifying a single build script.
- Doesn't writing all the code in a header ruin compile times?
The stigma that header implementations slow compile times comes from inlined code and template spam. In either case every single translation unit must churn through the header and place inline versions of functions, or, for templates, generate various type-specific functions. It gets worse once the linker kicks in and needs to coalesce translation units together, deleting duplicated symbols. Linkers are often single-threaded and can really bottleneck build times.
A well-constructed single-file header will not use any templates and will use inline sparingly. Additionally, well-constructed single-file headers use a #define to place the implementation (the function definitions and symbols) into a single translation unit. In this way a well-crafted single-file header is pretty much the best thing a C compiler can come across, as far as build times go, especially when the header can optionally #define out unneeded features.
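As an illustration, here is a minimal sketch of that layout (libname_add is a hypothetical function, not from any real header):

/* libname.h */
#ifndef LIBNAME_H
#define LIBNAME_H

/* Declarations: every translation unit that includes this header sees them. */
int libname_add(int a, int b);

#endif /* LIBNAME_H */

#ifdef LIBNAME_IMPLEMENTATION
/* Definitions: compiled only in the single translation unit that
   defined LIBNAME_IMPLEMENTATION before including the header. */
int libname_add(int a, int b) { return a + b; }
#endif /* LIBNAME_IMPLEMENTATION */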
- Aren't these header only libraries just a new fad?
I personally don't really know if it's a fad or not, but these files aren't really just headers. They are headers with the .c file part (the implementation) attached to the end. It's two different files stuck together with the C preprocessor, but the implementation part never shows up unless the user does #define LIBNAME_IMPLEMENTATION. This define step is the only integration step required to use these headers.
Unfortunately writing a good header library is pretty hard, so just any random header lib out there in the wild is probably not a good one. The STB and RJM header libs are pretty good, and are a good reference to get an idea of what a good header lib looks like. Mattias Gustavsson has my favorite collection of headers.
- What is the license?
Each lib contains license info at the end of the file. There is a choice between public domain and zlib.
- I was looking for a header I've seen before, but it's missing. Where did it go?
Some of the less popular or less useful headers became deprecated, and live here now.
library | description | latest version | language(s) |
---|---|---|---|
cute_c2 | 2D collision detection routines on primitives, boolean results and/or manifold generation, shape cast/sweep test, raycasts | 1.10 | C/C++ |
cute_net | Networking library for games requiring an optional reliability layer over UDP with a baked in security scheme | 1.02 | C/C++ |
cute_tiled | Very efficient loader for Tiled maps exported to JSON format | 1.07 | C/C++ |
cute_aseprite | Parses .ase/.aseprite files into a compact and convenient struct collection | 1.02 | C/C++ |
cute_sound | Load/play/loop (with plugin)/pan WAV + OGG (stb_vorbis wrapper for OGG) in mono/stereo, high performance custom mixer, music + crossfade support | 2.03 | C/C++ |
cute_math | Professional level 3D vector math via SSE intrinsics | 1.02 | C++ |
cute_png | load/save PNG, texture atlas compiler, DEFLATE compliant decompressor | 1.05 | C/C++ |
cute_spritebatch | Run-time 2d sprite batcher. Builds atlases on-the-fly in-memory. Useful to implement a sprite batcher for any purpose (like 2D games) for high-performance rendering, without the need to precompile texture atlases on-disk. | 1.04 | C/C++ |
cute_sync | Collection of practical synchronization primitives, including read/write lock and threadpool/task system | 1.01 | C/C++ |
Author: RandyGaul
Source Code: https://github.com/RandyGaul/cute_headers
Glium is no longer actively developed by its original author. That said, PRs are still welcome and maintenance is continued by the surrounding community.
Elegant and safe OpenGL wrapper.
Glium is an intermediate layer between OpenGL and your application. You still need to manually handle the graphics pipeline, but without having to use OpenGL's old and error-prone API.
[dependencies]
glium = "*"
If you have some knowledge of OpenGL, the documentation and the examples should get you started easily.
Its objectives:
Easy to use:
Functions are higher level in glium than in OpenGL. Glium's API tries to be as Rusty as possible, and shouldn't be much different than using any other Rust library. Glium should allow you to do everything that OpenGL allows you to do, just through high-level functions. If something is missing, please open an issue.
You can directly pass vectors, matrices and images to glium instead of manipulating low-level data.
Thanks to glutin, glium is very easy to set up compared to raw OpenGL.
Glium provides easier ways to do common tasks. For example, the VertexBuffer struct contains information about the vertex bindings, because you usually don't use several different bindings with the same vertex buffer. This reduces the overall complexity of OpenGL.
Glium handles framebuffer objects, samplers, and vertex array objects for you. You no longer need to create them explicitly as they are automatically created when needed and destroyed when their corresponding object is destroyed.
Glium is stateless. There are no set_something() functions in the entire library; everything is done by parameter passing. The same set of function calls always produces the same results, which greatly reduces the number of potential problems.
Safety:
Glium detects what would normally be errors or undefined behavior in OpenGL, and panics, without calling glGetError, which would be too slow. Examples include requesting a depth test when you don't have a depth buffer available, not binding any value to an attribute or uniform, or binding multiple textures with different dimensions to the same framebuffer.
If the OpenGL context triggers an error, then you have found a bug in glium. Please open an issue. Just like Rust does everything it can to avoid crashes, glium does everything it can to avoid OpenGL errors.
The OpenGL context is automatically handled by glium. You don't need to worry about thread safety, as it is forbidden to change the thread in which OpenGL objects operate. Glium also allows you to safely replace the current OpenGL context with another one that shares the same lists.
Glium enforces RAII. Creating a Texture2d struct creates a texture, and destroying the struct destroys the texture. It also uses Rust's borrow system to ensure that objects are still alive and in the right state when you use them. Glium provides the same guarantees with OpenGL objects that you have with regular objects in Rust.
High-level functions are much easier to use and thus less error-prone. For example there is no risk of making a mistake while specifying the names and offsets of your vertex attributes, since Glium automatically generates this data for you.
Robustness is automatically handled. If the OpenGL context is lost (because of a crash in the driver for example) then swapping buffers will return an error.
Compatibility:
In its default mode, Glium should be compatible with both OpenGL and OpenGL ES. If something doesn't work on OpenGL ES, please open an issue.
During initialization, Glium detects whether the context provides all the required functionality, and returns an Err if the device is too old. Glium tries to be as tolerant as possible, and should work with the majority of OpenGL2-era devices.
Glium will attempt to use the latest, optimized versions of OpenGL functions. This includes buffer and texture immutable storage and direct state access. It will automatically fall back to older functions if they are not available.
Glium comes with a set of tests that you can run with cargo test. If your project/game doesn't work on specific hardware, you can try running Glium's tests on it to see what is wrong.
Performance:
State changes are optimized. The OpenGL state is only modified if the state actually differs. For example, if you call draw with the IfLess depth test twice in a row, then glDepthFunc(GL_LESS) and glEnable(GL_DEPTH_TEST) will only be called the first time. If you then call draw with IfGreater, then only glDepthFunc(GL_GREATER) will be called.
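To make the stateless model concrete, here is a small sketch of such a draw call with an IfLess depth test (hedged: it assumes a frame, vertex_buffer, indices, program and uniforms already exist, as in the glium examples):

use glium::{Depth, DepthTest, DrawParameters, Surface};

// All state travels with the call itself; glium then emits
// glEnable(GL_DEPTH_TEST)/glDepthFunc only when the cached state differs.
let params = DrawParameters {
    depth: Depth {
        test: DepthTest::IfLess,
        write: true,
        ..Default::default()
    },
    ..Default::default()
};
frame.draw(&vertex_buffer, &indices, &program, &uniforms, &params).unwrap();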
Just like Rust is theoretically slower than C because of additional safety checks, glium is theoretically slower than well-prepared and optimized raw OpenGL calls. However, in practice the difference is very small.
Fully optimized OpenGL code uses advanced techniques such as persistent mapping or bindless textures. These are hard to get right and error-prone in raw OpenGL, but trivially easy with glium. You can easily get a huge performance boost just by making the right function calls.
Since glium automatically avoids all OpenGL errors, you can safely use the GL_KHR_no_error extension when it is available. Using this extension should provide a good performance boost (but it is also very recent and not widely available yet).
Limitations:
Robustness isn't supported everywhere yet, so you can still get crashes if you do incorrect things in your shaders.
Glium gives you access to all the tools but doesn't prevent you from doing horribly slow things. Some knowledge of modern techniques is required if you want to reach maximum performance.
Glium pushes the Rust compiler to its limits. Stack overflows (inside the compiler), internal compiler errors, one-hour compile time, etc. happen more often than in smaller libraries.
Macros are still a work in progress; see glium-derive for details.
Author: Glium
Source Code: https://github.com/glium/glium
License: Apache-2.0 license
Gephi is an award-winning open-source platform for visualizing and manipulating large graphs. It runs on Windows, Mac OS X and Linux. Localization is available in English, French, Spanish, Japanese, Russian, Brazilian Portuguese, Chinese, Czech, German and Romanian.
Fast: Powered by a built-in OpenGL engine, Gephi is able to push the envelope with very large networks. Visualize networks up to a million elements. All actions (e.g. layout, filter, drag) run in real-time.
Simple: Easy to install and get started. A UI that is centered around the visualization. Like Photoshop™ for graphs.
Modular: Extend Gephi with plug-ins. The architecture is built on top of the Apache NetBeans Platform and can be extended or reused easily through well-written APIs.
Download Gephi for Windows, Mac OS X and Linux and consult the release notes. Example datasets can be found on our wiki.
Download and Install Gephi on your computer.
Get started with the Quick Start and follow the Tutorials. Load a sample dataset and start to play with the data.
If you run into any trouble or have questions consult our discussions.
Development builds are generated regularly. Current version is 0.10.2-SNAPSHOT
gephi-0.10.2-SNAPSHOT-windows-x64.exe (Windows)
gephi-0.10.2-SNAPSHOT-windows-x32.exe (Windows 32-bit)
gephi-0.10.2-SNAPSHOT-macos-x64.dmg (Mac OS X)
gephi-0.10.2-SNAPSHOT-macos-aarch64.dmg (Mac OS X Silicon)
gephi-0.10.2-SNAPSHOT-linux-x64.tar.gz (Linux)
Gephi is developed in Java and uses OpenGL for its visualization engine. Built on top of the NetBeans Platform, it follows a loosely-coupled, modular architecture philosophy. Gephi is split into modules, which depend on other modules through well-written APIs. Plugins can reuse existing APIs, create new services and even replace a default implementation with a new one.
Consult the Javadoc for an overview of the APIs.
Java JDK 11 (or later)
Apache Maven version 3.6.3 or later
Fork the repository and clone
git clone git@github.com:username/gephi.git
Run the following command or open the project in an IDE
mvn -T 4 clean install
Once built, you can test-run Gephi:
cd modules/application
mvn nbm:cluster-app nbm:run-platform
Note that while Gephi can be built using JDK 11 or later, it currently requires JDK 11 to run.
Gephi is extensible and lets developers create plug-ins to add new features, or to modify existing features. For example, you can create a new layout algorithm, add a metric, create a filter or a tool, support a new file format or database, or modify the visualization.
Plugins Quick Start (5 minutes)
Browse the plugins created by the community
We've created a Plugins Bootcamp to learn by examples.
The Gephi Toolkit project packages essential Gephi modules (Graph, Layout, Filters, IO…) in a standard Java library that any Java project can use. It can be used on a server or in a command-line tool to do the same things Gephi does, but automatically.
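For example, a headless program might bootstrap a project like this (a minimal sketch of Toolkit usage; the import/layout/export calls that follow are up to you):

import org.gephi.project.api.ProjectController;
import org.openide.util.Lookup;

public class HeadlessGephi {
    public static void main(String[] args) {
        // Obtain the controller via the NetBeans Lookup mechanism and
        // create a project, just like the desktop application does at startup.
        ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
        pc.newProject();
        // ... import a graph, run a layout, export a rendering ...
    }
}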
We use Weblate for localization. Follow the guidelines on the wiki for details on how to contribute.
Gephi uses icons from various sources. The icons are licensed under the CC BY 3.0 license.
All icons can be found in the DesktopIcons module, organised by module name.
Author: Gephi
Source Code: https://github.com/gephi/gephi
License: GPL-3.0, Unknown licenses found
gfx-rs is a low-level, cross-platform graphics and compute abstraction library in Rust. It consists of the following components:

- gfx-hal, gfx's hardware abstraction layer: a Vulkan-ic, mostly unsafe API that translates to native graphics backends.
- gfx-backend-*, which contains graphics backends for various platforms.
- gfx-warden, a data-driven reference test framework used to verify consistency across all graphics backends.

As of the v0.9 release, gfx-hal is now in maintenance mode. gfx-hal development was mainly driven by wgpu, which has now switched to its own GPU abstraction called wgpu-hal. For this reason, gfx-hal development has switched to maintenance only, until the developers figure out the story for gfx-portability. Read more about the transition in #3768.

gfx-rs is hard to use; it's recommended for performance-sensitive libraries and engines. If that's not your domain, take a look at wgpu-rs for a safe and simple alternative.
The Hardware Abstraction Layer (HAL) is a thin, low-level graphics and compute layer which translates API calls to various backends, allowing cross-platform support. The API of this layer is based on the Vulkan API, adapted to be more Rust-friendly.
Currently HAL has backends for Vulkan, DirectX 12/11, Metal, and OpenGL/OpenGL ES/WebGL.
The HAL layer is consumed directly by user applications or libraries. HAL is also used in efforts such as gfx-portability.
See the Big Picture blog post for connections.
The gfx crate (pre-ll)

This repository was originally home to the gfx crate, which is now deprecated. You can find the latest versions of the code for that crate in the pre-ll branch of this repository.

The master branch of this repository is now focused on developing gfx-hal and its associated backend and helper libraries, as described above. gfx-hal is a complete rewrite of gfx, but it is not necessarily the direct successor to gfx. Instead, it serves a different purpose than the original gfx crate, by being "lower level" than the original. Hence, gfx-hal was originally named ll, which stands for "lower level", and the original gfx is now referred to as pre-ll.
The spiritual successor to the original gfx is actually wgpu, which stands at a similar level of abstraction to the old gfx crate, but with a modernized API that is more fit for use over Vulkan/DX12/Metal. If you want something similar to the old gfx crate that is being actively developed, wgpu is probably what you're looking for, rather than gfx-hal.
We are actively looking for new contributors and aim to be welcoming and helpful to anyone that is interested! We know the code base can be a bit intimidating in size and depth at first, and to this end we have a label on the issue tracker which marks issues that are new contributor friendly and have some basic direction for completion in the issue comments. If you have any questions about any of these issues (or any other issues) you may want to work on, please comment on GitHub and/or drop a message in our Matrix chat!
Author: gfx-rs
Source Code: https://github.com/gfx-rs/gfx
License: Apache-2.0, MIT licenses found
1676164500
Azul is a free, functional, reactive GUI framework for Rust, C and C++, built using the WebRender rendering engine and a CSS / HTML-like document object model for rapid development of beautiful, native desktop applications.
Azul uses webrender (the rendering engine behind Firefox) to render your UI, so it supports lots of common CSS features like:
See the list of supported CSS keys / values for more info.
On top of that, Azul features...
* static linking not yet available
** C++ bindings and Python are not yet stabilized and might not work depending on the branch you're using. They will be stabilized before the release.
from azul import *

class DataModel:
    def __init__(self, counter):
        self.counter = counter

def render_dom(data, info):
    label = Dom.text("{}".format(data.counter))
    label.set_inline_style("font-size: 50px;")

    button = Button("Increment counter")
    button.set_on_click(data, increment_counter)

    dom = Dom.body()
    dom.add_child(label)
    dom.add_child(button.dom())
    return dom.style(Css.empty())

def increment_counter(data, info):
    data.counter += 1
    return Update.RefreshDom

app = App(DataModel(5), AppConfig(LayoutSolver.Default))
app.run(WindowCreateOptions(render_dom))
use azul::prelude::*;
use azul::widgets::{button::Button, label::Label};
struct DataModel {
    counter: usize,
}

extern "C" fn render_dom(data: &mut RefAny, _: &mut LayoutInfo) -> StyledDom {
    let data = data.downcast_ref::<DataModel>()?;

    let label = Dom::text(format!("{}", data.counter))
        .with_inline_style("font-size: 50px;");

    let button = Button::new("Increment counter")
        .onmouseup(increment_counter, data.clone());

    Dom::body()
        .with_child(label)
        .with_child(button.dom())
        .style(Css::empty())
}

extern "C" fn increment_counter(data: &mut RefAny, _: &mut CallbackInfo) -> Update {
    let mut data = data.downcast_mut::<DataModel>()?;
    data.counter += 1;
    Update::RefreshDom // call render_dom() again
}

fn main() {
    let initial_data = RefAny::new(DataModel { counter: 0 });
    let app = App::new(initial_data, AppConfig::default());
    app.run(WindowCreateOptions::new(render_dom));
}
#include "azul.h"
typedef struct {
uint32_t counter;
} DataModel;
void DataModel_delete(DataModel* restrict A) { }
AZ_REFLECT(DataModel, DataModel_delete);
AzStyledDom render_dom(AzRefAny* data, AzLayoutInfo* info) {
DataModelRef d = DataModelRef_create(data);
if !(DataModel_downcastRef(data, &d)) {
return AzStyledDom_empty();
}
char buffer [20];
int written = snprintf(buffer, 20, "%d", d->counter);
AzString const labelstring = AzString_copyFromBytes(&buffer, 0, written);
AzDom label = AzDom_text(labelstring);
AzString const inline_css = AzString_fromConstStr("font-size: 50px;");
AzDom_setInlineStyle(&label, inline_css);
AzString const buttontext = AzString_fromConstStr("Increment counter");
AzButton button = AzButton_new(buttontext, AzRefAny_clone(data));
AzButton_setOnClick(&button, incrementCounter);
AzDom body = Dom_body();
AzDom_addChild(body, AzButton_dom(&button));
AzDom_addChild(body, label);
AzCss global_css = AzCss_empty();
return AzDom_style(body, global_css);
}
Update incrementCounter(RefAny* data, CallbackInfo* event) {
DataModelRefMut d = DataModelRefMut_create(data);
if !(DataModel_downcastRefMut(data, &d)) {
return Update_DoNothing;
}
d->ptr.counter += 1;
DataModelRefMut_delete(&d);
return Update_RefreshDom;
}
int main() {
DataModel model = { .counter = 5 };
AzApp app = AzApp_new(DataModel_upcast(model), AzAppConfig_default());
AzApp_run(app, AzWindowCreateOptions_new(render_dom));
return 0;
}
Author: fschutt
Source Code: https://github.com/fschutt/azul
License: MPL-2.0 and 3 other licenses found
1676160660
wgpu is a cross-platform, safe, pure-Rust graphics API. It runs natively on Vulkan, Metal, D3D12, D3D11, and OpenGL ES, and on top of WebGPU on wasm.
The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox, Servo, and Deno.
The Minimum Supported Rust Version is 1.64. It is enforced on CI (in /.github/workflows/ci.yml) with the RUST_VERSION variable. This version can only be upgraded in breaking releases.
The wgpu-core, wgpu-hal, and wgpu-types crates should never require an MSRV ahead of Firefox's MSRV for nightly builds, as determined by the value of MINIMUM_RUST_VERSION in python/mozboot/mozboot/util.py. However, Firefox uses cargo vendor to extract only those crates it actually uses, so the workspace's other crates can have more recent MSRVs.
Note for Rust 1.64: The workspace itself can even use a newer MSRV than Firefox, as long as the vendoring step's Cargo.toml rewriting removes any features Firefox's MSRV couldn't handle. For example, wgpu can use manifest key inheritance, added in Rust 1.64, even before Firefox reaches that MSRV, because cargo vendor copies inherited values directly into the individual crates' Cargo.toml files, producing 1.63-compatible files.
Rust examples can be found at wgpu/examples. You can run the examples with cargo run --example name. See the list of examples. For detailed instructions, look at Running the examples on the wiki.
If you are looking for a wgpu tutorial, look at the following:
To use wgpu in C/C++, you need wgpu-native.
If you want to use wgpu in other languages, there are many bindings to wgpu-native from languages such as Python, D, Julia, Kotlin, and more. See the list.
Angle is a translation layer from GLES to other backends, developed by Google. We support running our GLES3 backend over it in order to reach platforms with GLES2 or DX11 support, which aren't accessible otherwise. In order to run with Angle, the "angle" feature has to be enabled, and the Angle libraries placed in a location visible to the application. These binaries can be downloaded from gfbuild-angle artifacts; manual compilation may be required on Macs with Apple silicon.
On Windows, you generally need to copy them into the working directory, in the same directory as the executable, or somewhere in your PATH. On Linux, you can point to them using the LD_LIBRARY_PATH environment variable.
All testing and example infrastructure shares the same set of environment variables that determine which Backend/GPU it will run on.
- WGPU_ADAPTER_NAME with a substring of the name of the adapter you want to use (e.g. 1080 will match NVIDIA GeForce 1080ti).
- WGPU_BACKEND with a comma-separated list of the backends you want to use (vulkan, metal, dx12, dx11, or gl).
- WGPU_POWER_PREF with the power preference to choose when a specific adapter name isn't specified (high or low).
- WGPU_DX12_COMPILER with the DX12 shader compiler you wish to use (dxc or fxc; note that dxc requires dxil.dll and dxcompiler.dll to be in the working directory, otherwise it will fall back to fxc).

When running the CTS, use the variables DENO_WEBGPU_ADAPTER_NAME, DENO_WEBGPU_BACKEND, and DENO_WEBGPU_POWER_PREFERENCE.
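For example, to run the test suite on the Vulkan backend of a particular adapter (the values here are illustrative, not prescriptive):

WGPU_BACKEND=vulkan WGPU_ADAPTER_NAME=1080 cargo nextest run --no-fail-fast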
We have multiple methods of testing, each of which tests different qualities about wgpu. We automatically run our tests on CI if possible. The current state of CI testing:
Backend/Platform | Tests | CTS | Notes |
---|---|---|---|
DX12/Windows 10 | :heavy_check_mark: | :heavy_check_mark: | using WARP |
DX11/Windows 10 | :construction: | — | using WARP |
Metal/MacOS | — | — | metal requires GPU |
Vulkan/Linux | :heavy_check_mark: | :x: | using lavapipe; CTS hangs |
GLES/Linux | :heavy_check_mark: | — | using llvmpipe |
We use a tool called cargo nextest to run our tests. To install it, run cargo install cargo-nextest.
To run the test suite on the default device:
cargo nextest run --no-fail-fast
wgpu-info can run the tests once for each adapter on your system:
cargo run --bin wgpu-info -- cargo nextest run --no-fail-fast
Then to run an example's image comparison tests, run:
cargo nextest run <example-test-name> --no-fail-fast
Or run a part of the integration test suite:
cargo nextest run -p wgpu -- <name-of-test>
If you are a user and want a way to help contribute to wgpu, we always need more help writing test cases.
WebGPU includes a Conformance Test Suite to validate that implementations are working correctly. We can run this CTS against wgpu.
To run the CTS, first you need to check it out:
git clone https://github.com/gpuweb/cts.git
cd cts
# works in bash and powershell
git checkout $(cat ../cts_runner/revision.txt)
To run a given set of tests:
# Must be inside the cts folder we just checked out, else this will fail
cargo run --manifest-path ../cts_runner/Cargo.toml -- ./tools/run_deno --verbose "<test string>"
To find the full list of tests, go to the online cts viewer.
The list of currently enabled CTS tests can be found here.
The wgpu crate is meant to be an idiomatic Rust translation of the WebGPU API. That specification, along with its shading language, WGSL, are both still in the "Working Draft" phase, and while the general outlines are stable, details change frequently. Until the specification is stabilized, the wgpu crate and the version of WGSL it implements will likely differ from what is specified, as the implementation catches up.
Exactly which WGSL features wgpu supports depends on how you are using it:

- When running as native code, wgpu uses the Naga crate to translate WGSL code into the shading language of your platform's native GPU API. Naga has a milestone for catching up to the WGSL specification, but in general there is no up-to-date summary of the differences between Naga and the WGSL spec.
- When running in a web browser (by compilation to WebAssembly) without the "webgl" feature enabled, wgpu relies on the browser's own WebGPU implementation. WGSL shaders are simply passed through to the browser, so that determines which WGSL features you can use.
- When running in a web browser with wgpu's "webgl" feature enabled, wgpu uses Naga to translate WGSL programs into GLSL. This uses the same version of Naga as if you were running wgpu as native code.
wgpu uses the coordinate systems of D3D and Metal:
The repository hosts several libraries, plus the following binaries:

- cts_runner: WebGPU Conformance Test Suite runner using deno_webgpu.
- player: standalone application for replaying the API traces.
- wgpu-info: program that prints out information about all the adapters on the system or invokes a command for every adapter.

For an overview of all the components in the gfx-rs ecosystem, see the big picture.
We have the Matrix space with a few different rooms that form the wgpu community:
We have a wiki that serves as a knowledge base.
API | Windows | Linux & Android | macOS & iOS |
---|---|---|---|
Vulkan | ✅ | ✅ | 🆗 (vulkan-portability) |
Metal | | | ✅ |
DX12 | ✅ (W10+ only) | | |
DX11 | 🛠️ | | |
GLES3 | | 🆗 | |
Angle | 🆗 | 🆗 | 🆗 (macOS only) |
✅ = First Class Support — 🆗 = Best Effort Support — 🛠️ = Unsupported, but support in progress
wgpu supports shaders in WGSL, SPIR-V, and GLSL. Both HLSL and GLSL have compilers to target SPIR-V. All of these shader languages can be used with any backend; we handle all of the conversion. Additionally, support for these shader inputs is not going away.
While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.
WGSL is always supported by default, but GLSL and SPIR-V need features enabled to compile in support.
Note that the WGSL specification is still under development, so the draft specification does not exactly describe what wgpu supports. See the WGSL notes above for details.
To enable SPIR-V shaders, enable the spirv feature of wgpu. To enable GLSL shaders, enable the glsl feature of wgpu.
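As a sketch of the default WGSL path (assuming you already have a wgpu::Device in scope and a shader.wgsl file next to the source; no extra features are needed for WGSL):

// Embed the WGSL source at compile time and create a shader module from it.
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
    label: Some("example shader"),
    source: wgpu::ShaderSource::Wgsl(include_str!("shader.wgsl").into()),
});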
Author: gfx-rs
Source Code: https://github.com/gfx-rs/wgpu
License: Apache-2.0, MIT licenses found
The goal is to have something as similar to GPUImage (for iOS) as possible. Vertex and fragment shaders are exactly the same. That makes it easier to port filters from GPUImage iOS to Android.
repositories {
    mavenCentral()
}

dependencies {
    implementation 'jp.co.cyberagent.android:gpuimage:2.x.x'
}
Java:
@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity);

    Uri imageUri = ...;
    gpuImage = new GPUImage(this);
    gpuImage.setGLSurfaceView((GLSurfaceView) findViewById(R.id.surfaceView));
    gpuImage.setImage(imageUri); // this loads image on the current thread, should be run in a thread
    gpuImage.setFilter(new GPUImageSepiaFilter());

    // Later when the image should be saved:
    gpuImage.saveToPictures("GPUImage", "ImageWithFilter.jpg", null);
}
Kotlin:
public override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_gallery)

    val imageUri: Uri = ...
    gpuImage = GPUImage(this)
    gpuImage.setGLSurfaceView(findViewById<GLSurfaceView>(R.id.surfaceView))
    gpuImage.setImage(imageUri) // this loads image on the current thread, should be run in a thread
    gpuImage.setFilter(GPUImageSepiaFilter())

    // Later when the image should be saved:
    gpuImage.saveToPictures("GPUImage", "ImageWithFilter.jpg", null)
}
<jp.co.cyberagent.android.gpuimage.GPUImageView
android:id="@+id/gpuimageview"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:gpuimage_show_loading="false"
app:gpuimage_surface_type="texture_view" /> <!-- surface_view or texture_view -->
Java:
@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity);

    Uri imageUri = ...;
    gpuImageView = findViewById(R.id.gpuimageview);
    gpuImageView.setImage(imageUri); // this loads image on the current thread, should be run in a thread
    gpuImageView.setFilter(new GPUImageSepiaFilter());

    // Later when the image should be saved:
    gpuImageView.saveToPictures("GPUImage", "ImageWithFilter.jpg", null);
}
Kotlin:
public override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_gallery)

    val imageUri: Uri = ...
    gpuImageView = findViewById<GPUImageView>(R.id.gpuimageview)
    gpuImageView.setImage(imageUri) // this loads image on the current thread, should be run in a thread
    gpuImageView.setFilter(GPUImageSepiaFilter())

    // Later when the image should be saved:
    gpuImageView.saveToPictures("GPUImage", "ImageWithFilter.jpg", null)
}
Java:
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    Uri imageUri = ...;
    gpuImage = new GPUImage(context);
    gpuImage.setFilter(new GPUImageSobelEdgeDetection());
    gpuImage.setImage(imageUri);
    gpuImage.saveToPictures("GPUImage", "ImageWithFilter.jpg", null);
}
Kotlin:
public override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_gallery)

    val imageUri: Uri = ...
    gpuImage = GPUImage(this)
    gpuImage.setFilter(GPUImageSepiaFilter())
    gpuImage.setImage(imageUri)
    gpuImage.saveToPictures("GPUImage", "ImageWithFilter.jpg", null)
}
Make sure that you run the clean target when using maven.
gradle clean assemble
Idea from: iOS GPUImage framework
Author: Cats-oss
Source Code: https://github.com/cats-oss/android-gpuimage
Minecraft clone for Windows, Mac OS X and Linux. Just a few thousand lines of C using modern OpenGL (shaders). Online multiplayer support is included using a Python-based server.
Mac and Windows binaries are available on the website.
http://www.michaelfogleman.com/craft/
See below to run from source.
Download and install CMake if you don't already have it. On macOS, you may use Homebrew to simplify the installation:
brew install cmake
On Ubuntu (and other Debian-based distributions), install the dependencies with:
sudo apt-get install cmake libglew-dev xorg-dev libcurl4-openssl-dev
sudo apt-get build-dep glfw
Download and install CMake and MinGW. Add C:\MinGW\bin to your PATH.
Download and install cURL so that CURL/lib and CURL/include are in your Program Files directory.
Use the following commands in place of the ones described in the next section.
cmake -G "MinGW Makefiles"
mingw32-make
Once you have the dependencies (see above), run the following commands in your terminal.
git clone https://github.com/fogleman/Craft.git
cd Craft
cmake .
make
./craft
After many years, craft.michaelfogleman.com has been taken down. See the Server section for info on self-hosting.
You can connect to a server with command line arguments...
./craft [HOST [PORT]]
Or, with the "/online" command in the game itself.
/online [HOST [PORT]]
You can run your own server. The server is written in Python but requires a compiled DLL so it can perform the terrain generation just like the client.
gcc -std=c99 -O3 -fPIC -shared -o world -I src -I deps/noise deps/noise/noise.c src/world.c
python server.py [HOST [PORT]]
/goto [NAME]
Teleport to another user. If NAME is unspecified, a random user is chosen.
/list
Display a list of connected users.
/login NAME
Switch to another registered username. The login server will be re-contacted. The username is case-sensitive.
/logout
Unauthenticate and become a guest user. Automatic logins will not occur again until the /login command is re-issued.
/offline [FILE]
Switch to offline mode. FILE specifies the save file to use and defaults to "craft".
/online HOST [PORT]
Connect to the specified server.
/pq P Q
Teleport to the specified chunk.
/spawn
Teleport back to the spawn point.
The terrain is generated using Simplex noise - a deterministic noise function seeded based on position. So the world will always be generated the same way in a given location.
The world is split up into 32x32 block chunks in the XZ plane (Y is up). This allows the world to be “infinite” (floating point precision is currently a problem at large X or Z values) and also makes it easier to manage the data. Only visible chunks need to be queried from the database.
Only exposed faces are rendered. This is an important optimization as the vast majority of blocks are either completely hidden or are only exposing one or two faces. Each chunk records a one-block width overlap for each neighboring chunk so it knows which blocks along its perimeter are exposed.
Only visible chunks are rendered. A naive frustum-culling approach is used to test if a chunk is in the camera’s view. If it is not, it is not rendered. This results in a pretty decent performance improvement as well.
Chunk buffers are completely regenerated when a block is changed in that chunk, instead of trying to update the VBO.
Text is rendered using a bitmap atlas. Each character is rendered onto two triangles forming a 2D rectangle.
“Modern” OpenGL is used - no deprecated, fixed-function pipeline functions are used. Vertex buffer objects are used for position, normal and texture coordinates. Vertex and fragment shaders are used for rendering. Matrix manipulation functions are in matrix.c for translation, rotation, perspective, orthographic, etc. matrices. The 3D models are made up of very simple primitives - mostly cubes and rectangles. These models are generated in code in cube.c.
Transparency in glass blocks and plants (plants don’t take up the full rectangular shape of their triangle primitives) is implemented by discarding magenta-colored pixels in the fragment shader.
User changes to the world are stored in a sqlite database. Only the delta is stored, so the default world is generated and then the user changes are applied on top when loading.
The main database table is named “block” and has columns p, q, x, y, z, w. (p, q) identifies the chunk, (x, y, z) identifies the block position and (w) identifies the block type. 0 represents an empty block (air).
In game, the chunks store their blocks in a hash map. An (x, y, z) key maps to a (w) value.
The y-position of blocks is limited to 0 <= y < 256. The upper limit is mainly an artificial limitation to prevent users from building unnecessarily tall structures. Users are not allowed to destroy blocks at y = 0 to avoid falling underneath the world.
Multiplayer mode is implemented using plain-old sockets. A simple, ASCII, line-based protocol is used. Each line is made up of a command code and zero or more comma-separated arguments. The client requests chunks from the server with a simple command: C,p,q,key. “C” means “Chunk” and (p, q) identifies the chunk. The key is used for caching - the server will only send block updates that have been performed since the client last asked for that chunk. Block updates (in realtime or as part of a chunk request) are sent to the client in the format: B,p,q,x,y,z,w. After sending all of the blocks for a requested chunk, the server will send an updated cache key in the format: K,p,q,key. The client will store this key and use it the next time it needs to ask for that chunk. Player positions are sent in the format: P,pid,x,y,z,rx,ry. The pid is the player ID and the rx and ry values indicate the player’s rotation in two different axes. The client interpolates player positions from the past two position updates for smoother animation. The client sends its position to the server at most every 0.1 seconds (less if not moving).
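As an illustration of the format (this is not Craft's actual source, just the line layout described above), encoding and decoding a block update looks like:

#include <stdio.h>

int main(void) {
    /* Encode a block update the way the server sends it: B,p,q,x,y,z,w */
    char line[64];
    snprintf(line, sizeof(line), "B,%d,%d,%d,%d,%d,%d", 3, -2, 100, 12, -70, 5);

    /* Decode it the way a client parses it. */
    int p, q, x, y, z, w;
    if (sscanf(line, "B,%d,%d,%d,%d,%d,%d", &p, &q, &x, &y, &z, &w) == 6) {
        printf("chunk (%d,%d): block (%d,%d,%d) -> type %d\n", p, q, x, y, z, w);
    }
    return 0;
}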
Client-side caching to the sqlite database can be performance intensive when connecting to a server for the first time. For this reason, sqlite writes are performed on a background thread. All writes occur in a transaction for performance. The transaction is committed every 5 seconds as opposed to some logical amount of work completed. A ring / circular buffer is used as a queue for what data is to be written to the database.
In multiplayer mode, players can observe one another in the main view or in a picture-in-picture view. Implementation of the PiP was surprisingly simple - just change the viewport and render the scene again from the other player's point of view.
Hit testing (what block the user is pointing at) is implemented by scanning a ray from the player’s position outward, following their sight vector. This is not a precise method, so the step rate can be made smaller to be more accurate.
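A sketch of that scanning loop (block_at is a hypothetical world lookup here; 0 means air):

#include <math.h>

extern int block_at(int x, int y, int z); /* hypothetical world lookup, 0 = air */

/* March along the sight vector in small steps until a solid block is found. */
int hit_test(float px, float py, float pz, float vx, float vy, float vz,
             int *hx, int *hy, int *hz) {
    const float step = 0.125f;       /* smaller step -> more accurate */
    const float max_distance = 8.0f; /* player reach */
    for (float t = 0; t < max_distance; t += step) {
        int x = (int)roundf(px + vx * t);
        int y = (int)roundf(py + vy * t);
        int z = (int)roundf(pz + vz * t);
        if (block_at(x, y, z)) {
            *hx = x; *hy = y; *hz = z;
            return 1; /* hit */
        }
    }
    return 0; /* nothing within reach */
}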
Collision testing simply adjusts the player’s position to remain a certain distance away from any adjacent blocks that are obstacles. (Clouds and plants are not marked as obstacles, so you pass right through them.)
A textured sky dome is used for the sky. The X-coordinate of the texture represents time of day. The Y-values map from the bottom of the sky sphere to the top of the sky sphere. The player is always in the center of the sphere. The fragment shaders for the blocks also sample the sky texture to determine the appropriate fog color to blend with based on the block’s position relative to the backing sky.
Ambient occlusion is implemented as described on this page:
http://0fps.wordpress.com/2013/07/03/ambient-occlusion-for-minecraft-like-worlds/
http://www.michaelfogleman.com/craft/
Author: Fogleman
Source Code: https://github.com/fogleman/Craft
License: MIT license
openage: a volunteer project to create a free engine clone of the Genie Engine used by Age of Empires, Age of Empires II (HD) and Star Wars: Galactic Battlegrounds, comparable to projects like OpenMW, OpenRA, OpenSAGE, OpenTTD and OpenRCT2. At the moment we focus our efforts on the integration of Age of Empires II, while being primarily aimed at POSIX platforms such as GNU/Linux.
openage uses the original game assets (such as sounds and graphics), but (for obvious reasons) doesn't ship them. To play, you require an original AoE II: TC, AoE II: HD or AoE II: DE installation (via Wine or Steam-Linux).
Technology | Component |
---|---|
C++20 | Engine core |
Python3 | Scripting, media conversion, in-game console, code generation |
Qt6 | Graphical user interface |
Cython | Python/C++ Glue code |
CMake | Build system |
OpenGL3.3 | Rendering, shaders |
SDL2 | Cross-platform Audio/Input/Window handling |
Opus | Audio codec |
nyan | Content Configuration and Modding |
Humans | Mixing together all of the above |
But beware, for sanity reasons:
Important notice: At the moment, "gameplay" is basically non-functional. We're implementing the internal game simulation (how units even do anything) with simplicity and extensibility in mind, so we had to get rid of the temporary (but kind of working) previous version. With these changes we can (finally) actually make use of our converted asset packs and our nyan API! We're working day and night to make gameplay return*. If you're interested, we wrote detailed explanations on our blog: Part 1, Part 2.
* may not actually be every day and night
Operating System | Build status |
---|---|
Debian Sid | Todo: Kevin #11 |
Ubuntu 22.04 LTS | (CI badge) |
macOS | (CI badge) |
Windows Server 2019 | (CI badge) |
Windows Server 2022 | (CI badge) |
Which platforms are supported?
What features are currently implemented?
What's the plan?
There are many parts missing for an actually working game. So if you "just wanna play", you'll be disappointed, unfortunately.
We strongly recommend building the program from source to get the latest, greatest, and shiniest project state :)
For Linux, check repology to see if your distribution has any packages available. Otherwise you need to build from source. We don't release *.deb, *.rpm, Flatpak, snap or AppImage packages yet.
For Windows check our release page for the latest installer. Otherwise, you need to build from source.
For macOS we currently don't have any packages, you need to build from source.
If you need help, maybe our troubleshooting guide helps you.
How do I get this to run on my box?
I compiled everything. Now how do I run it?
Execute ./bin/run.

Waaaaaah! It
- segfaults
- prints error messages I don't want to read
- ate my dog
All of those are features, not bugs.
To turn them off, use ./bin/run --dont-segfault --no-errors --dont-eat-dog.
If this still does not help, try our troubleshooting guide, the contact section or the bug tracker.
Contributing
You might ask yourself now "Yeah, this sounds cool and all, but how do I participate and ~~get famous~~ contribute useful features?".
Fortunately for you, there is a lot to do and we are very grateful for help.
Then openage might be a good reason to become one! We have many issues and tasks for beginners. You just have to ask and we'll find something. Alternatively, lurking is also allowed.
Cheers, happy hacking!
What does openage development look like in practice?
How can I help?
All documentation is also in this repo:
Contact | Where? |
---|---|
Issue Tracker | GitHub SFTtech/openage |
Development Blog | blog.openage.dev |
Subreddit | |
Discussions | GitHub Discussions |
Matrix Chat | #sfttech:matrix.org |
Money Sink |
Author: SFTtech
Source Code: https://github.com/SFTtech/openage
License: View license
Filament is a real-time physically based rendering engine for Android, iOS, Linux, macOS, Windows, and WebGL. It is designed to be as small as possible and as efficient as possible on Android.
Android projects can simply declare Filament libraries as Maven dependencies:
repositories {
    // ...
    mavenCentral()
}

dependencies {
    implementation 'com.google.android.filament:filament-android:1.31.3'
}
Here are all the libraries available in the group com.google.android.filament:
iOS projects can use CocoaPods to install the latest release:
pod 'Filament', '~> 1.31.3'
If you prefer to live on the edge, you can download a continuous build by following these steps:
- matc and how to write custom materials
- Encodings
- Primitive Types
- Animation
- Extensions
You must create an Engine, a Renderer and a SwapChain. The SwapChain is created from a native window pointer (an NSView on macOS or a HWND on Windows, for instance):
Engine* engine = Engine::create();
SwapChain* swapChain = engine->createSwapChain(nativeWindow);
Renderer* renderer = engine->createRenderer();
To render a frame you must then create a View, a Scene and a Camera:
Camera* camera = engine->createCamera(EntityManager::get().create());
View* view = engine->createView();
Scene* scene = engine->createScene();
view->setCamera(camera);
view->setScene(scene);
Renderables are added to the scene:
Entity renderable = EntityManager::get().create();
// build a quad
RenderableManager::Builder(1)
    .boundingBox({{ -1, -1, -1 }, { 1, 1, 1 }})
    .material(0, materialInstance)
    .geometry(0, RenderableManager::PrimitiveType::TRIANGLES, vertexBuffer, indexBuffer, 0, 6)
    .culling(false)
    .build(*engine, renderable);
scene->addEntity(renderable);
The material instance is obtained from a material, itself loaded from a binary blob generated by matc:
Material* material = Material::Builder()
    .package((void*) BAKED_MATERIAL_PACKAGE, sizeof(BAKED_MATERIAL_PACKAGE))
    .build(*engine);
MaterialInstance* materialInstance = material->createInstance();
To learn more about materials and matc, please refer to the materials documentation.
To render, simply pass the View to the Renderer:
// beginFrame() returns false if we need to skip a frame
if (renderer->beginFrame(swapChain)) {
    // for each View
    renderer->render(view);
    renderer->endFrame();
}
For complete examples of Linux, macOS and Windows Filament applications, look at the source files in the samples/ directory. These samples are all based on libs/filamentapp/, which contains the code that creates a native window with SDL2 and initializes the Filament engine, renderer and views.
For more information on how to prepare environment maps for image-based lighting please refer to BUILDING.md.
See android/samples for examples of how to use Filament on Android.
You must always first initialize Filament by calling Filament.init().
Rendering with Filament on Android is similar to rendering from native code (the APIs are largely the same across languages). You can render into a Surface by passing a Surface to the createSwapChain method. This allows you to render to a SurfaceTexture, a TextureView or a SurfaceView. To make things easier we provide an Android-specific API called UiHelper in the package com.google.android.filament.android. All you need to do is set a render callback on the helper and attach your SurfaceView or TextureView to it. You are still responsible for creating the swap chain in the onNativeWindowChanged() callback.
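A rough Kotlin sketch of that wiring (hedged: engine, view, swapChain and surfaceView are assumed to exist already, and error handling is omitted):

val uiHelper = UiHelper(UiHelper.ContextErrorPolicy.DONT_CHECK)
uiHelper.renderCallback = object : UiHelper.RendererCallback {
    override fun onNativeWindowChanged(surface: Surface) {
        // You create (and re-create) the swap chain here, as noted above.
        swapChain?.let { engine.destroySwapChain(it) }
        swapChain = engine.createSwapChain(surface)
    }
    override fun onDetachedFromSurface() {
        swapChain?.let { engine.destroySwapChain(it) }
        swapChain = null
    }
    override fun onResized(width: Int, height: Int) {
        view.viewport = Viewport(0, 0, width, height)
    }
}
uiHelper.attachTo(surfaceView)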
Filament is supported on iOS 11.0 and above. See ios/samples for examples of using Filament on iOS.
Filament on iOS is largely the same as native rendering with C++. A CAEAGLLayer or CAMetalLayer is passed to the createSwapChain method. Filament for iOS supports both Metal (preferred) and OpenGL ES.
To get started you can use the textures and environment maps found respectively in third_party/textures and third_party/environments. These assets are under the CC0 license. Please refer to their respective URL.txt files to know more about the original authors.
Environments must be pre-processed using cmgen or the libiblprefilter library.
Please read and follow the steps in CONTRIBUTING.md. Make sure you are familiar with the code style.
This repository not only contains the core Filament engine, but also its supporting libraries and tools.
- android: Android libraries and projects
  - filamat-android: Filament material generation library (AAR) for Android
  - filament-android: Filament library (AAR) for Android
  - filament-utils-android: Extra utilities (KTX loader, math types, etc.)
  - gltfio-android: Filament glTF loading library (AAR) for Android
  - samples: Android-specific Filament samples
- art: Source for various artworks (logos, PDF manuals, etc.)
- assets: 3D assets to use with sample applications
- build: CMake build scripts
- docs: Documentation
  - math: Mathematica notebooks used to explore BRDFs, equations, etc.
- filament: Filament rendering engine (minimal dependencies)
  - backend: Rendering backends/drivers (Vulkan, Metal, OpenGL/ES)
- ide: Configuration files for IDEs (CLion, etc.)
- ios: Sample projects for iOS
- libs: Libraries
  - bluegl: OpenGL bindings for macOS, Linux and Windows
  - bluevk: Vulkan bindings for macOS, Linux, Windows and Android
  - camutils: Camera manipulation utilities
  - filabridge: Library shared by the Filament engine and host tools
  - filaflat: Serialization/deserialization library used for materials
  - filagui: Helper library for Dear ImGui
  - filamat: Material generation library
  - filamentapp: SDL2 skeleton to build sample apps
  - filameshio: Tiny filamesh parsing library (see also tools/filamesh)
  - geometry: Mesh-related utilities
  - gltfio: Loader for glTF 2.0
  - ibl: IBL generation tools
  - image: Image filtering and simple transforms
  - imageio: Image file reading / writing, only intended for internal use
  - matdbg: DebugServer for inspecting shaders at run-time (debug builds only)
  - math: Math library
  - mathio: Math types support for output streams
  - utils: Utility library (threads, memory, data structures, etc.)
  - viewer: glTF viewer library (requires gltfio)
- samples: Sample desktop applications
- shaders: Shaders used by filamat and matc
- third_party: External libraries and assets
  - environments: Environment maps under CC0 license that can be used with cmgen
  - models: Models under permissive licenses
  - textures: Textures under CC0 license
- tools: Host tools
  - cmgen: Image-based lighting asset generator
  - filamesh: Mesh converter
  - glslminifier: Minifies GLSL source code
  - matc: Material compiler
  - matinfo: Displays information about materials compiled with matc
  - mipgen: Generates a series of miplevels from a source image
  - normal-blending: Tool to blend normal maps
  - resgen: Aggregates binary blobs into embeddable resources
  - roughness-prefilter: Pre-filters a roughness map from a normal map to reduce aliasing
  - specular-color: Computes the specular color of conductors based on spectral data
- web: JavaScript bindings, documentation, and samples

Download Filament releases to access stable builds. Filament release archives contain host-side tools that are required to generate assets.
Make sure you always use tools from the same release as the runtime library. This is particularly important for matc (material compiler).
If you'd rather build Filament yourself, please refer to our build manual.
Author: Google
Source Code: https://github.com/google/filament
License: Apache-2.0 license
Tiny Renderer or how OpenGL works: software rendering in 500 lines of code
git clone https://github.com/ssloy/tinyrenderer.git &&
cd tinyrenderer &&
mkdir build &&
cd build &&
cmake .. &&
cmake --build . -j &&
./tinyrenderer ../obj/diablo3_pose/diablo3_pose.obj ../obj/floor.obj
The rendered image is saved to framebuffer.tga.
You can open the project in Gitpod, a free online dev environment for GitHub:
On open, the editor will compile & run the program as well as open the resulting image in the editor's preview. Just change the code in the editor and rerun the script (use the terminal's history) to see updated images.
My source code is irrelevant. Read the wiki and implement your own renderer. Only when you suffer through all the tiny details will you learn what is going on.
In this series of articles, I want to show the way OpenGL works by writing its clone (a much simplified one). Surprisingly enough, I often meet people who cannot overcome the initial hurdle of learning OpenGL / DirectX. Thus, I have prepared a short series of lectures, after which my students show quite good renderers.
So, the task is formulated as follows: using no third-party libraries (especially graphic ones), get something like this picture:
Warning: this is a training material that will loosely repeat the structure of the OpenGL library. It will be a software renderer. I do not want to show how to write applications for OpenGL. I want to show how OpenGL works. I am deeply convinced that it is impossible to write efficient applications using 3D libraries without understanding this.
I will try to make the final code about 500 lines. My students need 10 to 20 programming hours to begin making such renderers. At the input, we get a test file with a polygonal wire + pictures with textures. At the output, we’ll get a rendered model. No graphical interface, the program simply generates an image.
Since the goal is to minimize external dependencies, I give my students just one class that allows working with TGA files. It’s one of the simplest formats that supports images in RGB/RGBA/black and white formats. So, as a starting point, we’ll obtain a simple way to work with pictures. You should note that the only functionality available at the very beginning (in addition to loading and saving images) is the capability to set the color of one pixel.
There are no functions for drawing line segments and triangles. We’ll have to do all of this by hand. I provide my source code that I write in parallel with students. But I would not recommend using it, as this doesn’t make sense. The entire code is available on github, and here you will find the source code I give to my students.
#include "tgaimage.h"
const TGAColor white = TGAColor(255, 255, 255, 255);
const TGAColor red = TGAColor(255, 0, 0, 255);
int main(int argc, char** argv) {
    TGAImage image(100, 100, TGAImage::RGB);
    image.set(52, 41, red);
    image.write_tga_file("output.tga");
    return 0;
}
output.tga should look something like this:
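As a taste of what the first lesson builds from that single capability, here is a Bresenham-style line routine written only in terms of image.set() (a sketch along the lines of the wiki's first lesson):

#include <cmath>
#include <utility>

// Draw a line segment using only image.set(), one pixel at a time.
void line(int x0, int y0, int x1, int y1, TGAImage &image, TGAColor color) {
    bool steep = std::abs(x1 - x0) < std::abs(y1 - y0);
    if (steep) { std::swap(x0, y0); std::swap(x1, y1); }   // transpose steep lines
    if (x0 > x1) { std::swap(x0, x1); std::swap(y0, y1); } // always draw left to right
    for (int x = x0; x <= x1; x++) {
        float t = (x1 == x0) ? 0.0f : (x - x0) / (float)(x1 - x0);
        int y = (int)(y0 * (1.0f - t) + y1 * t);
        if (steep) image.set(y, x, color); // un-transpose when writing
        else       image.set(x, y, color);
    }
}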
Teaser: few examples made with the renderer
Check the wiki for the detailed lessons.
Author: ssloy
Source Code: https://github.com/ssloy/tinyrenderer
License: View license
Packaging status in various repositories:
See the kitty website.
To ask other questions about kitty usage, use either the discussions on GitHub or the Reddit community.
Author: Kovidgoyal
Source Code: https://github.com/kovidgoyal/kitty
License: GPL-3.0 license
🎺🎺🎺🎺🎺🎺🎺🎺🎺🎺🎺 The Metal with Swift 5.0 version is coming 🎺🎺🎺🎺🎺🎺🎺🎺🎺🎺🎺
360 VR Player
Author: Hanton
Source Code: https://github.com/hanton/HTY360Player
License: MIT license
This component implements a transition animation that crumbles a view controller into tiny pieces.
Check this project on dribbble.
Also, read how it was done in our blog
use_frameworks!
pod 'StarWars', '~> 2.0'
At first, import StarWars:
import StarWars
Then just implement a UIViewControllerTransitioningDelegate that returns our animation from the animationControllerForDismissedController method, and assign it to the transitioningDelegate of the view controller that you want to dismiss.
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    let destination = segue.destinationViewController
    destination.transitioningDelegate = self
}

func animationControllerForDismissedController(dismissed: UIViewController) -> UIViewControllerAnimatedTransitioning? {
    return StarWarsGLAnimator()
}
There are also two things you can customize in the Star Wars animation: duration and sprite sizes. Let’s see how you can do this:
let animator = StarWarsGLAnimator()
animator.duration = 2
animator.spriteWidth = 8
Have fun! :)
We'd be really happy if you sent us links to your projects where you use our component. Just send an email to github@yalantis.com. And do let us know if you have any questions or suggestions regarding the animation.
P.S. We’re going to publish more awesomeness wrapped in code and a tutorial on how to make UI for iOS (Android) better than better. Stay tuned!
- 1.0: Swift 2.0
- 2.0: Adds Swift 3.0 support
- 3.0: Adds Swift 4.0 support
- 4.0: Adds Swift 5.0 support

Author: Yalantis
Source Code: https://github.com/Yalantis/StarWars.iOS
License: MIT license
Julia interface to GLFW 3, a multi-platform library for creating windows with OpenGL or OpenGL ES contexts and receiving many kinds of input. GLFW has native support for Windows, OS X and many Unix-like systems using the X Window System, such as Linux and FreeBSD.
using GLFW

# Create a window and its OpenGL context
window = GLFW.CreateWindow(640, 480, "GLFW.jl")

# Make the window's context current
GLFW.MakeContextCurrent(window)

# Loop until the user closes the window
while !GLFW.WindowShouldClose(window)
    # Render here

    # Swap front and back buffers
    GLFW.SwapBuffers(window)

    # Poll for and process events
    GLFW.PollEvents()
end

GLFW.DestroyWindow(window)
Read the GLFW documentation for detailed instructions on how to use the library. The Julia interface is almost identical to the underlying C interface, with a few notable differences:
- The clipboard (glfwGetClipboard, glfwSetClipboard) and time (glfwGetTime, glfwSetTime) functions have been omitted because Julia's standard library already supports similar functionality.
- glfwInit and glfwTerminate are called automatically using the __init__ and atexit functions. While it is okay to still call them explicitly, it is redundant and not required.
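Callbacks are plain Julia functions. For example, to close the window above when Escape is pressed (a small sketch using the wrapper's naming scheme):

GLFW.SetKeyCallback(window, (win, key, scancode, action, mods) -> begin
    # Request shutdown when Escape is pressed; the render loop above will exit.
    if key == GLFW.KEY_ESCAPE && action == GLFW.PRESS
        GLFW.SetWindowShouldClose(win, true)
    end
end)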
Author: JuliaGL
Source Code: https://github.com/JuliaGL/GLFW.jl
License: MIT license