pyramid_sqlalchemy provides some basic glue to facilitate using SQLAlchemy with Pyramid.
SQLAlchemy relies on global state for a few things, such as the MetaData instance which tracks all known SQL tables. Every application using SQLAlchemy must provide its own instance of these objects. This makes it hard to create add-on packages that also use SQLAlchemy: they either need to keep their own SQLAlchemy state, which makes them hard to integrate into your application, or they need to jump through multiple complex hoops to share state with your application.
pyramid_sqlalchemy helps by providing a canonical location for the global SQLAlchemy state. In addition it provides a convenient way to configure SQLAlchemy in a Pyramid application.
from pyramid.config import Configurator
from pyramid_sqlalchemy import BaseObject


class MyModel(BaseObject):
    __tablename__ = 'my_model'
    ...


def main():
    config = Configurator()
    # Configure SQLAlchemy using settings from the .ini file
    config.include('pyramid_sqlalchemy')
    ...
    return config.make_wsgi_app()
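Once configured, both your application code and add-on packages can import the shared session from the same place. A minimal sketch of a view using it (assuming the Session factory described in pyramid_sqlalchemy's documentation):

from pyramid_sqlalchemy import Session

def list_models(request):
    # Session is the shared scoped-session factory provided by pyramid_sqlalchemy
    return {'models': Session.query(MyModel).all()}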
Author: wichert
Source Code: https://github.com/wichert/pyramid_sqlalchemy
License: View license
Create a JSON-API (http://jsonapi.org/) standard API from a database using the SQLAlchemy ORM and the Pyramid framework.
The core idea behind pyramid-jsonapi is to create a working JSON-API automatically, starting from the sort of models.py file shipped with a typical pyramid + sqlalchemy application.
Note
The default branch of pyramid_jsonapi is now the 2.2 branch.
Documentation is available at: https://colinhiggs.github.io/pyramid-jsonapi/
pip install pyramid_jsonapi
See the getting-started section of the documentation for other installation options, including installing development versions.
Declare your models somewhere using SQLAlchemy's declarative_base(). In this documentation we assume that you have done so in a file called models.py:
class Person(Base):
    __tablename__ = 'people'
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    name = Column(Text)
    blogs = relationship('Blog', backref='owner')
    posts = relationship('Post', backref='author')
    # and the rest...
If you are happy with the defaults, you can get away with the following additions to the standard pyramid alchemy scaffold's top level __init__.py:
import pyramid_jsonapi

from . import models  # Your models module.


def main(global_config, **settings):
    # The usual stuff from the pyramid alchemy setup.
    config = Configurator(settings=settings)

    # pyramid_jsonapi uses the renderer labelled 'json'. As usual, if you have
    # any types to serialise that the default JSON renderer can't handle, you
    # must alter it. For example:
    #
    # renderer = JSON(sort_keys=True)
    # renderer.add_adapter(datetime.date, datetime_adapter)
    # config.add_renderer('json', renderer)

    # Instantiate a PyramidJSONAPI class instance.
    pj = pyramid_jsonapi.PyramidJSONAPI(config, models)

    # If you are using pyramid 1.7 or older, you will need to pass a third
    # argument to the constructor: a callable which accepts a CollectionView
    # instance as an argument and returns a sqlalchemy database session.
    #
    # For example:
    # pj = pyramid_jsonapi.PyramidJSONAPI(
    #     config, models, lambda view: models.DBSession
    # )

    # Create the routes and views automagically:
    pj.create_jsonapi_using_magic_and_pixie_dust()

    # Routes and views are added imperatively, so no need for a scan - unless
    # you have defined other routes and views declaratively.
    return config.make_wsgi_app()
Or, without all the comments:
import pyramid_jsonapi

from . import models


def main(global_config, **settings):
    config = Configurator(settings=settings)
    pj = pyramid_jsonapi.PyramidJSONAPI(config, models)
    pj.create_jsonapi_using_magic_and_pixie_dust()
    return config.make_wsgi_app()
Yes, there really is a method called create_jsonapi_using_magic_and_pixie_dust(). No, you don't have to call it that. If you are feeling more sensible you can use the synonym create_jsonapi().
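Once the application is running, the generated collection endpoints speak JSON:API. A hypothetical client-side check (assuming the default configuration exposes the Person collection at /people and the app is served on localhost:6543):

import requests

resp = requests.get(
    'http://localhost:6543/people',
    headers={'Accept': 'application/vnd.api+json'},  # JSON:API media type
)
print(resp.json()['data'])  # Person resources serialised per the JSON:API spec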
Author: colinhiggs
Source Code: https://github.com/colinhiggs/pyramid-jsonapi
License: AGPL-3.0 license
Ziggurat Foundations
High-level mixins for adding authorization, resource ownership and permission management quickly and simply. In summary, Ziggurat Foundations is a framework-agnostic set of SQLAlchemy classes, so it can be used with Flask, Pyramid or other popular frameworks. It is well suited to handling complex login and user management systems, from e-commerce systems to private intranets or large CMS systems, and it can easily be extended to support any additional features you may need (explained further in the documentation).
Zigg has been used (at scale) for very large implementations (millions of real users) and has been extended for custom applications such as geo-location applications that rely on pin-point accuracy for a user's location. Zigg has been designed for high-end environments where users are the main focus of the application (for example, Zigg could become the backbone of a social-media-style application).
The aim of this project is to supply a set of generic models that cover the most common needs in application development when it comes to authorization, using flat and tree-like data structures. We provide the most commonly needed features of a "standard" application, but provide them as mixins because we understand that every implementation has its own use case; extending the base models is therefore very easy.
Zigg supplies extendable, robust and well-tested models, together with standard helper functions for working with them; the full list of models and functions is in the documentation.
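To make the mixin-based approach concrete, here is a minimal sketch (the import paths follow the Ziggurat Foundations documentation; the extra column is illustrative and not taken from this README):

from sqlalchemy import Column, Unicode
from sqlalchemy.ext.declarative import declarative_base
from ziggurat_foundations.models.user import UserMixin  # assumed import path

Base = declarative_base()

class User(UserMixin, Base):
    # UserMixin supplies the standard user columns and permission helpers;
    # you only add the application-specific fields you need.
    display_name = Column(Unicode(128))

# ziggurat_foundations.ziggurat_model_init(...) is then called with all of the
# mixin-based models to wire the relationships together (see the documentation).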
Ziggurat Foundations is BSD Licensed
Local development using docker
docker-compose run --rm app bash
cd ../application;
To run sqlite tests:
tox
To run postgres tests:
DB_STRING="postgresql://test:test@db:5432/test" DB=postgres tox
To run mysql tests:
DB_STRING="mysql+mysqldb://test:test@db_mysql/test" DB=mysql tox
Author: ergo
Source Code: https://github.com/ergo/ziggurat_foundations
License: BSD-3-Clause license
The Mixer is a helper to generate instances of Django or SQLAlchemy models. It's useful for testing and fixture replacement. Fast and convenient test-data generation.
Mixer supports Django, Flask, SQLAlchemy, Mongoengine, Marshmallow and plain Python classes (see the examples below).
Docs are available at https://mixer.readthedocs.org/. Pull requests with documentation enhancements and/or fixes are awesome and most welcome.
Description in Russian: http://klen.github.io/mixer.html
Important
From version 6.2 the Mixer library doesn't support Python 2. The latest version with python<3 support is mixer 6.1.3
Mixer should be installed using pip:
pip install mixer
By default Mixer tries to generate fake (human-friendly) data.
If you want to randomize the generated values instead, initialize the Mixer manually: Mixer(fake=False)
By default Mixer saves the generated objects in a database. If you want to disable this, initialize the Mixer manually, like Mixer(commit=False)
Quick example:
from mixer.backend.django import mixer
from customapp.models import User, UserMessage
# Generate a random user
user = mixer.blend(User)
# Generate a UserMessage
message = mixer.blend(UserMessage, user=user)
# Generate a UserMessage and a User. Set the username of the generated user to 'testname'.
message = mixer.blend(UserMessage, user__username='testname')
# Generate SomeModel from SomeApp and select FK or M2M values from db
some = mixer.blend('someapp.somemodel', somerelation=mixer.SELECT)
# Generate SomeModel from SomeApp and force a value of money field from default to random
some = mixer.blend('someapp.somemodel', money=mixer.RANDOM)
# Generate SomeModel from SomeApp and skip the generation of money field
some = mixer.blend('someapp.somemodel', money=mixer.SKIP)
# Generate 5 SomeModel's instances and take company field's values from custom generator
some_models = mixer.cycle(5).blend('somemodel', company=(name for name in company_names))
Quick example:
from mixer.backend.flask import mixer
from models import User, UserMessage
mixer.init_app(self.app)
# Generate a random user
user = mixer.blend(User)
# Generate a UserMessage
message = mixer.blend(UserMessage, user=user)
# Generate a UserMessage and a User. Set the username of the generated user to 'testname'.
message = mixer.blend(UserMessage, user__username='testname')
# Generate SomeModel and select FK or M2M values from db
some = mixer.blend('project.models.SomeModel', somerelation=mixer.SELECT)
# Generate SomeModel from SomeApp and force a value of money field from default to random
some = mixer.blend('project.models.SomeModel', money=mixer.RANDOM)
# Generate SomeModel from SomeApp and skip the generation of money field
some = mixer.blend('project.models.SomeModel', money=mixer.SKIP)
# Generate 5 SomeModel's instances and take company field's values from custom generator
some_models = mixer.cycle(5).blend('project.models.SomeModel', company=(company for company in companies))
To support models whose __init__ takes arguments (for example Flask-SQLAlchemy models), just create your own mixer class, like this:
from mixer.backend.sqlalchemy import Mixer


class MyOwnMixer(Mixer):

    def populate_target(self, values):
        target = self.__scheme(**values)
        return target


mixer = MyOwnMixer()
Example of initialization:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from mixer.backend.sqlalchemy import Mixer

ENGINE = create_engine('sqlite:///:memory:')
BASE = declarative_base()
SESSION = sessionmaker(bind=ENGINE)

mixer = Mixer(session=SESSION(), commit=True)
role = mixer.blend('package.models.Role')
Also, see Flask, Flask-SQLAlchemy.
Example usage:
import datetime

from mixer.backend.mongoengine import mixer
from mongoengine import (DateTimeField, Document, EmailField, ListField,
                         ReferenceField, StringField)


class User(Document):
    created_at = DateTimeField(default=datetime.datetime.now)
    email = EmailField(required=True)
    first_name = StringField(max_length=50)
    last_name = StringField(max_length=50)
    username = StringField(max_length=50)


class Post(Document):
    title = StringField(max_length=120, required=True)
    author = ReferenceField(User)
    tags = ListField(StringField(max_length=30))


post = mixer.blend(Post, author__username='foo')
Example usage:
import marshmallow as ma

from mixer.backend.marshmallow import mixer


class User(ma.Schema):
    created_at = ma.fields.DateTime(required=True)
    email = ma.fields.Email(required=True)
    first_name = ma.fields.String(required=True)
    last_name = ma.fields.String(required=True)
    username = ma.fields.String(required=True)


class Post(ma.Schema):
    title = ma.fields.String(required=True)
    author = ma.fields.Nested(User, required=True)


post = mixer.blend(Post, author__username='foo')
Quick example:
from mixer.main import mixer


class Test:
    one = int
    two = int
    name = str


class Scheme:
    name = str
    money = int
    male = bool
    prop = Test


scheme = mixer.blend(Scheme, prop__one=1)
By default the 'django', 'flask' and 'mongoengine' backends try to save generated objects in the database. To prevent this behaviour, initialize the mixer manually:
from mixer.backend.django import Mixer
mixer = Mixer(commit=False)
Or you can temporarily switch the context by using the mixer as a context manager:
from mixer.backend.django import mixer

# Will be saved to the db
user1 = mixer.blend('auth.user')

# Will not be saved to the db
with mixer.ctx(commit=False):
    user2 = mixer.blend('auth.user')
The mixer allows you to define generators for fields manually. Quick example:
from mixer.main import mixer


class Test:
    id = int
    name = str


mixer.register(Test,
               name=lambda: 'John',
               id=lambda: str(mixer.faker.small_positive_integer()))

test = mixer.blend(Test)
test.name == 'John'
isinstance(test.id, str)

# You could also pin just a value to a field
mixer.register(Test, name='Just John')
test = mixer.blend(Test)
test.name == 'Just John'
Also, you can make your own factory for field types:
from django.db import models

from mixer.backend.django import Mixer, GenFactory


def get_func(*args, **kwargs):
    return "Always same"


class MyFactory(GenFactory):
    generators = {
        models.CharField: get_func
    }

mixer = Mixer(factory=MyFactory)
You can add middleware layers to process generation:
from mixer.backend.django import mixer


# Register a middleware for a model
@mixer.middleware('auth.user')
def encrypt_password(user):
    user.set_password('test')
    return user
You can add several middlewares. Each middleware should take one argument (the generated value) and return it.
It's also possible to unregister a middleware:
mixer.unregister_middleware(encrypt_password)
By default mixer uses the 'en' locale. You can switch the default locale by creating your own mixer:
from mixer.backend.django import Mixer
mixer = Mixer(locale='it')
mixer.faker.name() ## u'Acchisio Conte'
At any time you can switch the mixer's current locale:
mixer.faker.locale = 'cz'
mixer.faker.name() ## u'Miloslava Urbanov\xe1 CSc.'
mixer.faker.locale = 'en'
mixer.faker.name() ## u'John Black'
# Use the mixer context manager
mixer.faker.phone()  ## u'1-438-238-1116'

with mixer.ctx(locale='fr'):
    mixer.faker.phone()  ## u'08 64 92 11 79'

mixer.faker.phone()  ## u'1-438-238-1116'
If you have any suggestions, bug reports or annoyances please report them to the issue tracker at https://github.com/klen/mixer/issues
Development of mixer happens at Github: https://github.com/klen/mixer
Author: klen
Source Code: https://github.com/klen/mixer
License: View license
Mypy plugin and stubs for SQLAlchemy
This package contains type stubs and a mypy plugin to provide more precise static types and type inference for SQLAlchemy framework. SQLAlchemy uses some Python "magic" that makes having precise types for some code patterns problematic. This is why we need to accompany the stubs with mypy plugins. The final goal is to be able to get precise types for most common patterns. Currently, basic operations with models are supported. A simple example:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String

Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)


user = User(id=42, name=42)  # Error: Incompatible type for "name" of "User"
                             # (got "int", expected "Optional[str]")
user.id  # Inferred type is "int"
User.name  # Inferred type is "Column[Optional[str]]"
Some auto-generated attributes are added to models. Simple relationships are supported but require models to be imported:
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from models.address import Address

...

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    address = relationship('Address')  # OK, mypy understands string references.
The next step is to support precise types for table definitions (e.g. inferring Column[Optional[str]] for users.c.name, where it is currently just Column[Any]), and precise types for the results of queries made using query() and select().
Install latest published version as:
pip install -U sqlalchemy-stubs
Important: you need to enable the plugin in your mypy config file:
[mypy]
plugins = sqlmypy
To install the development version of the package:
git clone https://github.com/dropbox/sqlalchemy-stubs
cd sqlalchemy-stubs
pip install -U .
First, clone the repo and cd into it, like in Installation, then:
git submodule update --init --recursive
pip install -r dev-requirements.txt
Then, to run the tests, simply:
pytest
The package is currently in alpha stage. See the issue tracker for bugs and missing features. If you want to contribute, a good place to start is the help-wanted label.
Currently, some basic use cases like inferring model field types are supported. The long term goal is to be able to infer types for more complex situations like correctly inferring columns in most compound queries.
External contributions to the project should be subject to Dropbox Contributor License Agreement (CLA).
Author: dropbox
Source Code: https://github.com/dropbox/sqlalchemy-stubs
License: Apache-2.0 license
Library for implementing Modified Preorder Tree Traversal with your SQLAlchemy Models and working with trees of Model instances, like django-mptt. Docs http://sqlalchemy-mptt.readthedocs.io/
The nested set model is a particular technique for representing nested sets (also known as trees or hierarchies) in relational databases.
Install from github:
pip install git+http://github.com/uralbash/sqlalchemy_mptt.git
PyPi:
pip install sqlalchemy_mptt
Source:
pip install -e .
Add the mixin to your model:
from sqlalchemy import Column, Integer, Boolean
from sqlalchemy.ext.declarative import declarative_base

from sqlalchemy_mptt.mixins import BaseNestedSets

Base = declarative_base()


class Tree(Base, BaseNestedSets):
    __tablename__ = "tree"

    id = Column(Integer, primary_key=True)
    visible = Column(Boolean)

    def __repr__(self):
        return "<Node (%s)>" % self.id
Now you can add, move and delete objects!
node = Tree(parent_id=6)
session.add(node)
level           Nested sets example
  1                     1(1)22
            _______________|___________________
           |               |                   |
  2      2(2)5           6(4)11             12(7)21
           |               ^                    ^
  3      3(3)4       7(5)8   9(6)10    13(8)16   17(10)20
                                           |          |
  4                                    14(9)15    18(11)19
level           Insert node with parent_id == 6
  1                     1(1)24
            _______________|_________________
           |               |                 |
  2      2(2)5           6(4)13           14(7)23
           |            ____|____          ___|____
           |           |         |        |        |
  3      3(3)4       7(5)8    9(6)12   15(8)18  19(10)22
                                  |        |        |
  4                          10(23)11  16(9)17  20(11)21
node = session.query(Tree).filter(Tree.id == 4).one()
session.delete(node)
level           Nested sets example
  1                     1(1)22
            _______________|___________________
           |               |                   |
  2      2(2)5           6(4)11             12(7)21
           |               ^                    ^
  3      3(3)4       7(5)8   9(6)10    13(8)16   17(10)20
                                           |          |
  4                                    14(9)15    18(11)19
level           Delete node == 4
  1                     1(1)16
            _______________|_____
           |                     |
  2      2(2)5                 6(7)15
           |                     ^
  3      3(3)4           7(8)10     11(10)14
                            |           |
  4                       8(9)9     12(11)13
node = session.query(Tree).filter(Tree.id == 8).one()
node.parent_id = 5
session.add(node)
level           Nested sets example
  1                     1(1)22
            _______________|___________________
           |               |                   |
  2      2(2)5           6(4)11             12(7)21
           |               ^                    ^
  3      3(3)4       7(5)8   9(6)10    13(8)16   17(10)20
                                           |          |
  4                                    14(9)15    18(11)19

level           Move 8 -> 5
  1                     1(1)22
            _______________|__________________
           |               |                  |
  2      2(2)5           6(4)15            16(7)21
           |               ^                  |
  3      3(3)4      7(5)12   13(6)14      17(10)20
                       |                      |
  4                 8(8)11                18(11)19
                       |
  5                 9(9)10
Nested sets multitree
Move inside
node = session.query(Tree).filter(Tree.id == 4).one()
node.move_inside("15")
4 -> 15
level           Nested sets tree1
  1                     1(1)16
            _______________|_____
           |                     |
  2      2(2)5                 6(7)15
           |                     ^
  3      3(3)4           7(8)10     11(10)14
                            |           |
  4                       8(9)9     12(11)13

level           Nested sets tree2
  1                         1(12)28
            ________________|_______________________
           |                |                       |
  2      2(13)5          6(15)17                18(18)27
           |                ^                       ^
  3      3(14)4   7(4)12  13(16)14  15(17)16  19(19)22  23(21)26
                    ^                             |         |
  4           8(5)9   10(6)11                 20(20)21  24(22)25
Move after
node = session.query(Tree).filter(Tree.id == 8).one()
node.move_after("5")
level           Nested sets example
  1                     1(1)22
            _______________|___________________
           |               |                   |
  2      2(2)5           6(4)11             12(7)21
           |               ^                    ^
  3      3(3)4       7(5)8   9(6)10    13(8)16   17(10)20
                                           |          |
  4                                    14(9)15    18(11)19

level           Move 8 after 5
  1                     1(1)22
            _______________|__________________
           |               |                  |
  2      2(2)5           6(4)15            16(7)21
           |               ^                  |
  3      3(3)4    7(5)8  9(8)12  13(6)14   17(10)20
                           |                  |
  4                    10(9)11            18(11)19
Move to top level
node = session.query(Tree).filter(Tree.id == 15).one()
node.move_after("1")
level           tree_id = 1
  1                     1(1)22
            _______________|___________________
           |               |                   |
  2      2(2)5           6(4)11             12(7)21
           |               ^                    ^
  3      3(3)4       7(5)8   9(6)10    13(8)16   17(10)20
                                           |          |
  4                                    14(9)15    18(11)19

level           tree_id = 2
  1                     1(15)6
                           ^
  2                 2(16)3   4(17)5

level           tree_id = 3
  1                     1(12)16
            _______________|_____
           |                     |
  2      2(13)5               6(18)15
           |                     ^
  3      3(14)4          7(19)10    11(21)14
                            |           |
  4                      8(20)9     12(22)13
To report bugs, use the issue tracker.
We welcome any contribution: suggestions, ideas, commits with new features, bug fixes, refactoring, docs, tests, translations, etc.
If you have a question, contact me at sacrud@uralbash.ru or on the #sacrud IRC channel.
Author: uralbash
Source Code: https://github.com/uralbash/sqlalchemy_mptt
License: MIT license
ps_alchemy is an extension for pyramid_sacrud that provides support for SQLAlchemy models.
Look how easy it is to use:
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

Base = declarative_base()
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

class PageTree(Base):
    __tablename__ = 'pages'
    id = Column(Integer, primary_key=True)

config.include('ps_alchemy')
config.include('pyramid_sacrud', route_prefix='admin')
config.registry.settings['pyramid_sacrud.models'] = ('webpages', [PageTree, ])
For more docs see http://ps-alchemy.rtfd.org
To report bugs, use the issue tracker
We welcome any contribution: suggestions, ideas, commits with new features, bug fixes, refactoring, docs, tests, translations, etc.
If you have a question, contact me at sacrud@uralbash.ru or on the #sacrud IRC channel.
Author: sacrud
Source Code: https://github.com/sacrud/ps_alchemy
License: MIT license
The Python SQL Toolkit and Object Relational Mapper
SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. SQLAlchemy provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.
Major SQLAlchemy features and the project's philosophy are described in the project documentation.
Latest documentation is at:
https://www.sqlalchemy.org/docs/
Full documentation for installation is at Installation.
Please refer to the SQLAlchemy Community Guide.
Above all, SQLAlchemy places great emphasis on polite, thoughtful, and constructive communication between users and developers. Please see our current Code of Conduct at Code of Conduct.
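For orientation, a minimal ORM round-trip with the 1.4-style API looks like this (a generic sketch, not taken from the project README):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="alice"))
    session.commit()
    print(session.query(User).filter_by(name="alice").one().id)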
Download Details:
Author: sqlalchemy
Source Code: https://github.com/sqlalchemy/sqlalchemy
License: MIT
#sql #python #sqlalchemy #database
Learn how to create a CRUD application as a RESTful API using Flask and SQLAlchemy, making a Bookshop web application as a demonstration in Python.
A CRUD app is a web app that allows you to create, read, update and delete things. It is a prevalent task in web development and very useful for learning how to build web apps.
In this tutorial, you will learn how to build a CRUD application in Flask, and the result will be a working backend for a bookshop web application. We will define services to handle the CRUD operations: GET, POST, PUT, and DELETE requests for the RESTful bookshop API.
Building a bookshop app is helpful for learning because it's a real example, not a toy project. The code will be based on Flask and Flask-SQLAlchemy extension.
Flask is a microframework for building web apps using Python. It is a very lightweight framework that is easy to learn and use. Being lightweight does not mean that Flask is not powerful. You can use Flask's extensions whenever you want to use something like the ORM (Object Relational Mapping) in your application. In this tutorial, I've used the Flask-SQLAlchemy extension to create a database and a table for storing books.
SQLAlchemy is a Python ORM (Object Relational Mapping) library that makes it easy to work with databases.
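As a taste of what the tutorial builds, here is a minimal sketch of a model plus two of the CRUD routes (the Book model and the /books URL are illustrative, not the tutorial's exact code):

from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///bookshop.db"
db = SQLAlchemy(app)

class Book(db.Model):  # illustrative schema
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(120), nullable=False)

@app.route("/books", methods=["GET"])
def list_books():
    # Read: return every book as JSON
    return jsonify([{"id": b.id, "title": b.title} for b in Book.query.all()])

@app.route("/books", methods=["POST"])
def create_book():
    # Create: insert a new book from the request body
    book = Book(title=request.json["title"])
    db.session.add(book)
    db.session.commit()
    return jsonify({"id": book.id, "title": book.title}), 201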
#python #flask #sqlalchemy #crud
FastAPI-SQLAlchemy
FastAPI-SQLAlchemy provides a simple integration between FastAPI and SQLAlchemy in your application. It gives access to useful helpers to facilitate the completion of common tasks.
Install and update using pip:
$ pip install fastapi-sqlalchemy
from fastapi import FastAPI
from fastapi_sqlalchemy import DBSessionMiddleware  # middleware helper
from fastapi_sqlalchemy import db  # an object to provide global access to a database session

from app.models import User

app = FastAPI()

app.add_middleware(DBSessionMiddleware, db_url="sqlite://")


# once the middleware is applied, any route can then access the database session
# from the global ``db``
@app.get("/users")
def get_users():
    users = db.session.query(User).all()
    return users
Note that the session object provided by db.session is based on the Python 3.7+ ContextVar. This means that each session is linked to the individual request context in which it was created.
Sometimes it is useful to be able to access the database outside the context of a request, such as in scheduled tasks which run in the background:
import pytz
from apscheduler.schedulers.asyncio import AsyncIOScheduler  # other schedulers are available
from fastapi import FastAPI
from fastapi_sqlalchemy import DBSessionMiddleware, db

from app.models import User, UserCount

app = FastAPI()

app.add_middleware(DBSessionMiddleware, db_url="sqlite://")


@app.on_event('startup')
async def startup_event():
    scheduler = AsyncIOScheduler(timezone=pytz.utc)
    scheduler.start()
    scheduler.add_job(count_users_task, "cron", hour=0)  # runs every night at midnight


def count_users_task():
    """Count the number of users in the database and save it into the user_counts table."""
    # we are outside of a request context, therefore we cannot rely on ``DBSessionMiddleware``
    # to create a database session for us. Instead, we can use the same ``db`` object and
    # use it as a context manager, like so:
    with db():
        user_count = db.session.query(User).count()
        db.session.add(UserCount(user_count))
        db.session.commit()
    # no longer able to access a database session once the db() context manager has ended
Author: mfreeborn
Source Code: https://github.com/mfreeborn/fastapi-sqlalchemy
License: MIT License
A cookiecutter template for bootstrapping a FastAPI and React project using a modern stack.
It is often laborious to start a new project. 90% of the time you have to decide how to handle authentication, reverse proxies, docker containers, testing, server-side validation, linting, etc. before you can even get started.
FastAPI-React serves to streamline and give you that functionality out of the box.
It is meant as a lightweight/React alternative to FastAPI's official fullstack project. If you want a more comprehensive project in Vue, I would suggest you start there. A lot of the backend code is taken from that project or the FastAPI official docs.
First, install cookiecutter if you don't already have it:
pip3 install cookiecutter
Second, install docker-compose if you don't already have it:
docker-compose installation official docs.
Then, in the directory you want your project to live:
cookiecutter gh:Buuntu/fastapi-react
You will need to put in a few variables and it will create a project directory (called whatever you set for project_slug).
Input Variables
Change into your project directory and run:
chmod +x scripts/build.sh
./scripts/build.sh
This will build and run the docker containers, run the alembic migrations, and load the initial data (a test user).
It may take a while to build the first time it's run since it needs to fetch all the docker images.
Once you've built the images once, you can simply use regular docker-compose commands to manage your development environment, for example to start your containers:
docker-compose up -d
Once this finishes you can navigate to the port set during setup (default is localhost:8000), and you should see the slightly modified create-react-app page.
Note: If you see an Nginx error at first with a 502: Bad Gateway page, you may have to wait for webpack to build the development server (the nginx container builds much more quickly).
Login screen:
The backend docs will be at http://localhost:8000/api/docs.
This project uses react-admin for a highly configurable admin dashboard.
After starting the project, navigate to http://localhost:8000/admin. You should see a login screen. Use the username/password you set for the superuser on project setup.
NOTE: regular users will not be able to access the admin dashboard
You should now see a list of users which you can edit, add, and delete. The table is configured with the REST endpoints to the FastAPI /users routes in the backend.
The admin dashboard is kept in the frontend/src/admin directory to keep it separate from the regular frontend.
To generate a secure key used for encrypting/decrypting the JSON Web Tokens, you can run this command:
openssl rand -hex 32
The default is fine for development but you will want something more secure for production.
You can either set this on project setup as secret_key or manually edit the Python SECRET_KEY variable in backend/app/core/security.py.
This project comes with Pytest and a few Pytest fixtures for easier mocking. The fixtures are all located in backend/conftest.py within your project directory.
All tests are configured to run on a test database using SQLAlchemy transactions to reset the testing state on each function. This is to avoid a database call affecting the state of a different test.
These fixtures are included in backend/conftest.py and are automatically imported into any test files that begin with test_.
The test_db fixture is an empty test database and an instance of a SQLAlchemy Session class.
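For orientation, a transaction-per-test fixture of this kind is commonly written along the following lines (an illustrative sketch, not the template's actual conftest.py):

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")  # assumed test database URL

@pytest.fixture
def test_db():
    # run each test inside a transaction that is rolled back afterwards,
    # so one test cannot affect the state seen by another
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    yield session
    session.close()
    transaction.rollback()
    connection.close()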
def test_user(test_db):
    assert test_db.query(models.User).all()

def test_user_exists(test_user):
    assert test_user.email == "admin@example.com"

def test_superuser(client, test_superuser):
    assert test_superuser.is_superuser
To use an unauthenticated test client, use client:
def test_get_users(client):
    response = client.get("/api/v1/users")
    assert response.status_code == 200
If you need an authenticated client using OAuth2 and JWTs:
def test_user_me(client, user_token_headers):
    response = client.get(
        "/api/v1/users/me",
        headers=user_token_headers,
    )
    assert response.status_code == 200
Since OAuth2 expects the access token in the headers, you will need to pass in user_token_headers as the headers argument in any client request that requires authentication.
def test_user_me(client, superuser_token_headers):
    response = client.get(
        "/api/v1/users",
        headers=superuser_token_headers,
    )
    assert response.status_code == 200
This template comes with Celery and Redis Docker containers pre-configured for you. For any long running processes, it's recommended that you handle these using a task queue like Celery to avoid making the client wait for a request to finish. Some examples of this might be sending emails, uploading large files, or any long running, resource intensive tasks.
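For illustration, a job handed off to Celery looks roughly like this (a sketch only; the broker URL and the task itself are assumptions, not the template's code):

from celery import Celery

celery_app = Celery("worker", broker="redis://redis:6379/0")  # assumed broker URL

@celery_app.task
def send_welcome_email(user_id: int) -> None:
    # long-running work happens here instead of blocking the HTTP request
    ...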
There is an example task in backend/app/tasks.py and an example Celery test in backend/app/tests/test_tasks.py. This test runs synchronously, which is what the Celery docs recommend.
If you are not happy with Celery or Redis, it should be easy to swap these containers out with your favorite tools. Some suggested alternatives might be Huey as the task queue and RabbitMQ for the message broker.
You can monitor tasks using Flower by going to http://localhost:5555
There are a few helper methods to handle authentication in frontend/src/utils. These store and access the JWT returned by FastAPI in local storage. Even though this doesn't add any security, we prevent loading routes that might be protected on the frontend, which results in a better user experience.
// in src/utils/auth.ts
/**
 * Handles authentication with the backend and stores the JWT in local storage
**/
const login = (email: string, password: string) => boolean;
// in src/utils/auth.ts
// clears token from local storage
const logout = (email: string, password: string) => void;
// Checks authenticated state from JWT tokens
const isAuthenticated = () => boolean;
Some basic routes are included (and handled in frontend/Routes.tsx).
/login - Login screen
/logout - Logout
/ - Home
/protected - Example of a protected route
The PrivateRoute component handles routes that require authentication. It will automatically check whether the correct token with the "user" permissions is present, or redirect to the home page.
// in src/Routes.tsx
import { Switch } from 'react-router-dom';

// Replace this with your component
import { ProtectedComponent } from 'components';

const Routes = () => (
  <Switch>
    <PrivateRoute path="/protected_route" component={ProtectedComponent} />
  </Switch>
);
This stack can be adjusted and used with several deployment options that are compatible with Docker Compose, but it may be easiest to use Docker in Swarm Mode with an Nginx main load balancer proxy handling automatic HTTPS certificates, using the ideas from DockerSwarm.rocks.
Please refer to DockerSwarm.rocks to see how to deploy such a cluster easily. You will have to change the Traefik examples to Nginx or update your docker-compose file.
Author: Buuntu
Source Code: https://github.com/Buuntu/fastapi-react
License: MIT License
SQLAdmin is a flexible Admin interface for SQLAlchemy models.
The main features are listed in the project README. Install SQLAdmin with pip:
$ pip install sqladmin
Let's define an example SQLAlchemy model:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
engine = create_engine(
    "sqlite:///example.db",
    connect_args={"check_same_thread": False},
)


class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(String)


Base.metadata.create_all(engine)  # Create tables
If you want to use SQLAdmin with FastAPI:
from fastapi import FastAPI
from sqladmin import Admin, ModelAdmin

app = FastAPI()
admin = Admin(app, engine)


class UserAdmin(ModelAdmin, model=User):
    column_list = [User.id, User.name]


admin.register_model(UserAdmin)
Or if you want to use SQLAdmin with Starlette:
from sqladmin import Admin, ModelAdmin
from starlette.applications import Starlette

app = Starlette()
admin = Admin(app, engine)


class UserAdmin(ModelAdmin, model=User):
    column_list = [User.id, User.name]


admin.register_model(UserAdmin)
Now, visiting /admin in your browser, you can see the SQLAdmin interface.
Download Details:
Author: aminalaee
Source Code: https://github.com/aminalaee/sqladmin
License: BSD-3-Clause License
#python #sqlalchemy #fastapi
SAFRS exposes SQLAlchemy database models as a JSON:API webservice and generates the corresponding swagger/OpenAPI.
Documentation can be found in the wiki.
A LIVE DEMO is available, where much of the basic functionality is implemented using a simple example.
SAFRS can be installed as a pip package or by downloading the latest version from github, for example:
git clone https://github.com/thomaxxl/safrs
cd safrs
pip install .
Once the dependencies are installed, the examples can be started, for example
python examples/demo_relationship.py "your-interface-ip"
Exposed resource objects can be queried using the JSON:API format. The API supports the standard JSON:API HTTP operations (GET, POST, PATCH, DELETE).
Please check the JSON:API spec for more implementation details. You can also try out the interface in the live demo.
Database objects are implemented as subclasses of the SAFRSBase and SQLAlchemy model classes. The SQLAlchemy columns are serialized to JSON when the corresponding REST API is invoked.
The following example app illustrates how the API is built and documented:
class User(SAFRSBase, db.Model):
    """
    description: User description
    """
    __tablename__ = "Users"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)
    email = db.Column(db.String)
The User class is implemented as a subclass of both SAFRSBase and the SQLAlchemy db.Model. This User object is then exposed through the web interface using the Api object:
api.expose_object(User)
The User object REST methods are available on /User, the swagger schema is available on /api/swagger.json and the UI is available on /api/.
Database objects such as the User class from the demo.py example can be extended to include relationships with other objects. demo_relationship.py contains the following extension of the User class, where a relationship with the Book class is implemented:
class User(SAFRSBase, db.Model):
    '''
    description: User description
    '''
    __tablename__ = 'Users'
    id = db.Column(db.String, primary_key=True)
    name = db.Column(db.String, default='')
    email = db.Column(db.String, default='')
    books = db.relationship('Book', back_populates="user")
    ...
A many-to-one database association is declared by the back_populates relationship argument. The Book class is simply another subclass of SAFRSBase and db.Model, similar to the previous User class:
class Book(SAFRSBase, db.Model):
    '''
    description: Book description
    '''
    __tablename__ = 'Books'
    id = db.Column(db.String, primary_key=True)
    name = db.Column(db.String, default='')
    user_id = db.Column(db.String, db.ForeignKey('Users.id'))
    user = db.relationship('User', back_populates='books')
The User.books relationship can then be queried in the API through its own relationship endpoints.
The relationship API endpoints work similarly for one-to-many relationships.
Relationship members can also be included in the response when querying an instance, by specifying the relationship names as a comma separated list in the include query argument.
For example, to retrieve all items in the books_read relationship from the People endpoint, you may add the include=books_read url parameter:
http://thomaxxl.pythonanywhere.com/api/People/?include=books_read
To retrieve nested relationship items, you can specify the nested relationship name after a '.'; for instance, to retrieve the authors of the books_read instances you can use:
http://thomaxxl.pythonanywhere.com/api/People/?include=books_read.author
Safrs allows the user to implement custom methods on the exposed objects. These methods can be invoked through the JSON API by sending an HTTP POST request to the method endpoint. The following example implements a "send_mail" method:
class User(SAFRSBase, db.Model):
    '''
    description: User description
    '''
    __tablename__ = 'Users'
    id = Column(String, primary_key=True)
    name = Column(String, default='')
    email = Column(String, default='')

    # The following method is exposed through the REST API.
    # This means it can be invoked with an HTTP POST.
    @jsonapi_rpc(http_methods=['POST', 'GET'])
    def send_mail(self, email):
        '''
        description : Send an email
        args:
            email:
                type : string
                example : test email
        '''
        content = 'Mail to {} : {}\n'.format(self.name, email)
        return {'result': 'sent {}'.format(content)}
This method shows up in the swagger interface.
The send_mail method is documented with the jsonapi_rpc decorator. This decorator generates a schema based on the function documentation. This documentation contains a yaml specification of the API which is used by the swagger UI.
api_methods.py contains a couple of methods that can be used in your models.
The yaml specification has to be in the first part of the function and class comments. These parts are delimited by four dashes ("----"). The rest of the comment may contain additional documentation.
Two class-level methods have been defined to facilitate object retrieval:
lookup: retrieve a list of objects that match the argument list. For example, the following HTTP POST request to a container will retrieve a list of items where the name is "thomas":
{
    "method": "lookup",
    "args": {
        "name": "thomas"
    }
}
get_list: retrieve a list of the items with the specified IDs
The API can be initialized like this:
api = SAFRSAPI(app, host=HOST, port=PORT, prefix=API_PREFIX)
Then you can expose objects with expose_object:
api.expose_object(User)
An example that uses the flask app factory pattern is implemented in examples/mini_app.py, as can be seen in the swagger UI.
Some configuration parameters, for example disabling the jsonapi_rpc functionality, can be set in config.py.
Safrs allows you to expose existing databases as JSON:API services with the expose_existing.py script, for example:
python3 expose_existing.py mysql+pymysql://root:password@localhost/sakila --host localhost
This script uses sqlacodegen to generate a source file containing the SQLAlchemy and SAFRSBase database models and starts the API webservice.
More details here. This approach is used by the ApiLogicServer project.
The examples folder contains more example scripts.
A docker image can be found here: https://github.com/thomaxxl/safrs-example
The swagger shows the jsonapi filters that can be used in the url query arguments. Items with an exact match of the specified attribute value can be fetched by specifying the corresponding key-value query parameter. For example, suppose the User class, exposed at /Users, has a name attribute; to retrieve all instances with the name "John", you can use a GET request to /Users?filter[name]=John.
It is also possible to use more generic filters by specifying a JSON string, for example filter=[{"name":"timestamp","op":"gt","val":"2020-08-01"},{"name":"timestamp","op":"lt","val":"2020-08-02"}].
More info can be found in the wiki.
Serialization and deserialization are implemented by the SAFRSBase to_dict and __init__ methods: you can extend these methods as usual. For example, if you would like to add some attributes to the json payload of the User object, you can override the to_dict method:
class User(SAFRSBase, db.Model):
    '''
    description: User description
    '''
    __tablename__ = 'Users'
    id = db.Column(db.String, primary_key=True)
    name = db.Column(db.String, default='')
    email = db.Column(db.String, default='')
    books = db.relationship('Book', back_populates="user")

    def to_dict(self):
        result = SAFRSBase.to_dict(self)
        result['custom_field'] = 'custom'
        return result
This will add the custom_field attribute to the result attributes:
"attributes": {
"custom_field": "custom",
"email": "reader_email0",
"name": "Reader 0"
}
It is possible to specify attributes and relationships that should not be serialized by specifying the respective exclude_attrs and exclude_rels class attributes in your SAFRSBase instances. Examples can be found here and here.
It is possible to limit the HTTP methods that are allowed by overriding the http_methods class attribute. An example can be found here.
The decorators class attribute list can be used to add custom decorators to the HTTP endpoints. An example of this functionality is implemented in the authentication examples.
Some additional API RPC methods are implemented in api_methods.py, e.g. mysql regex search.
The swagger schema can be merged with a modified schema dictionary by supplying the to-be-merged dictionary as the custom_swagger argument to SAFRSAPI, e.g.
custom_swagger = {"info": {"title": "New Title"}}  # Customized swagger title will be merged
api = SAFRSAPI(app, host=swagger_host, port=PORT, prefix=OAS_PREFIX, api_spec_url=OAS_PREFIX + '/swagger',
               custom_swagger=custom_swagger, schemes=['http', 'https'], description=description)
You can implement a serializable class without a model but this requires some extra work because safrs needs to know which attributes and relationships to serialize. An example is implemented here
The documentation is being moved to the wiki
About
SAFRS is an acronym for SqlAlchemy Flask-Restful Swagger. The purpose of this framework is to help python developers create a self-documenting JSON API for sqlalchemy database objects and relationships. These objects can be serialized to JSON and can be created, retrieved, updated and deleted through the JSON API. Optionally, custom resource object methods can be exposed and invoked using JSON. Class and method descriptions and examples can be provided in yaml syntax in the code comments. The description is parsed and shown in the swagger web interface.
The result is an easy-to-use swagger/OpenAPI and JSON:API compliant API implementation.
Limitations & Todos
This code was developed for a specific use-case and may not be flexible enough for everyone's needs. A lot of the functionality is available but not documented for the sake of brevity. Performance is reasonable for regular databases, but once you start exposing really big tables you may run into problems: for example, count() for MySQL InnoDB is slow on large (1M rows) tables; a workaround can be implemented by querying the sys tables or using werkzeug caching. Feel free to open an issue or drop me an email at thomas.pollet+no+spam+@gmail.com if you run into problems or something isn't clear!
Thanks
I developed this code when I worked at Excellium Services (https://www.excellium-services.com/). They allowed me to open source it when I stopped working there.
Author: thomaxxl
Source Code: https://github.com/thomaxxl/safrs
License: GPL-3.0 License
Flask-Migrate is an extension that handles SQLAlchemy database migrations for Flask applications using Alembic. The database operations are provided as command-line arguments under the flask db command.
Install Flask-Migrate with pip:
pip install Flask-Migrate
This is an example application that handles database migrations through Flask-Migrate:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'

db = SQLAlchemy(app)
migrate = Migrate(app, db)


class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))
With the above application you can create the database or enable migrations if the database already exists with the following command:
$ flask db init
Note that the FLASK_APP environment variable must be set according to the Flask documentation for this command to work. This will add a migrations folder to your application. The contents of this folder need to be added to version control along with your other source files.
You can then generate an initial migration:
$ flask db migrate
The migration script needs to be reviewed and edited, as Alembic currently does not detect every change you make to your models. In particular, Alembic is currently unable to detect indexes. Once finalized, the migration script also needs to be added to version control.
Then you can apply the migration to the database:
$ flask db upgrade
Then each time the database models change, repeat the migrate and upgrade commands.
To sync the database in another system just refresh the migrations folder from source control and run the upgrade command.
To see all the commands that are available run this command:
$ flask db --help
Author: miguelgrinberg
Source Code: https://github.com/miguelgrinberg/Flask-Migrate
License: MIT License
Flask-SQLAlchemy
Flask-SQLAlchemy is an extension for Flask that adds support for SQLAlchemy to your application. It aims to simplify using SQLAlchemy with Flask by providing useful defaults and extra helpers that make it easier to accomplish common tasks.
Install and update using pip:
$ pip install -U Flask-SQLAlchemy
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///example.sqlite"
db = SQLAlchemy(app)


class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String, unique=True, nullable=False)
    email = db.Column(db.String, unique=True, nullable=False)


db.session.add(User(username="Flask", email="example@example.com"))
db.session.commit()

users = User.query.all()
For guidance on setting up a development environment and how to make a contribution to Flask-SQLAlchemy, see the contributing guidelines.
The Pallets organization develops and supports Flask-SQLAlchemy and other popular packages. In order to grow the community of contributors and users, and allow the maintainers to devote more time to the projects, please donate today.
Author: pallets
Source Code: https://github.com/pallets/flask-sqlalchemy
License: BSD-3-Clause License