Following on from our recent blog post about SVG Sprites which gave an introduction and overview to using SVGs in a sprite, this post will outline the processes and tools we use for creating and using an SVG Sprite at Liquid Light.
Creating and maintaining large SVG sprites can be cumbersome and time-consuming, so we decided to automate the process. Rather than managing a single large SVG sprite and tracking the coordinates of each icon individually, we wanted to be able to edit each icon and have the sprite creation and coordinate generation automated.
In practice this means that we can put all our SVG icons into a folder and the SVG sprite (and PNG fallback for IE8) is created and optimised automatically, along with a Sass map of names and coordinates. By using Sass mixins we are then able to include our sprites with a very simple bit of code:
button {
    &:before {
        @include sprite(search);
        content: '';
    }
}
**All the code can be found in a repo over on GitHub**
To integrate SVG sprites into our workflow, we decided we wanted a task runner to create the sprite - this meant that we could create and update the individual icons without editing the whole image. Gulp is our task runner of choice, running (amongst other things) gulp-svg-sprite. We also wanted the CSS to be created automatically - with the dimensions and background positions calculated at build time. This gives us the advantage of being able to alter an icon's dimensions and have the CSS update to reflect this.
By default, the gulp-svg-sprite plugin generates its own CSS, but TYPO3 has its own classes, so we needed a way to create the dimensions and positions as variables and use them on existing selectors. For this, we decided to turn to Sass.
Using Sass, the icons are stored in an array - or “map” (find out more about Sass maps). Using some custom mixins, we are able to call on any icon in the sprite and, upon compilation, output the dimensions and background position of each icon.
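As a rough illustration (not the exact code from our repo - the map name, values and mixin body below are simplified assumptions, and the real version also handles the PNG fallback), a generated map and mixin could look something like this:
// icon name: (x, y, width, height) - illustrative values only
$sprites: (
    search: (0, 0, 32px, 32px),
    close: (-42px, 0, 24px, 24px)
);

// Look an icon up in the map and output its background position and dimensions
@mixin sprite($name) {
    $icon: map-get($sprites, $name);
    background: url('../images/sprite.svg') no-repeat;
    background-position: nth($icon, 1) nth($icon, 2);
    width: nth($icon, 3);
    height: nth($icon, 4);
}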
This blog post is not an introduction to Gulp or Sass (there are plenty of awesome ones around the web for that, e.g. ones by Mark Goodyear, Sitepoint and Codefellows), but rather a post detailing the specific workflow we have for creating and using SVG sprites. It will run you through the gulp plugins, the gulp tasks we have set up and the specific mixins we use.
To run the gulp tasks we first need to install some packages from npm. Run the command below to install the required packages (and gulp itself) and save them to your package.json.
$ npm install gulp gulp-size gulp-svg-sprite gulp-svg2png gulp-util --save-dev
Note: If you don't already have a package.json, run npm init to create one.
A quick rundown of why each of the plugins is there:
gulp-size - outputs the size of various files for the user
gulp-svg-sprite - the heavy lifter, creating the SVG sprite and CSS
gulp-svg2png - converts SVGs to PNG - we'll be using this to make our PNG fallback
gulp-util - used for outputting coloured messages to the screen
Once installed, ensure you include them at the top of the gulpfile.js.
var gulp = require('gulp');
var $ = {
    gutil: require('gulp-util'),
    svgSprite: require('gulp-svg-sprite'),
    svg2png: require('gulp-svg2png'),
    size: require('gulp-size'),
};
We declare them in a $ object to group them.
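The gulp task itself comes next in the build; as a hedged sketch of how a sprite task wired to these plugins might look (paths, task name and config values here are assumptions, not the exact setup from our repo):
// Hypothetical 'sprite' task: builds the SVG sprite plus a Sass file of names
// and coordinates from every icon in src/icons, then reports the output sizes.
gulp.task('sprite', function () {
    return gulp.src('src/icons/*.svg')
        .pipe($.svgSprite({
            mode: {
                css: {
                    render: { scss: true } // emit a .scss file alongside the sprite
                }
            }
        }))
        .pipe($.size({ showFiles: true }))
        .pipe(gulp.dest('dist/sprite'));
});
The PNG fallback would be a separate step piping the generated sprite through $.svg2png, omitted here for brevity.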
#sass #gulp #css #javascript
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline plugin.
plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline
Note: Order is important - serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| --- | --- | --- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing an invoke url for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local config |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql \| postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out by passing an empty array, or by setting the option to false.
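For example, a minimal sketch of opting out (using the empty-array form):
custom:
  appsync-simulator:
    watch: [] # nothing to watch - hot-reloading disabled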
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.
Resource CloudFormation functions resolution
This plugin supports some resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to its documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
        ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
        ...

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.
refMap takes a mapping of resource name to value pairs
getAttMap takes a mapping of resource name to attribute/values pairs
importValueMap takes a mapping of import name to values pairs
Example:
custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'https://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint
In some special cases you will need to use key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks - refMap, getAttMap and importValueMap:
provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you need others to be resolved, feel free to open an issue and explain your use case.
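For illustration, here is a hedged sketch of a per-function environment variable that the plugin would resolve (the function name and handler path are made up for this example; MyDbTable is the table from the earlier snippet):
functions:
  getUser:
    handler: handlers/getUser.handler
    environment:
      TABLE_NAME:
        Ref: MyDbTable # resolved by the plugin, just like provider.environment above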
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. Default is "get", but you probably want "post".
custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post
      addPost:
        url: https://jsonplaceholder.typicode.com/posts
        method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
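## Request mapping template for an INSERT: converts camelCase keys from $ctx.args.input into snake_case columns, builds the INSERT statement, then selects the newly created row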
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
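## Request mapping template for an UPDATE: converts camelCase input keys into snake_case column assignments, updates the row, then selects it back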
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
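## Request mapping template for a soft delete: sets deleted_at on the row, then selects it back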
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
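## Response mapping template: converts timestamptz columns to ISO8601 and maps snake_case column names back to camelCase before returning the record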
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed.
null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
variableMap: {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
Hire top Indian front-end developers for mobile-first, pixel-perfect, SEO-friendly and highly optimized front-end development. We are a company with 16+ years of experience offering front-end development services including HTML / CSS development, theme development & headless front-end development utilising JS technologies such as Angular, React & Vue.
All our front-end developers are in-house staff. We don't farm work out to freelancers or outsource to sub-contractors. We also have a stringent hiring process to recruit top Indian front-end coders.
For more info visit: https://www.valuecoders.com/hire-developers/hire-front-end-developers
#front end developer #hire frontend developer #front end development company #front end app development #hire front-end programmers #front end application development
As someone from a non-tech background, you might not understand the complexities of front-end development. What we see on our mobile screens or PCs is a mere fragment of intricately woven code. But if you are looking forward to developing an application, you will have to dive in and understand the scope of front-end development, given the advent of new technologies, tools, and frameworks.
In this blog, we will help you understand the best practices of front-end development and the burgeoning trends that will help you ensure the quality development of your digital products. Learn about the future of web development here.
GUI Development Best Practices: UX And UI
Before you start the development work, it is essential to discuss the user experience and user interface of your product. The front end of any software is the only part that interacts with your users, so it is important to make a great impression on them. It is not just about smoothness; it is also about navigation: you have to make things as simple as possible for your users to interact with your product.
User Experience Vs. User Interface
Most people take user experience and user interface to be one and the same thing, but they could not be more wrong. User experience and user interface work together, yet they are different components of your product's front end. Here are a few things they share and a few that differentiate them.
User Experience
Starting with UX, it is a term coined by Don Norman, and when he did that, he did not contextualize it to any kind of software product. It was used for multiple disciplines, including marketing, graphical & industrial design, interface, and engineering.
In software development, it focuses on building user-centric processes that optimize user interaction with the product. The best practices for delivering a great user experience include researching customer behavior, understanding the context in which the audience takes action, and creating a systematic vision for how the target audience reaches its goal. Use your newfound knowledge to develop the actual graphic design; it needs to be analytical and action-provoking. A good UX designer will always understand the way a user interacts with your product.
User Interface
User experience helps you define the user interface design. The user interface includes the components that make up the entire experience of the product: toggles, backgrounds, fonts, animations, and other graphical elements.
If the user experience is about how the user interacts with your products, the user interface is about giving them the channels to interact with your product. So, the best practices for creating a rewarding user interface are: following brand style guidelines, intuitive design, support for various screen sizes, and effective implementation.
Front-End Development Best Practices: Design To Development
Once you are done with the design part, it is time to dive into development. The process includes turning the graphical assets into a functioning product. There are various approaches that the software community uses, but the most rewarding one is object-driven design and development as it improves the user experience tenfold.
The object-driven approach allows you to design graphical assets that follow the same design and pattern. Also, it allows you to translate the components for faster delivery and a cohesive UX and UI experience across products and platforms.
The design to development process allows you to build interfaces that include layouts, colors, typography, spacing, and more. Front-end development teams are required to work according to the guidelines of the target platform, and they must focus on the UI and UX peculiarities of product development. It is likely that you may face some temporary technical challenges during development and implementation.
It is a trend to automate the front-end development of software with Zeplin or Avocode. These tools ensure access to the updated design and accurate specs, and automatically generate code snippets that allow faster delivery. Learn about the right process of web development here.
Here is a list of popular front-end development technologies
#front end web development #how to learn front end development #how to master front end development #how to practice front end development #is front end development easy
Install via pip:
$ pip install pytumblr
Install from source:
$ git clone https://github.com/tumblr/pytumblr.git
$ cd pytumblr
$ python setup.py install
A pytumblr.TumblrRestClient is the object you'll make all of your calls to the Tumblr API through. Creating one is this easy:
client = pytumblr.TumblrRestClient(
    '<consumer_key>',
    '<consumer_secret>',
    '<oauth_token>',
    '<oauth_secret>',
)
client.info() # Grabs the current user information
Two easy ways to get your credentials are:
* the interactive_console.py tool (if you already have a consumer key & secret)
client.info() # get information about the authenticating user
client.dashboard() # get the dashboard for the authenticating user
client.likes() # get the likes for the authenticating user
client.following() # get the blogs followed by the authenticating user
client.follow('codingjester.tumblr.com') # follow a blog
client.unfollow('codingjester.tumblr.com') # unfollow a blog
client.like(id, reblogkey) # like a post
client.unlike(id, reblogkey) # unlike a post
client.blog_info(blogName) # get information about a blog
client.posts(blogName, **params) # get posts for a blog
client.avatar(blogName) # get the avatar for a blog
client.blog_likes(blogName) # get the likes on a blog
client.followers(blogName) # get the followers of a blog
client.blog_following(blogName) # get the publicly exposed blogs that [blogName] follows
client.queue(blogName) # get the queue for a given blog
client.submission(blogName) # get the submissions for a given blog
Creating posts
PyTumblr lets you create all of the various post types that Tumblr supports. When using these types there are a few default options that can be used with any post type.
The default supported types are described below.
We'll show examples of these defaults throughout, while showcasing all the specific post types.
Creating a photo post
Creating a photo post supports a bunch of different options plus the described default options:
* caption - a string, the user supplied caption
* link - a string, the "click-through" url for the photo
* source - a string, the url for the photo you want to use (use this or the data parameter)
* data - a list or string, a list of filepaths or a single file path for multipart file upload
#Creates a photo post using a source URL
client.create_photo(blogName, state="published", tags=["testing", "ok"],
source="https://68.media.tumblr.com/b965fbb2e501610a29d80ffb6fb3e1ad/tumblr_n55vdeTse11rn1906o1_500.jpg")
#Creates a photo post using a local filepath
client.create_photo(blogName, state="queue", tags=["testing", "ok"],
tweet="Woah this is an incredible sweet post [URL]",
data="/Users/johnb/path/to/my/image.jpg")
#Creates a photoset post using several local filepaths
client.create_photo(blogName, state="draft", tags=["jb is cool"], format="markdown",
data=["/Users/johnb/path/to/my/image.jpg", "/Users/johnb/Pictures/kittens.jpg"],
caption="## Mega sweet kittens")
Creating a text post
Creating a text post supports the same options as default and just two other parameters:
* title - a string, the optional title for the post. Supports markdown or html
* body - a string, the body of the post. Supports markdown or html
#Creating a text post
client.create_text(blogName, state="published", slug="testing-text-posts", title="Testing", body="testing1 2 3 4")
Creating a quote post
Creating a quote post supports the same options as default and two other parameters:
* quote - a string, the full text of the quote. Supports markdown or html
* source - a string, the cited source. HTML supported
#Creating a quote post
client.create_quote(blogName, state="queue", quote="I am the Walrus", source="Ringo")
Creating a link post
#Create a link post
client.create_link(blogName, title="I like to search things, you should too.", url="https://duckduckgo.com",
description="Search is pretty cool when a duck does it.")
Creating a chat post
Creating a chat post supports the same options as default and two other parameters:
* title - a string, the title of the chat post
* conversation - a string, the text of the conversation/chat, with dialog labels (no html)
#Create a chat post
chat = """John: Testing can be fun!
Renee: Testing is tedious and so are you.
John: Aw.
"""
client.create_chat(blogName, title="Renee just doesn't understand.", conversation=chat, tags=["renee", "testing"])
Creating an audio post
Creating an audio post allows for all default options and has three other parameters. The only thing to keep in mind while dealing with audio posts is to make sure that you use either the external_url parameter or data - you cannot use both at the same time.
* caption - a string, the caption for your post
* external_url - a string, the url of the site that hosts the audio file
* data - a string, the filepath of the audio file you want to upload to Tumblr
#Creating an audio file
client.create_audio(blogName, caption="Rock out.", data="/Users/johnb/Music/my/new/sweet/album.mp3")
#lets use soundcloud!
client.create_audio(blogName, caption="Mega rock out.", external_url="https://soundcloud.com/skrillex/sets/recess")
Creating a video post
Creating a video post allows for all default options and has three other options. Like the other post types, it has some restrictions: you cannot use the embed and data parameters at the same time.
* caption - a string, the caption for your post
* embed - a string, the HTML embed code for the video
* data - a string, the path of the file you want to upload
#Creating an upload from YouTube
client.create_video(blogName, caption="Jon Snow. Mega ridiculous sword.",
embed="http://www.youtube.com/watch?v=40pUYLacrj4")
#Creating a video post from local file
client.create_video(blogName, caption="testing", data="/Users/johnb/testing/ok/blah.mov")
Editing a post
Updating a post requires knowing what type of post you're updating. You can supply any of the options given above for that post type when updating.
client.edit_post(blogName, id=post_id, type="text", title="Updated")
client.edit_post(blogName, id=post_id, type="photo", data="/Users/johnb/mega/awesome.jpg")
Reblogging a Post
Reblogging a post just requires knowing the post id and the reblog key, which is supplied in the JSON of any post object.
client.reblog(blogName, id=125356, reblog_key="reblog_key")
Deleting a post
Deleting just requires that you own the post and have the post id
client.delete_post(blogName, 123456) # Deletes your post :(
A note on tags: when passing tags as params, please pass them as a list (not a comma-separated string):
client.create_text(blogName, tags=['hello', 'world'], ...)
Getting notes for a post
In order to get the notes for a post, you need to have the post id and the blog that it is on.
data = client.notes(blogName, id='123456')
The results include a timestamp you can use to make future calls.
data = client.notes(blogName, id='123456', before_timestamp=data["_links"]["next"]["query_params"]["before_timestamp"])
# get posts with a given tag
client.tagged(tag, **params)
This client comes with a nice interactive console to run you through the OAuth process, grab your tokens (and store them for future use).
You'll need pyyaml installed to run it, but then it's just:
$ python interactive_console.py
and away you go! Tokens are stored in ~/.tumblr and are also shared by other Tumblr API clients like the Ruby client.
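If you want to reuse those stored tokens from your own script, a rough sketch could look like the following (this assumes the file is YAML and uses the key names shown below - they are an assumption for illustration, so check your own ~/.tumblr for the exact keys):
import os
import yaml  # pyyaml, already needed for the interactive console
import pytumblr

# Load the credentials that the interactive console stored in ~/.tumblr.
with open(os.path.expanduser('~/.tumblr')) as f:
    creds = yaml.safe_load(f)

client = pytumblr.TumblrRestClient(
    creds['consumer_key'],  # hypothetical key names - verify against your file
    creds['consumer_secret'],
    creds['oauth_token'],
    creds['oauth_token_secret'],
)

print(client.info())  # sanity check: info about the authenticating user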
The tests (and coverage reports) are run with nose, like this:
python setup.py test
Author: tumblr
Source Code: https://github.com/tumblr/pytumblr
License: Apache-2.0 license