Originally published by Justin Albano at https://dzone.com
In the modern world of interconnected software, web applications have become an indispensable asset. Foremost among these web applications is the Representational State Transfer (REST) web service, with Java becoming one of the most popular implementation languages. Within the Java REST ecosystem, there are two popular contenders: Java Enterprise Edition (JavaEE) and Spring. While both have their strengths and weaknesses, this article will focus on Spring and create a simple order management RESTful web application using Spring 4. Although this management system will be simple compared to the large-scale RESTful services found today, it will nonetheless demonstrate the basic thought process, design decisions, and implementation tests required to create a Level 3 (hypermedia-driven) Spring REST web service.
By the end of this article, we will have created a fully functional Spring REST order management system. While the source code illustrated in this article covers the essential aspects of the order management system, there are other components and code (such as test cases) that support the main service that are not shown. All of the source code, including these supporting aspects, can be found in the following GitHub repository:
The source code snippets in this article are not in-and-of-themselves sufficient for creating a fully functioning REST web service. Instead, they serve as a snapshot or reflection of the source code contained in the above repository. Therefore, as we walk through each step in creating our REST service, the source code in the above repository should be visited consistently and used as the authoritative reference for all design and implementation choices. For the remainder of this article, when we refer to our order management system, we are actually referring to the Spring REST service contained in the above repository.
It is expected that the reader has at least a novice understanding of dependency injection (DI), particularly DI using the Spring framework. Although we will explore the DI framework configurations used and the DI components utilized in our order management system, it is assumed that the reader has at least a conceptual understanding of the need for and premise of DI. For more information on DI in Spring, see the Spring Framework Guide.
This article also assumes that the reader has a foundational understanding of REST and RESTful web services. While we will deep dive into the design and implementation intricacies of creating a REST web service in Spring, we will not focus on the conceptual aspects of REST (such as the use of an HTTP GET or POST call).
Our order management system was created using Test-Driven Development (TDD), where tests were created first and each design decision and implemented component was focused on passing the created test cases. This not only resulted in a simple set of classes, but also in a more clearly separated set of components. For example, persistence logic and domain logic are not intertwined. Apart from the process used to create the service, there are also numerous tools used to build, test, and deploy the system, including:
Although we are using a wide array of frameworks and tools, each has a very important task when building our web service. Before we jump into the implementation, though, we first need to devise a design for our order management system.
The first step to designing our web service is deciding on what we want the service to accomplish. Since we are required to process orders, the definition of an order (i.e. creating a domain model) is a good place to start. Each order that we process should have an identifier (ID) so that we can uniquely point to it, a description that describes the order in a human-readable form, a cost, and a completion status.
Note that cost is not a trivial issue to deal with in software: It is very easy to lose track of cents and make simple rounding errors. In order to avoid these subtle issues, we will simply store the cost of an order in cents. This allows us to remove the decimal place and perform simple arithmetic without worrying that we will lose a penny in the mix.
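As a minimal, standalone sketch (not taken from the order management repository) of why integer cents beat floating-point dollars for money arithmetic:

```java
public class CentsExample {
    public static void main(String[] args) {
        // Floating-point dollars accumulate rounding error:
        double dollars = 0.1 + 0.2;            // 0.30000000000000004, not 0.3
        // Integer cents stay exact:
        long costInCents = 10 + 20;            // exactly 30 cents

        System.out.println(dollars == 0.3);    // false
        System.out.println(costInCents == 30); // true
    }
}
```

This is the rationale behind the `costInCents` field in the domain model that follows.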
Using this simple definition of an order, we devise the following domain model:
With our order designed, we can move onto designing how we will expose our orders through our RESTful web service. Using the standard HTTP verbs, we can quickly come up with a set of REST endpoints that cover the usual Create, Read, Update, and Delete (CRUD) operations:
It is important to note that we should not simply enumerate the REST endpoints we intend to create, but also include the expected behavior if the endpoint successfully completes the request or if it fails to do so. For example, if a client requests to update a non-existent order, we should return a 404 (Not Found) status to inform the client that this resource does not exist. If we successfully update the resource, we should return a 200 (OK) status to inform the client that its request was successfully completed.
At this point, it is also useful to think about what the response bodies for the various REST endpoints will look like. Due to its simplicity, we will consume JavaScript Object Notation (JSON) objects in the request bodies we receive and produce JSON objects in the response bodies we send. For example, if we follow our domain model, the response body for getting an order with a specified ID would resemble:
{
  "id": 1,
  "description": "Some sample order",
  "costInCents": 250,
  "complete": false
}
We will see later in this article that other attributes, such as hypermedia links, will also be included. Hypermedia links aside, thinking about the expected request and response bodies for our REST endpoints allows us to devise test cases in advance that ensure we handle and produce the expected results when we implement our REST endpoints.
With our domain model and REST endpoints defined, we can move to the last piece of the puzzle: How to store our orders. For example, when we create a new order, we need some means of storing that order so that a client, at some future time, can retrieve the created order.
In a true REST web service, we would decide on the best database or persistence framework that supports our domain model and design a persistence layer interface to use to interact with this database. For example, we could select a Neo4j database if our data was well-suited for a graph domain model, or MongoDB if our domain model fits nicely into collections. In the case of our system, for simplicity, we will use an in-memory persistence layer. Although there are various useful in-memory databases, our model is simple enough to create the in-memory database ourselves. In doing so, we will see the basic functionality of a database attached to a REST service, as well as understand the simple interfaces that are common among repositories in RESTful services.
At this point in our design, we have three discrete sections of our system: (1) a domain model, (2) a series of REST endpoints, and (3) a means of storing our domain objects, or a persistence layer. This set of three sections is so common that it has its own name: a 3-tier application. What's more, Martin Fowler has written an entire book, Patterns of Enterprise Application Architecture, on the patterns that surround this application architecture. The three tiers in this architecture are (1) presentation, (2) domain, and (3) data source (used interchangeably with persistence layer). In our case, our REST endpoints map to the presentation layer, our order domain model maps to the domain layer, and our in-memory database maps to the data source layer.
Although these three layers are usually depicted with one stacked on top of the other, with the presentation layer at the top, closest to the user, the domain layer in the middle, and the data source layer on the bottom, it can be more helpful to look at this architecture in terms of its interactions, as illustrated in the following diagram.
There is an important addition made to our architecture: Domain objects are not sent directly to the user. Instead, they are wrapped in resources, and the resources are provided to the user. This provides a level of indirection between the domain object and how we present the domain object to the user. For example, if we wish to present the user with a different name for a field in our domain model (say orderName instead of simply name), we can do so using a resource. Although this level of indirection is very useful in decoupling our presentation from the domain model, it does allow duplication to sneak in. In most cases, the resource will resemble the interface of the domain object, with a few minor additions. This issue is addressed later when we implement our presentation layer.
The resource object also provides an apt place for us to introduce our hypermedia links. According to the Richardson model for REST web services, hypermedia-driven services are the highest capability level of a REST application and provide important information associated with the resource data. For example, we can provide links for deleting or updating the resource, which removes the need for the client consuming our REST web service to know the REST endpoints for these actions. In practice, the returned resource (deserialized to JSON) may resemble the following:
{
  "id": 1,
  "description": "Some sample order",
  "costInCents": 250,
  "complete": false,
  "_links": {
    "self": { "href": "http://localhost:8080/order/1" },
    "update": { "href": "http://localhost:8080/order/1" },
    "delete": { "href": "http://localhost:8080/order/1" }
  }
}
Given these links, the consumer is no longer required to build the URLs for the update, delete, or self-reference REST endpoints. Instead, it can simply use the links provided in our hypermedia-driven response. Not only does this reduce the logic necessary for interacting with our REST web service (the URLs no longer need to be built), but it also encapsulates the logic for constructing the URLs. For example, suppose we change our REST service to require a query parameter, such as sorting: if we provide the links to the consumer, we can adjust for that change without making any changes to the consumer:
{
  "id": 1,
  "description": "Some sample order",
  "costInCents": 250,
  "complete": false,
  "_links": {
    "self": { "href": "http://localhost:8080/order/1?sorting=default" },
    "update": { "href": "http://localhost:8080/order/1?sorting=default" },
    "delete": { "href": "http://localhost:8080/order/1?sorting=default" }
  }
}
Although generating these links could be tedious and subject to a large number of bugs (i.e. what if the IP address of the machine hosting the web service changes?), the Spring Hypermedia as the Engine of Application State (HATEOAS, commonly pronounced hay-tee-os) framework provides numerous classes and builders that allow us to create these links with ease. This topic will be explored further when we delve into the implementation of our presentation layer.
Before moving to the implementation of our web service, we must pull our design together and devise a plan of action to create it. At the moment, we have a single domain object, Order, instances of which will be persisted in an in-memory database and served up (within a resource) to clients using our REST endpoints. This design leaves us with four main steps:
Finding where to start can be difficult, but we will take a systematic approach to creating our web service: start with the layer that depends on the fewest other layers and is depended on by the greatest number of layers. Thus, we will first implement the domain layer, which does not depend on the other two layers but is depended on by both. Then we will implement the data source layer, which depends on the domain layer and is in turn depended on by the presentation layer. Lastly, we will implement the presentation layer, which is not depended on by any other layer but depends on both the data source and domain layers.
The first step in creating our RESTful web service is creating the domain layer. While our domain layer is very simple, we will soon see that it requires some very specific details to be accounted for in order to be properly used by the rest of our system. The first of these details is identity.
All domain objects that will be persisted must have some unique means of identity; the simplest among these is a unique ID. While there are many data structures that can be used as IDs, such as Universally Unique IDs (UUIDs), we will keep it simple from the start and use a numeric value. Each class in our domain layer must, therefore, have a means of obtaining the ID associated with the object. In addition, to allow our persistence layer to set the ID of a newly created domain object, we must have a means of setting the ID for an object as well. Since we do not want the persistence layer to depend on any concrete class in our domain layer for this functionality, we will create an interface to accomplish this task:
public interface Identifiable extends org.springframework.hateoas.Identifiable<Long> {
    public void setId(Long id);
}
While we could have simply created an interface with a getter and a setter for the ID, we instead extend the Identifiable interface provided by the Spring HATEOAS framework. This interface provides a getId() method, and extending it allows us to use our domain class within the Spring HATEOAS framework, which will come in handy when we implement the presentation layer. With our identity interface complete, we can now create our Order class:
public class Order implements Identifiable {

    private Long id;
    private String description;
    private long costInCents;
    private boolean isComplete;

    @Override
    public Long getId() {
        return id;
    }

    @Override
    public void setId(Long id) {
        this.id = id;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public void setCostInCents(long cost) {
        costInCents = cost;
    }

    public long getCostInCents() {
        return costInCents;
    }

    public void setComplete(boolean isComplete) {
        this.isComplete = isComplete;
    }

    public void markComplete() {
        setComplete(true);
    }

    public void markIncomplete() {
        setComplete(false);
    }

    public boolean isComplete() {
        return isComplete;
    }
}
Our Order class is strikingly simple, but that is the beauty of separating domain classes from the remainder of the system: we can create Plain Old Java Objects (POJOs) that are independent of the rest of the intricacies and idiosyncrasies of our web service (such as how the POJO is persisted and how it is presented through a RESTful interface). Due to the simplicity of this class (it is only composed of getters and setters), we have forgone unit tests.
Although automated unit tests are important, they should be used with discretion. If our Order class contained complicated business logic, such as "an order can only be created between 8 a.m. and 5 p.m. EST," we would be wise to implement this logic by first creating a test that exercises this functionality and then implementing our Order class to pass this test. As more functionality is added, we can create more tests and implement the functionality to pass these tests as well (the core of TDD). This leaves us with a general rule about unit testing, especially for domain objects:
Unit tests can be forgone for simple methods, such as getters and setters, but should be used in conjunction with TDD for more complex methods, such as those containing business logic.
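To make the rule concrete, here is a small sketch of what test-first development of the hypothetical "orders may only be created between 8 a.m. and 5 p.m." rule from the text could look like. Everything here (the class, method names, and the decision to elide time-zone handling) is illustrative, not code from the repository:

```java
import java.time.LocalTime;

public class OrderCreationRuleSketch {
    // Hypothetical business rule: orders may only be created between
    // 8 a.m. (inclusive) and 5 p.m. (exclusive); time zones elided.
    static boolean isWithinCreationWindow(LocalTime time) {
        return !time.isBefore(LocalTime.of(8, 0))
                && time.isBefore(LocalTime.of(17, 0));
    }

    public static void main(String[] args) {
        // In TDD, checks like these are written before the logic exists:
        System.out.println(isWithinCreationWindow(LocalTime.of(9, 30)));  // true
        System.out.println(isWithinCreationWindow(LocalTime.of(17, 30))); // false
        System.out.println(isWithinCreationWindow(LocalTime.of(7, 59)));  // false
    }
}
```

Getter/setter-only classes like Order, by contrast, gain little from such tests.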
It is also important to note that we have created three methods for setting the completion status of our order: (1) setComplete, (2) markComplete, and (3) markIncomplete. Although it is usually not good practice to set the value of a boolean flag using a setter that takes a boolean argument, we have added this method to our Order class because it allows for simplified update logic. For example, if we did not have this method, we would have to update the completion status of an order in a manner akin to:
public void updateOrder(Order original, Order updated) {
    // Update the other fields of the order
    if (updated.isComplete()) {
        original.markComplete();
    } else {
        original.markIncomplete();
    }
}
By providing a setter with a boolean argument, we can reduce this update logic to:
public void updateOrder(Order original, Order updated) {
    // Update the other fields of the order
    original.setComplete(updated.isComplete());
}
We have also provided the remaining two setter methods for the completion state to allow for clients to set the completion status in the recommended manner (without passing a boolean flag to a setter method).
In general, we want to externalize the update logic of our domain objects. Instead of creating an update(Order updated) method within our Order class, we leave it up to external clients to perform the update. The reasoning behind this is that our Order class does not have enough knowledge of the update process needed by external clients to internalize this logic. For example, when we update an Order in the persistence layer, do we update the ID of our order or leave the original ID? That is a decision that should be made in the persistence layer, and therefore, we leave it up to the persistence layer to implement the update logic for our Order class.
The first step to developing our data source layer is to define the requirements that this layer must fulfill. At its most basic level, our data source must be able to perform the following fundamental operations:
These five operations are common among the vast majority of data sources and, unsurprisingly, follow the basic set of CRUD operations. In this particular case, we create a specialization of the read operation (the R in CRUD) by allowing a caller to supply an ID and retrieve only the order that corresponds to that ID (if such an order exists).
This specialization is particularly useful when dealing with databases, where reads and writes are relatively expensive operations. If we were only capable of retrieving all orders and then filtering by ID to find the order of interest, we would wastefully perform a read on each and every order in the database. In a large system with possibly thousands or even millions of entries, this strategy is completely untenable. Instead, we provide methods to target our searches for orders, which in turn allows our data source to optimize the logic used to retrieve entries from its database.
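The targeted lookup can be sketched as a stream filter over an in-memory list — a simplified, Spring-free stand-in for the repository method we implement later (the OrderStub record here is a hypothetical placeholder for the real Order class):

```java
import java.util.List;
import java.util.Optional;

public class FindByIdSketch {
    // Hypothetical, minimal stand-in for the Order domain class
    record OrderStub(long id, String description) {}

    static Optional<OrderStub> findById(List<OrderStub> orders, long id) {
        // Target only the order with the matching ID instead of forcing
        // the caller to fetch every order and filter on its own
        return orders.stream().filter(o -> o.id() == id).findFirst();
    }

    public static void main(String[] args) {
        List<OrderStub> orders = List.of(
                new OrderStub(1, "First order"),
                new OrderStub(2, "Second order"));

        System.out.println(findById(orders, 2).get().description()); // Second order
        System.out.println(findById(orders, 99).isPresent());        // false
    }
}
```

A real database would go further and use an index rather than a linear scan, but the interface presented to callers is the same.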
Although we will use an in-memory database in our system, we will include this specialized retrieval method in order to maintain a consistent interface with clients. Although we internally know that the data source is an in-memory database, any clients should not have to change depending on our internal data source implementation.
Since we will be using an in-memory persistence scheme, we cannot rely on a database to provide a new ID for each domain object that we persist. Instead, we will need to create an ID generator that will provide us with unique IDs for each of the Orders we will persist. Our implementation of this generator is as follows:
@Component
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public class IdGenerator {

    private AtomicLong nextId = new AtomicLong(1);

    public long getNextId() {
        return nextId.getAndIncrement();
    }
}
Although our ID generator is very simple, there are some important aspects that require special attention. First, we have marked our generator as a Spring component using the @Component annotation. This annotation allows our Spring Boot application to recognize (through component scanning) our generator class as an injectable component. This will allow us to use the @Autowired annotation in other classes and have our component injected without having to create the generator using the new operator. For more information on Spring DI and autowiring, see the Spring Reference Documentation for the Inversion of Control (IoC) Container.
We have also denoted that the scope of the generator (using the @Scope annotation) is PROTOTYPE, which ensures that a new object is created each time it is autowired. By default, when Spring injects an autowired dependency, it treats each component as a singleton, injecting the same object into each autowired location (within the same context). Using this default logic would result in each caller receiving the same generator, which would lead to inconsistent ID generation.
For example, suppose we inject an ID generator into two data sources. If the same generator is used and source one gets an ID, the resulting ID will be 1. If source two then requests an ID, the resulting ID will be 2. This is contrary to our desired scheme, where each data source starts with an ID of 1 and increments only when a new object is created within that data source (the IDs can be differentiated based on the type of the object, i.e., "this is order 1" and "this is user 1"). Thus, the first object created in data source one should have an ID of 1, and the first object created in data source two should have an ID of 1 as well. This can only be accomplished if we have different ID generator objects, hence the prototype scoping of our ID generator class.
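Stripped of the Spring annotations, the prototype-scope requirement boils down to each repository owning its own generator instance. This sketch (illustrative only; in the real system Spring performs the instantiation) shows two separate instances each producing an independent sequence starting at 1:

```java
import java.util.concurrent.atomic.AtomicLong;

public class PrototypeScopeSketch {
    // Same logic as the IdGenerator above, minus the Spring annotations
    static class IdGenerator {
        private final AtomicLong nextId = new AtomicLong(1);
        long getNextId() { return nextId.getAndIncrement(); }
    }

    public static void main(String[] args) {
        // With prototype scope, each data source gets its own instance...
        IdGenerator orderIds = new IdGenerator();
        IdGenerator userIds = new IdGenerator();

        System.out.println(orderIds.getNextId()); // 1 ("order 1")
        System.out.println(userIds.getNextId());  // 1 ("user 1"), not 2
        // ...whereas a shared singleton would have produced 1 and then 2.
    }
}
```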
A second point is the use of AtomicLong to store our next ID. It may be tempting to use a simple long to store the next ID, but this could lead to a very nuanced situation: What if two different calls are made to generate an ID at the same time? Since we are working within the realm of a concurrent web application, if we forgo any type of synchronization logic, we run into a classic race condition, where two identical IDs may be produced. In order to eliminate these difficult-to-debug issues, we use the built-in concurrency mechanisms provided by Java. This ensures that even if two callers request IDs at the same time, each ID will be unique and the consistency of the next ID will be maintained.
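A quick, standalone way to convince ourselves of this property (not part of the repository's test suite) is to hammer an AtomicLong from many threads and confirm that no ID is handed out twice:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ConcurrentIdSketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong nextId = new AtomicLong(1);
        Set<Long> seen = ConcurrentHashMap.newKeySet();

        // Request 10,000 IDs from 8 threads concurrently
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> seen.add(nextId.getAndIncrement()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // getAndIncrement() is atomic, so every returned ID is distinct
        System.out.println(seen.size()); // 10000 — no duplicates
    }
}
```

With a plain `long` and an unsynchronized `nextId++`, the same experiment would intermittently produce fewer than 10,000 distinct IDs.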
With our ID generator in place, we are now ready to implement our data source. Since much of the in-memory logic is common for all types of objects, we will create an Abstract Base Class (ABC) that contains the core logic for managing our persisted objects (note that Spring uses the nomenclature Repository to mean a data source, and hence we follow the same convention):
public abstract class InMemoryRepository<T extends Identifiable> {

    @Autowired
    private IdGenerator idGenerator;

    private List<T> elements = Collections.synchronizedList(new ArrayList<>());

    public T create(T element) {
        elements.add(element);
        element.setId(idGenerator.getNextId());
        return element;
    }

    public boolean delete(Long id) {
        return elements.removeIf(element -> element.getId().equals(id));
    }

    public List<T> findAll() {
        return elements;
    }

    public Optional<T> findById(Long id) {
        return elements.stream().filter(e -> e.getId().equals(id)).findFirst();
    }

    public int getCount() {
        return elements.size();
    }

    public void clear() {
        elements.clear();
    }

    public boolean update(Long id, T updated) {
        if (updated == null) {
            return false;
        } else {
            Optional<T> element = findById(id);
            element.ifPresent(original -> updateIfExists(original, updated));
            return element.isPresent();
        }
    }

    protected abstract void updateIfExists(T original, T desired);
}
The basic premise of this InMemoryRepository is simple: provide a mechanism for all five of the previously enumerated persistence methods for any object that implements the Identifiable interface. This is where the importance of our Identifiable interface becomes apparent: our in-memory repository knows nothing about the objects it is storing, except that it can get and set the ID of those objects. Although we have created a dependency between the data source and domain layers, we have made that dependency very weak: the data source layer depends only on an interface (rather than an abstract or concrete class that contains executable code) with only two methods. The fact that this interface is very unlikely to change also ensures that any changes to the domain layer will be unlikely to affect the data source layer.
As previously stated, our ID generator is capable of being injected into other classes, and we have done so using the @Autowired annotation. In addition, we have used a synchronized list to store our managed objects. For the same reason as the AtomicLong in our ID generator, we must ensure that we do not allow a race condition if multiple callers perform concurrent operations on our data source.
While each of the operations is simple in its implementation (due to the simple in-memory design), there is one operation that requires knowledge of the stored object to perform: updating an object if it exists. As previously expressed, our domain objects do not have enough knowledge of how to perform an update on themselves; therefore, it is the responsibility of our data source to perform this update.
Since this operation requires information we do not have, we mark it as abstract and require that concrete subclasses provide an implementation of this template method. In our case, we have only one subclass:
@Repository
public class OrderRepository extends InMemoryRepository<Order> {

    @Override
    protected void updateIfExists(Order original, Order updated) {
        original.setDescription(updated.getDescription());
        original.setCostInCents(updated.getCostInCents());
        original.setComplete(updated.isComplete());
    }
}
This concrete class simply provides an implementation for updating all non-ID fields of an Order. We have also decorated our class with the @Repository annotation, which serves the same purpose as the @Component annotation (allowing our class to be injected into other classes by the Spring DI framework) but also includes additional database-related capabilities. For more information, see the Javadoc for the @Repository annotation.
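To see the repository contract in action, here is a condensed, Spring-free illustration. The classes below are deliberately simplified stand-ins for the Identifiable, InMemoryRepository, and Order types above (the ID counter is a plain field instead of an injected, thread-safe IdGenerator), so this is a usage sketch rather than the repository's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;

public class RepositorySketch {
    // Simplified stand-in for the Order domain class
    static class Order {
        Long id;
        String description;
        Order(String description) { this.description = description; }
    }

    // Condensed version of the InMemoryRepository/OrderRepository pair
    static class OrderRepository {
        private long nextId = 1; // plain counter in place of an injected IdGenerator
        private final List<Order> elements =
                Collections.synchronizedList(new ArrayList<>());

        Order create(Order order) {
            order.id = nextId++;
            elements.add(order);
            return order;
        }

        Optional<Order> findById(Long id) {
            return elements.stream().filter(o -> o.id.equals(id)).findFirst();
        }

        boolean update(Long id, Order updated) {
            Optional<Order> found = findById(id);
            // Only non-ID fields change; the ID assigned at creation is kept
            found.ifPresent(o -> o.description = updated.description);
            return found.isPresent();
        }

        boolean delete(Long id) {
            return elements.removeIf(o -> o.id.equals(id));
        }
    }

    public static void main(String[] args) {
        OrderRepository repository = new OrderRepository();
        Order order = repository.create(new Order("Some sample order"));

        System.out.println(order.id);                                    // 1
        System.out.println(repository.update(1L, new Order("Updated"))); // true
        System.out.println(repository.findById(1L).get().description);   // Updated
        System.out.println(repository.delete(1L));                       // true
        System.out.println(repository.findById(1L).isPresent());         // false
    }
}
```

Note how update preserves the stored order's ID, matching the design decision discussed earlier: update semantics live in the persistence layer, not in the domain object.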
Although we were required to account for synchronization and DI capabilities, our entire data source layer consists of three classes and less than 70 lines of code (not counting package and import declarations). This is where the power of using a framework such as Spring comes to the forefront: many of the common tasks and conventions in web application development have been codified and made as simple as decorating classes and fields with the correct annotations. With the completion of our data source layer, we are ready to implement the final layer of our RESTful web service: the presentation layer.
Without the aid of a web application framework, creating a presentation layer would be a daunting task, but after many years, the patterns and conventional designs of RESTful web services have been captured in the Spring Model-View-Controller (MVC) framework. This framework allows us to create RESTful endpoints with much the same ease as we saw during the development of our data source layer, using annotations and helper classes to do most of the heavy lifting for us.
Starting with the class that is the most depended on and has the fewest dependencies, we will create the OrderResource first:
public class OrderResource extends ResourceSupport {

    private final long id;
    private final String description;
    private final long costInCents;
    private final boolean isComplete;

    public OrderResource(Order order) {
        id = order.getId();
        description = order.getDescription();
        costInCents = order.getCostInCents();
        isComplete = order.isComplete();
    }

    @JsonProperty("id")
    public Long getResourceId() {
        return id;
    }

    public String getDescription() {
        return description;
    }

    public long getCostInCents() {
        return costInCents;
    }

    public boolean isComplete() {
        return isComplete;
    }
}
The OrderResource class is strikingly similar to our Order class, but with a few main differences. First, we inherit from the ResourceSupport class provided by the Spring HATEOAS packages, which allows us to attach links to our resource (we will revisit this topic shortly). Second, we have made all of the fields final. Although this is not a requirement, it is a good practice because we wish to restrict the values of the fields in the resource from changing after they have been set, ensuring that they reflect the values of the Order object for which the resource is acting.
In this simple case, the OrderResource class has a one-to-one field relationship with the Order class, which begs the question: Why not just use the Order class? The primary reason to create a separate resource class is that it allows us to implement a level of indirection between the Order class itself and how that class is presented. In this case, although the fields are the same, we are also attaching links to the fields of the Order class. Without a dedicated resource class, we would have to intermix the domain logic with the presentation logic, which would cause serious dependency issues in a large-scale system.
A happy medium between the duplication of the OrderResource and Order classes is the use of Jackson annotations, which allow the fields of the Order class to act as the fields of the OrderResource class when the OrderResource class is serialized to JSON. In the Spring MVC framework, our resource class will be converted to JSON before being sent over HTTP to the consumers of our web service. The default serialization process takes each of the fields of our class and uses the field names as the keys and the field values as the values. For example, a serialized Order object may resemble the following:
{
  "id": 1,
  "description": "Some test description",
  "costInCents": 200,
  "complete": true
}
If we tried to directly embed the Order object inside our OrderResource object (i.e., implemented our OrderResource class to have a single field that holds an Order object in order to reuse the fields of the Order object), we would end up with the following:
{
  "order": {
    "id": 1,
    "description": "Some test description",
    "costInCents": 200,
    "complete": true
  }
}
Instead, we can annotate the nested Order object with the Jackson @JsonUnwrapped annotation, which removes the nesting when the OrderResource object is serialized. Such an implementation would result in the following definition for the OrderResource class:
public class OrderResource extends ResourceSupport {

    @JsonUnwrapped
    private final Order order;

    public OrderResource(Order order) {
        this.order = order;
    }
}
Serializing this class would result in our desired JSON:
{
  "id": 1,
  "description": "Some test description",
  "costInCents": 200,
  "complete": true
}
While unwrapping the nested Order object significantly reduces the size of the OrderResource class, it has one drawback: when the internal fields of the Order class change, so does the resulting serialized JSON produced from the OrderResource object. In essence, we have coupled the OrderResource class to the internal structure of the Order class, breaking encapsulation. We walk a fine line between the duplication seen in the first approach (replicating the Order fields within OrderResource) and the coupling seen in the JSON-unwrapping approach. Both have advantages and drawbacks, and judgment and experience will dictate the best times to use each.
One final note on our OrderResource class: we cannot use the getId() method as the getter for our ID, since the ResourceSupport class has a default getId() method that returns a link. Therefore, we use the getResourceId() method as the getter for our id field. We must also annotate our getResourceId() method because, by default, our resource would serialize the ID field to resourceId due to the name of the getter method. To force this property to be serialized to id, we use the @JsonProperty("id") annotation.
With our resource class in place, we need to implement an assembler that will create an OrderResource from an Order domain object. To do this, we will focus on two methods: (1) toResource, which consumes a single Order object and produces an OrderResource object, and (2) toResourceCollection, which consumes a collection of Order objects and produces a collection of OrderResource objects. Since we can implement the latter in terms of the former, we will abstract this relationship into an ABC:
public abstract class ResourceAssembler<DomainType, ResourceType> {

    public abstract ResourceType toResource(DomainType domainObject);

    public Collection<ResourceType> toResourceCollection(Collection<DomainType> domainObjects) {
        return domainObjects.stream().map(o -> toResource(o)).collect(Collectors.toList());
    }
}
In our implementation of toResourceCollection, we simply map the consumed list of Order objects to OrderResource objects by calling the toResource method on each of the Order objects in the consumed list. We then create an OrderResourceAssembler class that provides an implementation for the toResource method:
@Component
public class OrderResourceAssembler extends ResourceAssembler<Order, OrderResource> {

    @Autowired
    protected EntityLinks entityLinks;

    private static final String UPDATE_REL = "update";
    private static final String DELETE_REL = "delete";

    @Override
    public OrderResource toResource(Order order) {
        OrderResource resource = new OrderResource(order);
        final Link selfLink = entityLinks.linkToSingleResource(order);
        resource.add(selfLink.withSelfRel());
        resource.add(selfLink.withRel(UPDATE_REL));
        resource.add(selfLink.withRel(DELETE_REL));
        return resource;
    }
}
In this concrete class, we simply extend the ResourceAssembler ABC, declaring the domain object type and the resource object type, respectively, as the generic arguments. We are already familiar with the @Component annotation, which will allow us to inject this assembler into other classes as needed. The autowiring of the EntityLinks class requires some further explanation.
As we have already seen, creating links for a resource can be a difficult task. To remedy this difficulty, the Spring HATEOAS framework includes an EntityLinks class that provides helper methods for constructing links using just the domain object type. This is accomplished by having a REST endpoint class (which we will define shortly) use the @ExposesResourceFor(Class domainClass) annotation, which tells the HATEOAS framework that links built for the supplied domain class should point to that REST endpoint.
For example, suppose we create a REST endpoint that allows a client to create, retrieve, update, and delete Order objects. In order to allow Spring HATEOAS to help in the creation of links to delete and update Order objects, we must decorate the REST endpoint class with @ExposesResourceFor(Order.class). We will see shortly how this ties into the path used by the REST endpoint. For the time being, it suffices to say that the EntityLinks class allows us to create links to objects that have a corresponding @ExposesResourceFor annotation somewhere in our system. For more information on how this exposure occurs, see the Spring HATEOAS reference documentation.
The remainder of our OrderResourceAssembler class is devoted to the creation of OrderResource objects from Order objects. The creation of the resource object itself is straightforward, but the creation of the links requires some explanation. Using the EntityLinks class, we can create a link to our own resource by specifying (using the linkToSingleResource method) that we wish to create a link to an Order, which uses the Spring HATEOAS Identifiable interface to obtain the ID of the object. We then reuse this link to create three separate links: (1) a self link, (2) an update link, and (3) a delete link. We set the link relation (rel) of each link using the withRel method. We then return the fully constructed resource object. Given the three links we have created, our resulting OrderResource, when serialized to JSON, looks as follows:
{
    "id": 1,
    "description": "Some sample order",
    "costInCents": 250,
    "complete": false,
    "_links": {
        "self": { "href": "http://localhost:8080/order/1" },
        "update": { "href": "http://localhost:8080/order/1" },
        "delete": { "href": "http://localhost:8080/order/1" }
    }
}
The self link tells the consumer that if a link to this resource is needed, the provided HREF can be used. The update and delete links tell the consumer that if this resource should be updated or deleted, respectively, the provided HREF should be used.
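This is what makes the service hypermedia-driven: a client looks up the href for a given rel rather than hard-coding URLs. The hypothetical sketch below illustrates that lookup with a crude regex; a real client would use a proper JSON library such as Jackson, so treat this only as an illustration of the idea.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative HAL-style link lookup: find the href for a given rel
// ("self", "update", "delete") inside a "_links" object.
public class LinkFollowerDemo {

    public static String hrefFor(String json, String rel) {
        // Match: "<rel>": { "href": "<url>" }
        Pattern p = Pattern.compile(
                "\"" + Pattern.quote(rel) + "\"\\s*:\\s*\\{\\s*\"href\"\\s*:\\s*\"([^\"]+)\"");
        Matcher m = p.matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String json = "{ \"id\": 1, \"_links\": { "
                + "\"self\": { \"href\": \"http://localhost:8080/order/1\" }, "
                + "\"delete\": { \"href\": \"http://localhost:8080/order/1\" } } }";
        System.out.println(hrefFor(json, "delete"));
    }
}
```

A client written this way keeps working even if the server moves orders to a different path, since it always follows the links the server hands back.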
With the OrderResource class and its assembler completed, we can move on to the last, and arguably most essential, step: creating the REST endpoints. In the Spring MVC framework, a REST endpoint is created by implementing a controller class (a class annotated with @Controller or @RestController) and adding methods that correspond to the desired REST endpoints. We will list our controller class first and then explain the meaning of each section of code:
@CrossOrigin(origins = "*")
@RestController
@ExposesResourceFor(Order.class)
@RequestMapping(value = "/order", produces = "application/json")
public class OrderController {

    @Autowired
    private OrderRepository repository;

    @Autowired
    private OrderResourceAssembler assembler;

    @RequestMapping(method = RequestMethod.GET)
    public ResponseEntity<Collection<OrderResource>> findAllOrders() {
        List<Order> orders = repository.findAll();
        return new ResponseEntity<>(assembler.toResourceCollection(orders), HttpStatus.OK);
    }

    @RequestMapping(method = RequestMethod.POST, consumes = "application/json")
    public ResponseEntity<OrderResource> createOrder(@RequestBody Order order) {
        Order createdOrder = repository.create(order);
        return new ResponseEntity<>(assembler.toResource(createdOrder), HttpStatus.CREATED);
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public ResponseEntity<OrderResource> findOrderById(@PathVariable Long id) {
        Optional<Order> order = repository.findById(id);
        if (order.isPresent()) {
            return new ResponseEntity<>(assembler.toResource(order.get()), HttpStatus.OK);
        }
        else {
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        }
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.DELETE)
    public ResponseEntity<Void> deleteOrder(@PathVariable Long id) {
        boolean wasDeleted = repository.delete(id);
        HttpStatus responseStatus = wasDeleted ? HttpStatus.NO_CONTENT : HttpStatus.NOT_FOUND;
        return new ResponseEntity<>(responseStatus);
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.PUT, consumes = "application/json")
    public ResponseEntity<OrderResource> updateOrder(@PathVariable Long id, @RequestBody Order updatedOrder) {
        boolean wasUpdated = repository.update(id, updatedOrder);
        if (wasUpdated) {
            return findOrderById(id);
        }
        else {
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        }
    }
}
Note that the @CrossOrigin annotation provides support for Cross-Origin Resource Sharing (CORS) for our controller; for more information, see the Spring Enabling Cross Origin Requests for a RESTful Web Service tutorial. The @RestController annotation, as stated above, tells Spring that this class is a controller and will include REST endpoints. This annotation is coupled with the @ExposesResourceFor(Order.class) annotation, which denotes that if a link is needed to an Order object, this controller should be used to provide the path for that link. The path information for the controller (i.e. the /order in http://localhost:8080/order) is supplied using @RequestMapping, which maps the supplied string as the path for the controller.
For example, if the URL of the machine on which the controller is executed is http://localhost:8080, the path to reach this controller will be http://localhost:8080/order. We also include the type of data produced by the controller, application/json, in the request mapping to instruct Spring that this controller class produces JSON output (Spring will, in turn, include Content-Type: application/json in the header of any HTTP responses sent).
Within the controller class, we inject the OrderRepository and OrderResourceAssembler components, which allow us to access the stored Order objects and create OrderResource objects from these domain objects, respectively. Although we have a dependency on the data-store layer within our controller class, we lean on Spring to provide us with an instance of the OrderRepository, ensuring that we are only dependent on the external interface of the repository, rather than on its creation process.
The last portion of the controller class is the most crucial: the methods that perform the REST operations. In order to declare a new REST endpoint, we use @RequestMapping to annotate a method and supply the HTTP verb that we wish to use. For example, if we look at the findAllOrders method,
@RequestMapping(method = RequestMethod.GET)
public ResponseEntity<Collection<OrderResource>> findAllOrders() {
    List<Order> orders = repository.findAll();
    return new ResponseEntity<>(assembler.toResourceCollection(orders), HttpStatus.OK);
}
we use the @RequestMapping annotation to inform the Spring MVC framework that findAllOrders is intended to be called when an HTTP GET is received. This process is called mapping, and as we will see later, Spring will establish this mapping during deployment. It is important to note that the path of the mapping is relative to the path declared at the controller level. For example, since our OrderController is annotated with @RequestMapping("/order") and no path is explicitly declared for our findAllOrders method, the path used for this method is /order.
The return type of our findAllOrders method is particularly important. The ResponseEntity class is provided by the Spring MVC framework and represents an HTTP response to an HTTP request. The generic parameter of this class represents the class of the object that will be contained in the response body of the call; this response-body object will be serialized to JSON and then returned to the requesting client as a JSON string.
In this case, we will return a collection of OrderResource objects (the list of all existing orders) after obtaining the Order objects from the OrderRepository. This list is then assembled into a list of OrderResource objects and packed into a ResponseEntity object in the following line:
return new ResponseEntity<>(assembler.toResourceCollection(orders), HttpStatus.OK);
The second argument to the ResponseEntity constructor represents the HTTP status code that should be used if no exceptions or errors occur while sending the response to the requesting client. In this case, we accompany our collection of OrderResource objects with an HTTP status code of 200 OK using the enum value HttpStatus.OK.
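To make the shape of this return value concrete, the sketch below models the two things a ResponseEntity pairs together: an optional body and a status code. This is a toy stand-in, not Spring's actual class; the enum values mirror the statuses used by our controller.

```java
// A stripped-down model of ResponseEntity: body + HTTP status code.
// In the real framework the body is serialized to JSON before sending.
public class ResponseEntityDemo {

    enum HttpStatus {
        OK(200), CREATED(201), NO_CONTENT(204), NOT_FOUND(404);
        final int code;
        HttpStatus(int code) { this.code = code; }
    }

    static class ResponseEntity<T> {
        final T body;
        final HttpStatus status;
        ResponseEntity(T body, HttpStatus status) { this.body = body; this.status = status; }
        ResponseEntity(HttpStatus status) { this(null, status); }  // no response body
    }

    // Mimics the controller's pattern: 200 with a body, or 404 with none
    public static ResponseEntity<String> find(boolean present) {
        return present
                ? new ResponseEntity<>("{\"id\":1}", HttpStatus.OK)
                : new ResponseEntity<>(HttpStatus.NOT_FOUND);
    }

    public static void main(String[] args) {
        System.out.println(find(true).status.code);   // 200
        System.out.println(find(false).status.code);  // 404
    }
}
```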
The remainder of the REST endpoints uses this same basic structure, with a few notable exceptions. In the case of our findOrderById, deleteOrder, and updateOrder methods, we adjust the path of the REST endpoint to include /{id}. As previously stated, this path is relative to the controller path, and thus the resolved path for each of these methods is /order/{id}. The use of curly braces ({ and }) in a path denotes a variable whose value will be resolved to a parameter in the method it annotates. For example, if we look at the findOrderById method
@RequestMapping(value = "/{id}", method = RequestMethod.GET)
public ResponseEntity<OrderResource> findOrderById(@PathVariable Long id) {
    // ...Body hidden for brevity...
}
we see that the name of the parameter (id) matches the variable in the path and is decorated with the @PathVariable annotation. The combination of these two adornments tells Spring that we wish to have the value of the id variable in the path passed as the runtime value of the id parameter in our findOrderById method. For example, if a GET request is made to /order/1, the call to our findOrderById method will be findOrderById(1).
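The resolution step described above can be sketched in plain Java: match the path template against the request path and capture the variable. Spring MVC does this internally with far more general machinery; this hypothetical matcher only handles a single numeric {id} to illustrate the idea.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy path-variable resolution: "/order/{id}" + "/order/1" -> id = 1
public class PathVariableDemo {

    public static Long extractId(String template, String path) {
        // Turn the {id} placeholder into a numeric capture group
        String regex = template.replace("{id}", "(\\d+)");
        Matcher m = Pattern.compile("^" + regex + "$").matcher(path);
        return m.matches() ? Long.valueOf(m.group(1)) : null;
    }

    public static void main(String[] args) {
        // A GET to /order/1 resolves id = 1, i.e. findOrderById(1)
        System.out.println(extractId("/order/{id}", "/order/1"));
    }
}
```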
Another difference that must be addressed is the return value of the deleteOrder method: the return type ResponseEntity<Void> tells Spring MVC that we are returning a ResponseEntity with an associated HTTP status code but no response body (the response body is void). This results in an empty response body in the response sent to the requester.
The last difference deals with the parameters of updateOrder. In the case of updating an order, we must use the contents of the request body, but doing so as a string would be tedious: we would have to parse the string and extract the desired data ourselves, taking care not to make an easy error during the parsing process. Instead, Spring MVC will deserialize the request body into an object of our choice. If we look at the updateOrder method
@RequestMapping(value = "/{id}", method = RequestMethod.PUT, consumes = "application/json")
public ResponseEntity<OrderResource> updateOrder(@PathVariable Long id, @RequestBody Order updatedOrder) {
    // ...Body hidden for brevity...
}
we see that the updatedOrder parameter is decorated with the @RequestBody annotation. This instructs Spring MVC to deserialize the HTTP request body into the updatedOrder parameter: it takes the JSON request body (denoted by the consumes = "application/json" field in the @RequestMapping annotation) and deserializes it into an Order object (the type of updatedOrder). We are then able to use the updatedOrder parameter in the body of our updateOrder method.
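To see why @RequestBody is so convenient, the sketch below does by hand what Spring MVC does for us: turn a JSON request body into an Order object. The crude regex parsing is exactly the tedious, error-prone work the annotation saves us from (a real deserializer like Jackson is vastly more robust); the Order class here is a simplified stand-in for the one in the repository.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Manual JSON-to-Order deserialization, for illustration only.
public class RequestBodyDemo {

    static class Order {
        String description;
        int costInCents;
        boolean complete;
    }

    public static Order deserialize(String json) {
        Order order = new Order();
        order.description = group(json, "\"description\"\\s*:\\s*\"([^\"]*)\"");
        order.costInCents = Integer.parseInt(group(json, "\"costInCents\"\\s*:\\s*(\\d+)"));
        order.complete = Boolean.parseBoolean(group(json, "\"complete\"\\s*:\\s*(true|false)"));
        return order;
    }

    // Extract the first capture group of the regex, or fail loudly
    private static String group(String json, String regex) {
        Matcher m = Pattern.compile(regex).matcher(json);
        if (!m.find()) throw new IllegalArgumentException("missing field");
        return m.group(1);
    }

    public static void main(String[] args) {
        Order o = deserialize(
                "{ \"description\": \"Some updated description\", \"costInCents\": 700, \"complete\": true }");
        System.out.println(o.description + " " + o.costInCents + " " + o.complete);
    }
}
```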
With our REST endpoints defined, we are now ready to create the main method that will be executed to start our RESTful web service.
The main method used to start our web service is as follows:
@EnableEntityLinks
@EnableHypermediaSupport(type = HypermediaType.HAL)
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
The @EnableEntityLinks annotation configures Spring to include support for the EntityLinks class in our system (allowing us to inject the EntityLinks object). Likewise, the @EnableHypermediaSupport annotation instructs Spring to include support for HATEOAS, using Hypertext Application Language (HAL) when producing links. The final annotation, @SpringBootApplication, marks our application as a Spring Boot application, which configures the boilerplate code needed to start Spring and also instructs Spring to component-scan our packages to find injectable classes (such as those annotated with @Component or @Repository).
The remainder of the main method simply runs the Spring Boot application, passing the current class and the command-line arguments to the run method. Using Spring Boot, starting our web application is nearly trivial, which leaves us with only one thing left to do: deploy and consume our RESTful web service.
Since we are using Maven to manage the dependencies and build the lifecycle of our application, and Spring Boot to configure our application, we can build our project and start the HTTP server using the following command (once Maven has been installed):
mvn spring-boot:run
This will host the REST web service at http://localhost:8080. If we look closely at the output, we can see the following statements:
INFO 15204 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/order/{id}],methods=[PUT],produces=[application/json]}" onto public org.springframework.http.ResponseEntity<com.dzone.albanoj2.example.rest.resource.OrderResource> com.dzone.albanoj2.example.rest.controller.OrderController.updateOrder(java.lang.Long,com.dzone.albanoj2.example.rest.domain.Order)
INFO 15204 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/order],methods=[POST],produces=[application/json]}" onto public org.springframework.http.ResponseEntity<com.dzone.albanoj2.example.rest.resource.OrderResource> com.dzone.albanoj2.example.rest.controller.OrderController.createOrder(com.dzone.albanoj2.example.rest.domain.Order)
INFO 15204 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/order/{id}],methods=[GET],produces=[application/json]}" onto public org.springframework.http.ResponseEntity<com.dzone.albanoj2.example.rest.resource.OrderResource> com.dzone.albanoj2.example.rest.controller.OrderController.findOrderById(java.lang.Long)
INFO 15204 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/order],methods=[GET],produces=[application/json]}" onto public org.springframework.http.ResponseEntity<java.util.Collection<com.dzone.albanoj2.example.rest.resource.OrderResource>> com.dzone.albanoj2.example.rest.controller.OrderController.findAllOrders()
INFO 15204 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/order/{id}],methods=[DELETE],produces=[application/json]}" onto public org.springframework.http.ResponseEntity<java.lang.Void> com.dzone.albanoj2.example.rest.controller.OrderController.deleteOrder(java.lang.Long)
This output explicitly tells us that Spring has successfully found our controller class and has mapped the enumerated URLs to the respective methods of our OrderController class. With our web application started, we can retrieve the list of all existing orders by navigating to http://localhost:8080/order in any browser. When we enter this URL into a browser, the browser sends an HTTP GET to our web service, which in turn calls the findAllOrders() method of our OrderController class and returns a list of all orders, wrapped as OrderResource objects and displayed as serialized JSON. By default, the following should be returned when we navigate to http://localhost:8080/order:
[]
Since we have not created any orders yet, we see an empty list. In order to create an order, we must use an HTTP client program, such as Postman (for more information on how to use Postman to send HTTP requests, see the official Postman documentation). Using this client, we can create an order by setting the request type to POST, the request URL to http://localhost:8080/order, and the request body to the following:
{
    "description": "Our first test order",
    "costInCents": 450,
    "complete": false
}
After clicking the Send button, we should see a response status code of 201 Created and a response body of the following:
{
    "description": "Our first test order",
    "costInCents": 450,
    "complete": false,
    "_links": {
        "self": { "href": "http://localhost:8080/order/1" },
        "update": { "href": "http://localhost:8080/order/1" },
        "delete": { "href": "http://localhost:8080/order/1" }
    },
    "id": 1
}
We can now navigate back to http://localhost:8080/order in our browser (or perform a GET on that address in Postman) and see the following response:
[
    {
        "description": "Our first test order",
        "costInCents": 450,
        "complete": false,
        "_links": {
            "self": { "href": "http://localhost:8080/order/1" },
            "update": { "href": "http://localhost:8080/order/1" },
            "delete": { "href": "http://localhost:8080/order/1" }
        },
        "id": 1
    }
]
This is simply a list containing our only order. If we wish to make an update to our existing order, we can simply execute a PUT to http://localhost:8080/order/1 (note the /1 at the end of the URL, denoting that we are directing our request to the order with ID 1) with the following request body:
{
    "description": "Some updated description",
    "costInCents": 700,
    "complete": true
}
If we then perform a GET on http://localhost:8080/order/1, we will see the following response:
{
    "description": "Some updated description",
    "costInCents": 700,
    "complete": true,
    "_links": {
        "self": { "href": "http://localhost:8080/order/1" },
        "update": { "href": "http://localhost:8080/order/1" },
        "delete": { "href": "http://localhost:8080/order/1" }
    },
    "id": 1
}
The last remaining task is to delete the order. To do this, simply execute a DELETE against http://localhost:8080/order/1, which should return a response status code of 204 No Content. To check that we have successfully deleted the order, we can execute a GET on http://localhost:8080/order/1, and we should receive a response status code of 404 Not Found.
With the completion of this delete call, we have successfully exercised all of the CRUD operations that we implemented for orders in our REST web service: POST /order to create an order, GET /order to list all orders, GET /order/{id} to retrieve a single order, PUT /order/{id} to update it, and DELETE /order/{id} to delete it.
RESTful web services are at the core of most systems today. From Google to Amazon to Netflix, there is hardly a major software system that does not include some form of a RESTful web service that interacts with clients. At the origin of the REST revolution, creating these applications was a tedious task, but with the advent of frameworks such as Spring, and supplemental frameworks such as Spring Boot, this task has been tremendously simplified. With such tools, we are left to focus on the simple creation of a domain layer, persistence (data source) layer, presentation layer, and minimal configuration code to spin up a web service.
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown throughput increases of up to 5X over stock MySQL and 3X over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora delivers enhancements and innovations in storage and compute, scaling both horizontally and vertically.
Aurora supports up to 128TB of storage capacity and supports dynamic scaling of storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas. Each region can have an additional 15 Aurora replicas. In addition, Aurora provides multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions in low latency.
Aurora already provides great scalability with the growth of user data volume. Can it handle more data and support more concurrent access? You may consider using sharding to support the configuration of multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides you with a reference in choosing between Proxy and JDBC for sharding.
AWS Aurora offers a single relational database. Primary-secondary, multi-primary, global database, and other forms of hosting architecture can satisfy the various architectural scenarios above. However, Aurora doesn't provide direct support for sharding scenarios, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, some problems have to be solved, such as cross-node database Join, associated queries, distributed transactions, SQL sorting, pagination, function computation, global primary keys, capacity planning, and secondary capacity expansion after sharding.
It is generally accepted that when the capacity of a MySQL table is less than 10 million rows, query time is optimal because the height of its BTREE index is between 3 and 5. Data sharding can reduce the amount of data in a single table and at the same time distribute the read and write loads to different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
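Horizontal sharding can be sketched very simply: rows of one logical table are spread across several physical tables by hashing the sharding key. The sketch below uses the product_order_N naming convention that appears later in this article and a modulo strategy, which is one common choice; it is an illustration, not ShardingSphere's actual routing code.

```java
// Minimal horizontal-sharding router: order_id -> physical table name.
public class ShardRouterDemo {

    private final int tableCount;

    public ShardRouterDemo(int tableCount) {
        this.tableCount = tableCount;
    }

    // Route an order_id to its physical table, e.g. product_order_2
    public String tableFor(long orderId) {
        return "product_order_" + (orderId % tableCount);
    }

    public static void main(String[] args) {
        ShardRouterDemo router = new ShardRouterDemo(3);
        System.out.println(router.tableFor(10)); // product_order_1
    }
}
```

Note that once data is placed this way, changing tableCount requires rebalancing existing rows, which is exactly the "secondary capacity expansion" problem mentioned above.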
1. Advantages of vertical sharding
2. Disadvantages of vertical sharding
- Join can only be implemented by interface aggregation, which will increase the complexity of development.
3. Advantages of horizontal sharding
4. Disadvantages of horizontal sharding
- The performance of cross-database Join is poor.
Based on the analysis above, and the available studies on popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to introduce how the combination of these two products meets various forms of sharding and solves the problems brought by sharding.
ShardingSphere is an open source ecosystem consisting of a set of distributed database middleware solutions, comprising 3 independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
The characteristics of Sharding-JDBC are:
Hybrid Structure Integrating Sharding-JDBC and Applications
Sharding-JDBC’s core concepts
Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.
Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.
Logic table: The logical name of the horizontal sharding databases (tables) with the same schema. For instance, the logic table of the order product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: It refers to the primary table and the joiner table with the same sharding rules. For example, product_order table and product_order_item are sharded by order_id, so they are binding tables with each other. Cartesian product correlation will not appear in the multi-tables correlating query, so the query efficiency will increase greatly.
Broadcast table: It refers to tables that exist in all sharding database sources. The schema and data must be consistent in each database. It can be applied to small-data-volume tables that need to be joined with big data tables in queries, such as dictionary tables and configuration tables.
Download the example project code locally. To ensure the stability of the test code, we chose the shardingsphere-example-4.0.0 version:
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties #library split profile only
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table split profile
application.properties #spring boot profile
Code logic description:
The following is the entry class of the Spring Boot application. Execute it to run the project.
The execution logic of demo is as follows:
As business grows, write and read requests can be split across different database nodes to effectively improve the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to meet users' requirements to read and write with strong consistency, and a read-only endpoint to serve reads that do not require strong consistency. Aurora's replication latency is within single-digit milliseconds, much lower than that of MySQL's binlog-based logical replication, so a large share of the load can be directed to a read-only endpoint.
Through the one-primary, multiple-secondary configuration, query requests can be evenly distributed to multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but applications on the upper layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of SQL statements and certain routing policies.
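The routing decision an application would otherwise make by hand can be sketched as follows: write statements go to the primary data source, while reads are balanced round-robin across the replicas. The data source names mirror the ds_master / ds_slave_N configuration shown below; this toy router only illustrates the policy Sharding-JDBC automates, not its actual implementation.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy read/write-splitting router with round_robin load balancing.
public class ReadWriteRouterDemo {

    private final String master;
    private final List<String> slaves;
    private final AtomicInteger next = new AtomicInteger();

    public ReadWriteRouterDemo(String master, List<String> slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    public String route(String sql) {
        boolean isRead = sql.trim().toLowerCase().startsWith("select");
        if (!isRead) {
            return master;  // all writes go to the primary
        }
        // reads rotate round-robin across the replicas
        return slaves.get(Math.floorMod(next.getAndIncrement(), slaves.size()));
    }

    public static void main(String[] args) {
        ReadWriteRouterDemo router =
                new ReadWriteRouterDemo("ds_master", List.of("ds_slave_0", "ds_slave_1"));
        System.out.println(router.route("INSERT INTO t_order VALUES (...)")); // ds_master
        System.out.println(router.route("SELECT * FROM t_order"));            // ds_slave_0
        System.out.println(router.route("SELECT * FROM t_order"));            // ds_slave_1
    }
}
```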
ShardingSphere-JDBC provides read/write splitting features and is integrated with application programs, so that the complex configuration between application programs and database clusters can be separated from the applications. Developers can manage the shards through configuration files and combine them with ORM frameworks such as Spring JPA and MyBatis to completely separate the duplicated logic from the code, which greatly improves the maintainability of the code and reduces the coupling between code and database.
Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.
application.properties Spring Boot master profile description (replace the highlighted values with your own environment configuration):
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source-master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# How Sharding-JDBC stores its configuration information
spring.shardingsphere.mode.type=Memory
# Enable the ShardingSphere log so you can see the conversion from logical SQL to actual SQL in the output
spring.shardingsphere.props.sql.show=true
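The routing behavior configured above (writes always hit the master, reads rotate across the slaves with the round_robin load-balance algorithm) can be sketched as follows. This is an illustrative Python model, not ShardingSphere's actual implementation; the class and data-source names mirror the profile for readability only.

```python
from itertools import cycle

class MasterSlaveRouter:
    """Minimal sketch of the master-slave rule: writes go to the
    master data source, reads rotate round-robin across the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = cycle(slaves)  # round_robin load-balance algorithm

    def route(self, sql):
        # A real router parses the SQL; here we only inspect the verb.
        verb = sql.strip().split()[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE"):
            return self.master
        return next(self._slaves)

router = MasterSlaveRouter("ds_master", ["ds_slave_0", "ds_slave_1"])
print(router.route("INSERT INTO t_order (order_id) VALUES (1)"))  # ds_master
print(router.route("SELECT * FROM t_order"))                      # ds_slave_0
print(router.route("SELECT * FROM t_order"))                      # ds_slave_1
```

The log excerpts below show exactly this pattern: the same logical SELECT lands alternately on ds_slave_0 and ds_slave_1.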
As shown in the ShardingSphere-SQL log figure below, write SQL is executed on the ds_master data source. As shown in the next log figure, read SQL is executed on the ds_slave data sources in round-robin fashion.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project.
2. Perform a failover on Aurora's console.
3. Execute the REST API request.
4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers successfully.
5. The following figure shows the process of executing code failover. It takes about 37 seconds from the time when the latest SQL write succeeds to the time when the next SQL write succeeds. That is, the application recovers automatically from the Aurora failover, and the recovery time is about 37 seconds.
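Step 4 above can be sketched as a simple retry loop that measures how long the write path stays down. The helper below is illustrative only: save_user_stub stands in for the real POST to http://localhost:8088/save-user, and the stub's failure count is made up for the demo.

```python
import time

def measure_recovery(write_once, poll_interval=0.01):
    """Retry the write until it succeeds again and return how long
    the outage lasted (the article observed roughly 37 s on Aurora)."""
    start = time.monotonic()
    while True:
        try:
            write_once()               # e.g. POST /save-user
            return time.monotonic() - start
        except ConnectionError:
            time.sleep(poll_interval)  # still failing over, keep probing

# Demo with a stub that fails three times, then recovers.
failures = iter([True, True, True, False])
def save_user_stub():
    if next(failures):
        raise ConnectionError("failover in progress")

print(round(measure_recovery(save_user_stub), 2), "seconds of outage")
```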
application.properties (Spring Boot master profile) description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties (Sharding-JDBC profile) description:
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
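The inline expression t_order_item_$->{order_id % 2} above is a small Groovy snippet that ShardingSphere evaluates per row; its effect can be sketched in Python (an illustrative model, not ShardingSphere's actual engine):

```python
def route_table(logic_table: str, order_id: int, shards: int = 2) -> str:
    """Mimic an inline table-sharding expression such as
    t_order_$->{order_id % 2}: the sharding column's value
    picks one of the physical tables."""
    return f"{logic_table}_{order_id % shards}"

# Bound tables (t_order, t_order_item) share the sharding column,
# so an order and its items land on tables with the same suffix.
print(route_table("t_order", 1001))       # t_order_1
print(route_table("t_order_item", 1001))  # t_order_item_1
```

This is also why the binding-tables setting matters: with the same order_id, both logical tables resolve to the same shard suffix, so joins stay on one physical shard.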
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table and there is only one master instance, a single t_address table is created. Creating t_order produces two physical tables, t_order_0 and t_order_1.
2. Write operation
As shown in the figure below, Logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 and t_order_1 according to the table-splitting rules. Because t_order and t_order_item are bound, the records associated with order_item and order are placed on the same physical table.
3. Read operation
As shown in the figure below, join queries on order and order_item under the binding table locate the physical shard precisely based on the binding relationship. Join queries on order and order_item without the binding traverse all shards.
Create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.
application.properties (Spring Boot master profile) description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties (Sharding-JDBC profile) description:
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
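The default database strategy ds_$->{user_id % 2} and the broadcast-table rule above can be sketched like this (an illustrative Python model under the assumption of the two instances ds_0 and ds_1 named in the profile):

```python
def route_database(user_id: int, databases=("ds_0", "ds_1")) -> str:
    """Mimic the default database strategy ds_$->{user_id % 2}:
    the user_id column decides which Aurora instance holds the row."""
    return databases[user_id % len(databases)]

def route_broadcast(table="t_address", databases=("ds_0", "ds_1")):
    """A broadcast table is written to every database."""
    return [f"{db}.{table}" for db in databases]

print(route_database(10))  # ds_0
print(route_database(11))  # ds_1
print(route_broadcast())   # ['ds_0.t_address', 'ds_1.t_address']
```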
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's database-splitting and routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, its physical tables are created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1. The sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the sharding column and routing policy.
3. Read operation
A query on order is routed to the corresponding Aurora instance according to the database routing rules. A query on address, since address is a broadcast table, is executed against a randomly selected instance among the nodes in use. As shown in the figure below, join queries on order and order_item under the binding table locate the physical shard precisely based on the binding relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the global table t_address will be created on the two Aurora instances.
application.properties (Spring Boot master profile) description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties (Sharding-JDBC profile) description:
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
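With both strategies active, a row is located in two steps: user_id picks the database, then order_id picks the physical table inside it. A hedged Python sketch of that composition (illustrative only; ShardingSphere evaluates the Groovy inline expressions itself):

```python
def route(user_id: int, order_id: int) -> str:
    """Combine the database strategy ds_$->{user_id % 2} with the
    table strategy t_order_$->{order_id % 2} from the profile above."""
    return f"ds_{user_id % 2}.t_order_{order_id % 2}"

# Four physical tables in total: ds_0/ds_1 x t_order_0/t_order_1.
print(route(user_id=5, order_id=8))  # ds_1.t_order_0
```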
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's sharding and routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1. The sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the sharding column and routing policy.
3. Read operation
The read operation is similar to the database-splitting verification described in Section 2.4.3. The following figure shows the physical tables of the created database instances.
application.properties (Spring Boot master profile) description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties (Sharding-JDBC profile) description:
The URL, username, and password of each database need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
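This profile layers the two earlier mechanisms: sharding first picks a logical database, then the master-slave rule inside that group picks the physical node. A hedged sketch of the combined routing, with a hypothetical topology mirroring the data-source names in the profile:

```python
from itertools import cycle

# Hypothetical topology matching the profile: each logical database
# ds_0/ds_1 is a master with two read replicas.
TOPOLOGY = {
    "ds_0": {"master": "ds_master_0",
             "slaves": cycle(["ds_master_0_slave_0", "ds_master_0_slave_1"])},
    "ds_1": {"master": "ds_master_1",
             "slaves": cycle(["ds_master_1_slave_0", "ds_master_1_slave_1"])},
}

def route(user_id: int, is_write: bool) -> str:
    """Sharding picks the logical database (user_id % 2), then the
    master-slave rule picks the node: writes to the master, reads
    round-robin across the slaves."""
    group = TOPOLOGY[f"ds_{user_id % 2}"]
    return group["master"] if is_write else next(group["slaves"])

print(route(user_id=2, is_write=True))   # ds_master_0
print(route(user_id=2, is_write=False))  # ds_master_0_slave_0
print(route(user_id=3, is_write=False))  # ds_master_1_slave_0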
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s library splitting and routing rules are configured, the client
executes DDL, and Sharding-JDBC will automatically create corresponding tables according to table splitting rules. If t_address
is a broadcast table, t_address
will be created on both ds_0
and ds_1
. The three tables, t_address
, t_order
and t_order_item
will be created on ds_0
and ds_1
respectively.
2. Write operation
For the broadcast table t_address
, each record written will also be written to the t_address
tables of ds_0
and ds_1
.
The tables t_order
and t_order_item
of the slave library are written to the table on the corresponding instance according to the slave library field and routing policy.
3. Read operation
The join
query operations on order
and order_item
under the binding table are shown below.
3. Conclusion
As an open source product focusing on database enhancement, ShardingSphere is pretty good in terms of its community activitiy, product maturity and documentation richness.
Among its products, ShardingSphere-JDBC is a sharding solution based on the client-side, which supports all sharding scenarios. And there’s no need to introduce an intermediate layer like Proxy, so the complexity of operation and maintenance is reduced. Its latency is theoretically lower than Proxy due to the lack of intermediate layer. In addition, ShardingSphere-JDBC can support a variety of relational databases based on SQL standards such as MySQL/PostgreSQL/Oracle/SQL Server, etc.
However, due to the integration of Sharding-JDBC with the application program, it only supports Java language for now, and is strongly dependent on the application programs. Nevertheless, Sharding-JDBC separates all sharding configuration from the application program, which brings relatively small changes when switching to other middleware.
In conclusion, Sharding-JDBC is a good choice if you use a Java-based system and have to to interconnect with different relational databases — and don’t want to bother with introducing an intermediate layer.
Author
Sun Jinhua
A senior solution architect at AWS, Sun is responsible for the design and consult on cloud architecture. for providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for e-commerce platforms of automotive companies. He worked in a global leading communication equipment company as a senior engineer, responsible for the development and architecture design of multiple subsystems of LTE equipment system. He has rich experience in architecture design with high concurrency and high availability system, microservice architecture design, database, middleware, IOT etc.
1655630160
Install via pip:
$ pip install pytumblr
Install from source:
$ git clone https://github.com/tumblr/pytumblr.git
$ cd pytumblr
$ python setup.py install
A pytumblr.TumblrRestClient
is the object you'll make all of your calls to the Tumblr API through. Creating one is this easy:
client = pytumblr.TumblrRestClient(
'<consumer_key>',
'<consumer_secret>',
'<oauth_token>',
'<oauth_secret>',
)
client.info() # Grabs the current user information
Two easy ways to get your credentials to are:
interactive_console.py
tool (if you already have a consumer key & secret)client.info() # get information about the authenticating user
client.dashboard() # get the dashboard for the authenticating user
client.likes() # get the likes for the authenticating user
client.following() # get the blogs followed by the authenticating user
client.follow('codingjester.tumblr.com') # follow a blog
client.unfollow('codingjester.tumblr.com') # unfollow a blog
client.like(id, reblogkey) # like a post
client.unlike(id, reblogkey) # unlike a post
client.blog_info(blogName) # get information about a blog
client.posts(blogName, **params) # get posts for a blog
client.avatar(blogName) # get the avatar for a blog
client.blog_likes(blogName) # get the likes on a blog
client.followers(blogName) # get the followers of a blog
client.blog_following(blogName) # get the publicly exposed blogs that [blogName] follows
client.queue(blogName) # get the queue for a given blog
client.submission(blogName) # get the submissions for a given blog
Creating posts
PyTumblr lets you create all of the various types that Tumblr supports. When using these types there are a few defaults that are able to be used with any post type.
The default supported types are described below.
We'll show examples throughout of these default examples while showcasing all the specific post types.
Creating a photo post
Creating a photo post supports a bunch of different options plus the described default options * caption - a string, the user supplied caption * link - a string, the "click-through" url for the photo * source - a string, the url for the photo you want to use (use this or the data parameter) * data - a list or string, a list of filepaths or a single file path for multipart file upload
#Creates a photo post using a source URL
client.create_photo(blogName, state="published", tags=["testing", "ok"],
source="https://68.media.tumblr.com/b965fbb2e501610a29d80ffb6fb3e1ad/tumblr_n55vdeTse11rn1906o1_500.jpg")
#Creates a photo post using a local filepath
client.create_photo(blogName, state="queue", tags=["testing", "ok"],
tweet="Woah this is an incredible sweet post [URL]",
data="/Users/johnb/path/to/my/image.jpg")
#Creates a photoset post using several local filepaths
client.create_photo(blogName, state="draft", tags=["jb is cool"], format="markdown",
data=["/Users/johnb/path/to/my/image.jpg", "/Users/johnb/Pictures/kittens.jpg"],
caption="## Mega sweet kittens")
Creating a text post
Creating a text post supports the same options as default and just a two other parameters * title - a string, the optional title for the post. Supports markdown or html * body - a string, the body of the of the post. Supports markdown or html
#Creating a text post
client.create_text(blogName, state="published", slug="testing-text-posts", title="Testing", body="testing1 2 3 4")
Creating a quote post
Creating a quote post supports the same options as default and two other parameter * quote - a string, the full text of the qote. Supports markdown or html * source - a string, the cited source. HTML supported
#Creating a quote post
client.create_quote(blogName, state="queue", quote="I am the Walrus", source="Ringo")
Creating a link post
#Create a link post
client.create_link(blogName, title="I like to search things, you should too.", url="https://duckduckgo.com",
description="Search is pretty cool when a duck does it.")
Creating a chat post
Creating a chat post supports the same options as default and two other parameters * title - a string, the title of the chat post * conversation - a string, the text of the conversation/chat, with diablog labels (no html)
#Create a chat post
chat = """John: Testing can be fun!
Renee: Testing is tedious and so are you.
John: Aw.
"""
client.create_chat(blogName, title="Renee just doesn't understand.", conversation=chat, tags=["renee", "testing"])
Creating an audio post
Creating an audio post allows for all default options and a has 3 other parameters. The only thing to keep in mind while dealing with audio posts is to make sure that you use the external_url parameter or data. You cannot use both at the same time. * caption - a string, the caption for your post * external_url - a string, the url of the site that hosts the audio file * data - a string, the filepath of the audio file you want to upload to Tumblr
#Creating an audio file
client.create_audio(blogName, caption="Rock out.", data="/Users/johnb/Music/my/new/sweet/album.mp3")
#lets use soundcloud!
client.create_audio(blogName, caption="Mega rock out.", external_url="https://soundcloud.com/skrillex/sets/recess")
Creating a video post
Creating a video post allows for all default options and has three other options. Like the other post types, it has some restrictions. You cannot use the embed and data parameters at the same time. * caption - a string, the caption for your post * embed - a string, the HTML embed code for the video * data - a string, the path of the file you want to upload
#Creating an upload from YouTube
client.create_video(blogName, caption="Jon Snow. Mega ridiculous sword.",
embed="http://www.youtube.com/watch?v=40pUYLacrj4")
#Creating a video post from local file
client.create_video(blogName, caption="testing", data="/Users/johnb/testing/ok/blah.mov")
Editing a post
Updating a post requires you knowing what type a post you're updating. You'll be able to supply to the post any of the options given above for updates.
client.edit_post(blogName, id=post_id, type="text", title="Updated")
client.edit_post(blogName, id=post_id, type="photo", data="/Users/johnb/mega/awesome.jpg")
Reblogging a Post
Reblogging a post just requires knowing the post id and the reblog key, which is supplied in the JSON of any post object.
client.reblog(blogName, id=125356, reblog_key="reblog_key")
Deleting a post
Deleting just requires that you own the post and have the post id
client.delete_post(blogName, 123456) # Deletes your post :(
A note on tags: When passing tags, as params, please pass them as a list (not a comma-separated string):
client.create_text(blogName, tags=['hello', 'world'], ...)
Getting notes for a post
In order to get the notes for a post, you need to have the post id and the blog that it is on.
data = client.notes(blogName, id='123456')
The results include a timestamp you can use to make future calls.
data = client.notes(blogName, id='123456', before_timestamp=data["_links"]["next"]["query_params"]["before_timestamp"])
# get posts with a given tag
client.tagged(tag, **params)
This client comes with a nice interactive console to run you through the OAuth process, grab your tokens (and store them for future use).
You'll need pyyaml
installed to run it, but then it's just:
$ python interactive-console.py
and away you go! Tokens are stored in ~/.tumblr
and are also shared by other Tumblr API clients like the Ruby client.
The tests (and coverage reports) are run with nose, like this:
python setup.py test
Author: tumblr
Source Code: https://github.com/tumblr/pytumblr
License: Apache-2.0 license
1624248441
Here over this article, we are discussing different REST specific annotations in Spring.
We can annotate classic controllers with the _@Controller_
annotation. This is simply a specialization of the _@Component_
class, which allows us to auto-detect implementation classes through classpath scanning.
We typically use @Controller_ it in combination with an _@RequestMapping_
annotation for request handling methods_.
_@RestController_
is a specialized version of the controller. It includes the _@Controller_
and _@ResponseBody_
annotations, and as a result, simplifies the controller implementation.
#spring #java #spring-boot #spring annotations for rest services #rest services #spring annotations
1652251528
Opencart REST API extensions - V3.x | Rest API Integration : OpenCart APIs is fully integrated with the OpenCart REST API. This is interact with your OpenCart site by sending and receiving data as JSON (JavaScript Object Notation) objects. Using the OpenCart REST API you can register the customers and purchasing the products and it provides data access to the content of OpenCart users like which is publicly accessible via the REST API. This APIs also provide the E-commerce Mobile Apps.
Opencart REST API
OCRESTAPI Module allows the customer purchasing product from the website it just like E-commerce APIs its also available mobile version APIs.
Opencart Rest APIs List
Customer Registration GET APIs.
Customer Registration POST APIs.
Customer Login GET APIs.
Customer Login POST APIs.
Checkout Confirm GET APIs.
Checkout Confirm POST APIs.
For any information about the Opencart REST API, you can contact us at:
Skype: jks0586,
Email: letscmsdev@gmail.com,
Website: www.letscms.com, www.mlmtrees.com
Call/WhatsApp/WeChat: +91–9717478599.
Download : https://www.opencart.com/index.php?route=marketplace/extension/info&extension_id=43174&filter_search=ocrest%20api
View Documentation : https://www.letscms.com/documents/api/opencart-rest-api.html
More Information : https://www.letscms.com/blog/Rest-API-Opencart
Video : https://vimeo.com/682154292
The goal of the project is to provide a flexible and configurable mechanism for writing simple services that can be exposed over HTTP.
The first exporter implemented is a JPA Repository exporter. This takes your JPA repositories and front-ends them with HTTP, allowing you full CRUD capability over your entities, including managing associations.
This project is governed by the Spring Code of Conduct. By participating, you are expected to uphold this code of conduct. Please report unacceptable behavior to spring-code-of-conduct@pivotal.io.
Here is a quick teaser of an application using Spring Data REST in Java:
@CrossOrigin
@RepositoryRestResource(path = "people")
public interface PersonRepository extends CrudRepository<Person, Long> {

  List<Person> findByLastname(String lastname);

  @RestResource(path = "byFirstname")
  List<Person> findByFirstnameLike(String firstname);
}
@Configuration
@EnableMongoRepositories
class ApplicationConfig extends AbstractMongoConfiguration {

  @Override
  public MongoClient mongoClient() {
    return new MongoClient();
  }

  @Override
  protected String getDatabaseName() {
    return "springdata";
  }
}
curl -v "http://localhost:8080/people/search/byFirstname?firstname=Oliver*&sort=name,desc"
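The teaser above assumes a `Person` domain class that is not shown; a minimal sketch (the field names are inferred from the repository's query methods, everything else is an assumption) might look like this:

```java
import org.springframework.data.annotation.Id;

// Hypothetical Person document backing the PersonRepository above.
public class Person {

    @Id
    private String id;
    private String firstname;
    private String lastname;

    public String getFirstname() { return firstname; }
    public String getLastname() { return lastname; }
}
```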
Add the Maven dependency:
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-rest</artifactId>
  <version>${version}.RELEASE</version>
</dependency>
If you’d rather use the latest snapshots of the upcoming major version, use our Maven snapshot repository and declare the appropriate dependency version.
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-rest</artifactId>
  <version>${version}.BUILD-SNAPSHOT</version>
</dependency>

<repository>
  <id>spring-libs-snapshot</id>
  <name>Spring Snapshot Repository</name>
  <url>https://repo.spring.io/libs-snapshot</url>
</repository>
Having trouble with Spring Data? We’d love to help! Ask questions on Stack Overflow using the spring-data-rest tag. You can also chat with the community on Gitter. Spring Data uses GitHub as its issue-tracking system to record bugs and feature requests. If you want to raise an issue, please follow the recommendations below:
You don’t need to build from source to use Spring Data (binaries are available in repo.spring.io), but if you want to try out the latest and greatest, Spring Data can easily be built with the Maven wrapper. You also need JDK 1.8.
$ ./mvnw clean install
If you want to build with the regular mvn
command, you will need Maven v3.5.0 or above.
Also see CONTRIBUTING.adoc if you wish to submit pull requests, and in particular please sign the Contributor’s Agreement before your first non-trivial change.
Building the documentation also builds the project, without running the tests.
$ ./mvnw clean install -Pdistribute
The generated documentation is available from target/site/reference/html/index.html.
The spring.io site contains several guides that show how to use Spring Data step-by-step:
Spring Data Examples contains example projects that explain specific features in more detail.
Download Details:
Author: spring-projects
Source Code: https://github.com/spring-projects/spring-data-rest
License: Apache-2.0 License