1551149693
It is a Spring application (no Spring Boot). The database I am using is MySQL. The issue I am having is when saving the entity Driver, which has a many-to-one relationship to both Carrier and Location.
What I want is that when I save a Driver, the Driver along with its Location and Carrier is persisted to the database. However, when I try to save, I get a duplicate key violation.
Stack trace:
org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions WARN: SQL Error: 1062, SQLState: 23000 Feb 18, 2019 1:25:42 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions ERROR: Duplicate entry '910327' for key 'UK_lheij6i9eldhfhyu9j1q5fjls' Exception in thread "main" org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [UK_lheij6i9eldhfhyu9j1q5fjls]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:296) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:253) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:527) at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:61) at org.springframework.dao.support.DataAccessUtils.translateIfNecessary(DataAccessUtils.java:242) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:153) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:135) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:61) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) at com.sun.proxy.$Proxy47.saveAll(Unknown Source) at greyhound.service.GreyhoundServiceImpl.process(GreyhoundServiceImpl.java:38) at greyhound.Main.main(Main.java:17) Caused by: org.hibernate.exception.ConstraintViolationException: could not execute statement at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:59) at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:178) at org.hibernate.dialect.identity.GetGeneratedKeysDelegate.executeAndExtract(GetGeneratedKeysDelegate.java:57) at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:42) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3073) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3666) at org.hibernate.action.internal.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:81) at 
org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:645) at org.hibernate.engine.spi.ActionQueue.addResolvedEntityInsertAction(ActionQueue.java:282) at org.hibernate.engine.spi.ActionQueue.addInsertAction(ActionQueue.java:263) at org.hibernate.engine.spi.ActionQueue.addAction(ActionQueue.java:317) at org.hibernate.event.internal.AbstractSaveEventListener.addInsertAction(AbstractSaveEventListener.java:332) at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:289) at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:196) at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:127) at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:192) at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:135) at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:828) at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:795) at org.hibernate.engine.spi.CascadingActions$7.cascade(CascadingActions.java:298) at org.hibernate.engine.internal.Cascade.cascadeToOne(Cascade.java:490) at org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:415) at org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:216) at org.hibernate.engine.internal.Cascade.cascade(Cascade.java:149) at org.hibernate.event.internal.AbstractSaveEventListener.cascadeBeforeSave(AbstractSaveEventListener.java:428) at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:266) at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:196) at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:127) at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:192) at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:135) at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:62) at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:804) at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:789) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:308) at com.sun.proxy.$Proxy44.persist(Unknown Source) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.save(SimpleJpaRepository.java:489) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.saveAll(SimpleJpaRepository.java:521) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.saveAll(SimpleJpaRepository.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:359) at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:200) at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:644) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:608) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.lambda$invoke$3(RepositoryFactorySupport.java:595) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:595) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:59) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139) ... 11 more Caused by: java.sql.SQLIntegrityConstraintViolationException: Duplicate entry '910327' for key 'UK_lheij6i9eldhfhyu9j1q5fjls' at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:117) at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:970) at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1109) at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1057) at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1377) at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdate(ClientPreparedStatement.java:1042) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175) ... 69 moreProcess finished with exit code 1
Entity/model classes (getters/setters removed):
@Entity
@Table(name = "Driver")
public class Driver {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Version
    @Column(name = "version")
    private int version;

    @Column(name = "driver_id")
    private Long driverId;

    @Column(name = "first_name")
    private String firstName;

    @Column(name = "last_name")
    private String lastName;

    @Column(name = "middle_init")
    private String middleInitial;

    @ManyToOne(fetch = FetchType.EAGER)
    @Cascade({CascadeType.ALL})
    private Carrier carrier;

    @ManyToOne(fetch = FetchType.EAGER)
    @Cascade({CascadeType.ALL})
    private Location location;
}
@Entity
@Table(name = "Carrier")
public class Carrier {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Version
    @Column(name = "version")
    private int version;

    @PrimaryKeyJoinColumn
    @Column(name = "carrier_name")
    private String carrierName;

    @OneToMany
    @JoinColumn(name = "carrier_id", referencedColumnName = "id")
    private List<Driver> drivers = new ArrayList<Driver>();
}
@Entity
@Table(name = "Locations")
public class Location {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private Long id;

    @Version
    private Long version;

    @Column(name = "location_id")
    private Long locationId;

    @Column(name = "location_name")
    private String locationName;

    @OneToMany
    @JoinColumn(name = "location_id", referencedColumnName = "location_id")
    private List<Driver> drivers = new ArrayList<Driver>();
}
Code preparing the entities
private List<Driver> prepareEntityList(Result result) {
    List<Driver> drivers = new ArrayList<Driver>();
    for (DriverAssignment driverAssignment : result.getDriverAssignments()) {
        Location location = new Location();
        location.setLocationName(driverAssignment.getHomeLocation3());
        location.setLocationId(driverAssignment.getHomeLocation());

        Carrier carrier = new Carrier();
        carrier.setCarrierName(driverAssignment.getCarrierId());

        Driver driver = new Driver();
        driver.setDriverId(driverAssignment.getDriverId());
        driver.setFirstName(driverAssignment.getFirstName());
        driver.setLastName(driverAssignment.getLastName());
        driver.setMiddleInitial(driverAssignment.getMiddleInitial());
        driver.setCarrier(carrier);
        driver.setLocation(location);

        drivers.add(driver);
    }
    return drivers;
}
Question: is it possible to achieve what I am trying to do? I expect Hibernate to handle the relationships when I save, and to associate an existing Location with a Driver if it has already been saved, instead of trying to insert it again. If not, what is a suggested approach to save these entities?
Datasource configuration
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
    em.setDataSource(dataSource());
    em.setPackagesToScan(new String[] { "greyhound" });

    JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    em.setJpaVendorAdapter(vendorAdapter);
    em.setJpaProperties(additionalProperties());
    return em;
}

@Bean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://localhost:3306/greyhound1");
    dataSource.setUsername("root");
    dataSource.setPassword("");
    return dataSource;
}

@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(emf);
    return transactionManager;
}

@Bean
public PersistenceExceptionTranslationPostProcessor exceptionTranslation() {
    return new PersistenceExceptionTranslationPostProcessor();
}

Properties additionalProperties() {
    Properties properties = new Properties();
    properties.setProperty("hibernate.hbm2ddl.auto", "create-drop");
    properties.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5Dialect");
    return properties;
}
Update #2
I have a DriverRepository like this:
@Repository
public interface DriverRepository extends JpaRepository<Driver, Long> {}
To save:
repository.saveAll(drivers);
GitHub link: https://github.com/mukulgoel1989/greyhound
I have added the GitHub link in case someone is willing to give this a try.
#java #spring-boot #hibernate #jpa
1551150863
The duplicate key problem occurs because Hibernate tries to insert entries for both sides of the relationship. The solution is to mark one side as mappedBy the other.
So in the Carrier class use:
@OneToMany(mappedBy="carrier")
@JoinColumn(name = "carrier_id", referencedColumnName = "id")
private List<Driver> drivers = new ArrayList<Driver>();
And in the Location class:
@OneToMany(mappedBy="location")
@JoinColumn(name = "location_id", referencedColumnName = "location_id")
private List<Driver> drivers = new ArrayList<Driver>();
1551156624
You also need to populate Carrier and Location with the created Driver:
for(DriverAssignment driverAssignment : result.getDriverAssignments()) {
Location location = new Location();
Carrier carrier = new Carrier();
Driver driver = new Driver();
driver.setCarrier(carrier);
driver.setLocation(location);
// add this
location.getDrivers().add(driver);
carrier.getDrivers().add(driver);
drivers.add(driver);
}
This needs to be done because of the bidirectional mapping you used (@OneToMany).
Update:
Use the JPA cascade configuration, not the Hibernate one:
@ManyToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
private Carrier carrier;
@ManyToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
private Location location;
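If the same Location or Carrier can appear across multiple DriverAssignment rows (or already exists in the database), cascading from Driver will try to insert a fresh row each time, which is exactly the kind of unique constraint violation shown in the stack trace. A lookup-first sketch, assuming you add LocationRepository and CarrierRepository interfaces with derived finders (they are not in the project yet):

public interface LocationRepository extends JpaRepository<Location, Long> {
    Optional<Location> findByLocationId(Long locationId);
}

public interface CarrierRepository extends JpaRepository<Carrier, Long> {
    Optional<Carrier> findByCarrierName(String carrierName);
}

// inside the loop that builds the drivers (ideally in a @Transactional service method):
Location location = locationRepository.findByLocationId(driverAssignment.getHomeLocation())
        .orElseGet(() -> {
            Location l = new Location();
            l.setLocationId(driverAssignment.getHomeLocation());
            l.setLocationName(driverAssignment.getHomeLocation3());
            return l;
        });

Carrier carrier = carrierRepository.findByCarrierName(driverAssignment.getCarrierId())
        .orElseGet(() -> {
            Carrier c = new Carrier();
            c.setCarrierName(driverAssignment.getCarrierId());
            return c;
        });

driver.setLocation(location);
driver.setCarrier(carrier);

With the lookup in place you can also narrow the cascade on the @ManyToOne sides (for example to PERSIST and MERGE only), since the referenced rows are now managed explicitly.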
Update 2
After taking a closer look at your project, it seems that you are missing a transactional set-up on:
@Transactional
public void process() {
Inside that method you are performing many repository operations, which is fine, as all SimpleJpaRepository methods are transactional.
The problem, in my opinion, is that these operations do not all run under the same transaction and, effectively, the same persistence context (each operation runs within its own small transaction, and afterwards all the entities are detached from the persistence context).
NOTE: you may need to adjust the configuration a bit to enable the @Transactional annotation set-up.
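A minimal sketch of how the annotation-driven transaction set-up could look in this project (class names and wiring are assumptions; the beans from your existing configuration stay as they are):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = "greyhound")
public class PersistenceConfig {
    // dataSource(), entityManagerFactory() and transactionManager() beans as you already have them
}

@Service
public class GreyhoundServiceImpl {

    private final DriverRepository repository;

    public GreyhoundServiceImpl(DriverRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void process() {
        // build the list as in prepareEntityList, then save;
        // every repository call in this method now shares one transaction and one persistence context
        List<Driver> drivers = prepareEntityList(result);
        repository.saveAll(drivers);
    }
}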
1654075127
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown throughput increases of up to 5X over stock MySQL and 3X over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora brings enhancements and innovations in storage and computing, and in both horizontal and vertical scaling.
Aurora supports up to 128TB of storage capacity and supports dynamic scaling of storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas. Each region can have an additional 15 Aurora replicas. In addition, Aurora provides multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions in low latency.
Aurora already provides great scalability with the growth of user data volume. Can it handle more data and support more concurrent access? You may consider using sharding to support the configuration of multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides you with a reference in choosing between Proxy and JDBC for sharding.
AWS Aurora offers a single relational database. Primary-secondary, multi-primary, global database, and other forms of hosting architecture can satisfy the architectural scenarios above. However, Aurora does not provide direct support for sharding scenarios, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, some problems have to be solved, such as cross-node database Join and associated queries, distributed transactions, SQL sorting, paging, function calculation, global primary keys, capacity planning, and secondary capacity expansion after sharding.
It is generally accepted that when a MySQL table holds fewer than 10 million rows, query time is optimal, because at that size the height of its BTREE index stays between 3 and 5. Data sharding can reduce the amount of data in a single table and, at the same time, distribute the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
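As a back-of-envelope illustration of that figure (assuming an InnoDB-style B+-tree, not a claim from the original article): for a B+-tree with fan-out $f$ and $N$ indexed rows, the height is roughly

$h \approx \lceil \log_{f} N \rceil$

With 16 KB pages an internal node typically holds on the order of a thousand keys, so $f \approx 10^3$ gives $h \approx \lceil \log_{10^3} 10^{7} \rceil = 3$ for ten million rows, and only grows by a level or two as the table moves into the billions, consistent with the 3-to-5 range above.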
1. Advantages of vertical sharding
2. Disadvantages of vertical sharding
Join across the split databases can only be implemented by interface aggregation, which increases the complexity of development.
3. Advantages of horizontal sharding
4. Disadvantages of horizontal sharding
The performance of Join across shards is poor.
Based on the analysis above, and the available studies on popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to introduce how the combination of these two products meets various forms of sharding and how it solves the problems brought by sharding.
ShardingSphere is an open source ecosystem consisting of a set of distributed database middleware solutions, with three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
The characteristics of Sharding-JDBC are:
Hybrid Structure Integrating Sharding-JDBC and Applications
Sharding-JDBC’s core concepts
Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.
Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.
Logic table: The logical name of the horizontal sharding databases (tables) with the same schema. For instance, the logic table of the order product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: It refers to the primary table and the joiner table with the same sharding rules. For example, product_order table and product_order_item are sharded by order_id, so they are binding tables with each other. Cartesian product correlation will not appear in the multi-tables correlating query, so the query efficiency will increase greatly.
Broadcast table: It refers to tables that exist in all sharding database sources. The schema and data must be consistent in each database. It applies to small tables that need to be joined with large sharded tables in queries, for example dictionary tables and configuration tables.
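To make the routing concrete, here is a tiny plain-Java illustration (not the ShardingSphere API, just the arithmetic behind an inline expression like t_order_$->{order_id % 2}) of how a logic table maps to an actual table, and why binding t_order and t_order_item on order_id keeps related rows on the same shard:

public class InlineShardingDemo {

    // mirrors the inline expression t_order_$->{order_id % 2}
    static String actualTable(String logicTable, long orderId) {
        return logicTable + "_" + (orderId % 2);
    }

    public static void main(String[] args) {
        long orderId = 123456789L;
        // both tables are sharded by order_id, so the order row and its
        // order_item rows land on the same suffix, avoiding a Cartesian product on join
        System.out.println(actualTable("t_order", orderId));      // t_order_1
        System.out.println(actualTable("t_order_item", orderId)); // t_order_item_1
    }
}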
Download the example project code locally. To ensure the stability of the test code, we chose the shardingsphere-example-4.0.0 version.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties #library split profile only
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table split profile
application.properties #spring boot profile
Code logic description:
The following is the entry class of the Spring Boot application. Execute it to run the project.
The execution logic of the demo is as follows:
As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to meet users' requirements to read and write with strong consistency, and a read-only endpoint to meet requirements to read without strong consistency. Aurora's read and write latency is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a lot of load can be directed to the read-only endpoint.
Through the one-primary, multiple-secondary configuration, query requests can be evenly distributed to multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but applications on the upper layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of the SQL statement and certain routing policies.
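For comparison, the following is roughly what applications end up hand-rolling when they manage the routing themselves; a minimal sketch using Spring's AbstractRoutingDataSource, where the lookup keys and the two target data sources are placeholders for the Aurora reader/writer and read-only endpoints:

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // route read-only transactions to the read-only endpoint,
        // everything else to the reader/writer endpoint
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly() ? "reader" : "writer";
    }
}

// wiring (writerDataSource / readerDataSource are placeholders for the two Aurora endpoints):
// Map<Object, Object> targets = new HashMap<>();
// targets.put("writer", writerDataSource);
// targets.put("reader", readerDataSource);
// ReadWriteRoutingDataSource ds = new ReadWriteRoutingDataSource();
// ds.setTargetDataSources(targets);
// ds.setDefaultTargetDataSource(writerDataSource);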
ShardingSphere-JDBC provides read/write splitting features and is integrated with application programs, so the complex wiring between application programs and database clusters can be separated from the application code. Developers can manage the shard configuration through configuration files and combine it with ORM frameworks such as Spring JPA and MyBatis, completely separating the duplicated logic from the code, which greatly improves maintainability and reduces the coupling between code and database.
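With that in place, the application code stays ordinary Spring Data JPA code and the routing is decided by the master-slave rule in the profile shown below. A small hypothetical service to illustrate (OrderRepository and OrderEntity are assumed names, not necessarily those used in the example project):

@Service
public class OrderService {

    private final OrderRepository orderRepository;

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // no surrounding transaction: the SELECT is routed to one of the read replicas (round robin)
    public List<OrderEntity> listOrders() {
        return orderRepository.findAll();
    }

    // write: always routed to the writer data source (ds_master)
    @Transactional
    public OrderEntity placeOrder(OrderEntity order) {
        return orderRepository.save(order);
    }
}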
Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.
application.properties Spring Boot master profile description:
You need to replace the placeholder values with your own environment configuration.
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source-master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log, and you can see the conversion from logical SQL to actual SQL in the printed output
spring.shardingsphere.props.sql.show=true
As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.
As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in round-robin fashion.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project
2. Perform a failover on Aurora's console
3. Execute the Rest API request
4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and then eventually recovers successfully.
5. The following figure shows the failover process. It takes about 37 seconds from the time the last SQL write succeeds before the failover to the time the next SQL write succeeds. That is, the application recovers from the Aurora failover automatically, and the recovery time is about 37 seconds.
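One way to reproduce that measurement is to poll the save-user endpoint in a loop and log the gap between the last successful write before the failover and the first successful write after it. A rough sketch (the JSON payload is a placeholder; the endpoint is the one used in the test above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

public class FailoverProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8088/save-user"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"probe\"}")) // placeholder payload
                .build();

        Instant lastSuccess = null;
        Instant firstFailure = null;
        while (true) {
            try {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() < 400) {
                    if (firstFailure != null && lastSuccess != null) {
                        // recovered: print how long writes were unavailable
                        System.out.println("recovered after " + Duration.between(lastSuccess, Instant.now()));
                        return;
                    }
                    lastSuccess = Instant.now();
                } else if (firstFailure == null) {
                    firstFailure = Instant.now();
                }
            } catch (Exception e) {
                if (firstFailure == null) firstFailure = Instant.now();
            }
            Thread.sleep(500); // poll twice per second
        }
    }
}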
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties sharding-jdbc profile description
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When the Sharding-JDBC routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table and there is only one instance, a single t_address table is created. Two physical tables, t_order_0 and t_order_1, will be created when creating t_order.
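You can confirm which physical tables Sharding-JDBC actually created by connecting to the Aurora instance directly and listing them through JDBC metadata; a small sketch with placeholder connection settings:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListShardTables {
    public static void main(String[] args) throws Exception {
        // placeholders: point this at the Aurora writer endpoint and the schema used by the example
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://<aurora-endpoint>:3306/<schema>", "<user>", "<password>")) {
            ResultSet rs = conn.getMetaData().getTables(null, null, "t_order%", new String[]{"TABLE"});
            while (rs.next()) {
                // expect t_order_0, t_order_1, t_order_item_0, t_order_item_1 per the rules above
                System.out.println(rs.getString("TABLE_NAME"));
            }
        }
    }
}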
2. Write operation
As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 and t_order_1 according to the table splitting rules.
When t_order and t_order_item are bound, the order_item records and their associated order records are placed on the same physical table.
3. Read operation
As shown in the figure below, when the join query on order and order_item is performed under the binding-table configuration, the physical shard is located precisely based on the binding relationship.
The join query on order and order_item without the binding-table configuration will traverse all shards.
Create two instances on Aurora: ds_0 and ds_1.
When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on the two Aurora instances.
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties sharding-jdbc profile description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's database splitting and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written will also be written to the t_address tables of both ds_0 and ds_1.
The tables t_order and t_order_item are written to the corresponding instance according to the database sharding column and routing policy.
3. Read operation
Querying order is routed to the corresponding Aurora instance according to the database sharding routing rules.
Querying address: since address is a broadcast table, one of the instances holding address is selected at random and queried.
As shown in the figure below, when the join query on order and order_item is performed under the binding-table configuration, the physical shard is located precisely based on the binding relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1.
When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_01, t_order_02, t_order_item_01, and t_order_item_02 and the global table t_address will be created on the two Aurora instances.
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties sharding-jdbc profile description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=Your ds_0 data source url (…:3306/dev?useSSL=false&characterEncoding=utf-8)
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written will also be written to the t_address tables of both ds_0 and ds_1.
The tables t_order and t_order_item are written to the corresponding sub-database instance according to the database sharding column and routing policy.
3. Read operation
The read operation is similar to the database splitting verification described in section 2.4.3.
The following figure shows the physical tables of the created database instances.
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties sharding-jdbc profile description
The URL, username, and password of each database need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's database splitting and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written will also be written to the t_address tables of both ds_0 and ds_1.
The tables t_order and t_order_item are written to the corresponding instance according to the database sharding column and routing policy.
3. Read operation
The join query on order and order_item under the binding-table configuration is shown below.
3. Conclusion
As an open source product focusing on database enhancement, ShardingSphere is pretty good in terms of its community activity, product maturity, and documentation richness.
Among its products, ShardingSphere-JDBC is a sharding solution based on the client-side, which supports all sharding scenarios. And there’s no need to introduce an intermediate layer like Proxy, so the complexity of operation and maintenance is reduced. Its latency is theoretically lower than Proxy due to the lack of intermediate layer. In addition, ShardingSphere-JDBC can support a variety of relational databases based on SQL standards such as MySQL/PostgreSQL/Oracle/SQL Server, etc.
However, due to the integration of Sharding-JDBC with the application program, it only supports Java language for now, and is strongly dependent on the application programs. Nevertheless, Sharding-JDBC separates all sharding configuration from the application program, which brings relatively small changes when switching to other middleware.
In conclusion, Sharding-JDBC is a good choice if you use a Java-based system, have to interconnect with different relational databases, and don't want the overhead of introducing an intermediate layer.
Author
Sun Jinhua
A senior solution architect at AWS, Sun is responsible for cloud architecture design and consulting, providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for the e-commerce platforms of automotive companies. He worked at a globally leading communication equipment company as a senior engineer, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency and high-availability systems, microservice architecture design, databases, middleware, IoT, etc.
1659500100
Form objects decoupled from your models.
Reform gives you a form object with validations and nested setup of models. It is completely framework-agnostic and doesn't care about your database.
Although reform can be used in any Ruby framework, it comes with Rails support, works with simple_form and other form gems, allows nesting forms to implement has_one and has_many relationships, can compose a form from multiple objects and gives you coercion.
Reform is part of the Trailblazer framework. Full documentation is available on the project site.
Temporary note: Reform 2.2 does not automatically load Rails files anymore (e.g. ActiveModel::Validations). You need the reform-rails gem, see Installation.
Forms are defined in separate classes. Often, these classes partially map to a model.
class AlbumForm < Reform::Form
property :title
validates :title, presence: true
end
Fields are declared using ::property. Validations work exactly as you know them from Rails or other frameworks. Note that validations no longer go into the model.
Forms have a ridiculously simple API with only a handful of public methods.
#initialize always requires a model that the form represents.
#validate(params) updates the form's fields with the input data (only the form, not the model) and then runs all validations. The return value is the boolean result of the validations.
#errors returns validation messages in a classic ActiveModel style.
#sync writes form data back to the model. This will only use setter methods on the model(s).
#save (optional) will call #save on the model and nested models. Note that this implies a #sync call.
#prepopulate! (optional) will run pre-population hooks to "fill out" your form before rendering.
In addition to the main API, forms expose accessors to the defined properties. This is used for rendering or manual operations.
In your controller or operation you create a form instance and pass in the models you want to work on.
class AlbumsController
def new
@form = AlbumForm.new(Album.new)
end
This will also work as an editing form with an existing album.
def edit
@form = AlbumForm.new(Album.find(1))
end
Reform will read property values from the model in setup. In our example, the AlbumForm will call album.title to populate the title field.
Your @form is now ready to be rendered, either do it yourself or use something like Rails' #form_for, simple_form or formtastic.
= form_for @form do |f|
= f.input :title
Nested forms and collections can be easily rendered with fields_for, etc. Note that you no longer pass the model to the form builder, but the Reform instance.
Optionally, you might want to use the #prepopulate! method to pre-populate fields and prepare the form for rendering.
After form submission, you need to validate the input.
class SongsController
def create
@form = SongForm.new(Song.new)
#=> params: {song: {title: "Rio", length: "366"}}
if @form.validate(params[:song])
The #validate method first updates the values of the form - the underlying model is still treated as immutable and remains unchanged. It then runs all validations you provided in the form.
It's the only entry point for updating the form. This is per design, as separating writing and validation doesn't make sense for a form.
This allows rendering the form after validate with the data that has been submitted. However, don't get confused, the model's values are still the old, original values and are only changed after a #save or #sync operation.
After validation, you have two choices: either call #save and let Reform sort out the rest. Or call #sync, which will write all the properties back to the model. In a nested form, this works recursively, of course.
It's then up to you what to do with the updated models - they're still unsaved.
The easiest way to save the data is to call #save on the form.
if @form.validate(params[:song])
@form.save #=> populates album with incoming data
# by calling @form.album.title=.
else
# handle validation errors.
end
This will sync the data to the model and then call album.save
.
Sometimes, you need to do saving manually.
Reform allows default values to be provided for properties.
class AlbumForm < Reform::Form
property :price_in_cents, default: 9_95
end
Calling #save with a block will provide a nested hash of the form's properties and values. This does not call #save on the models and allows you to implement the saving yourself.
The block parameter is a nested hash of the form input.
@form.save do |hash|
hash #=> {title: "Greatest Hits"}
Album.create(hash)
end
You can always access the form's model. This is helpful when you were using populators to set up objects when validating.
@form.save do |hash|
album = @form.model
album.update_attributes(hash[:album])
end
Reform provides support for nested objects. Let's say the Album model keeps some associations.
class Album < ActiveRecord::Base
has_one :artist
has_many :songs
end
The implementation details do not really matter here; as long as your album exposes readers and writers like Album#artist and Album#songs, this allows you to define nested forms.
class AlbumForm < Reform::Form
property :title
validates :title, presence: true
property :artist do
property :full_name
validates :full_name, presence: true
end
collection :songs do
property :name
end
end
You can also reuse an existing form from elsewhere using :form.
property :artist, form: ArtistForm
Reform will wrap defined nested objects in their own forms. This happens automatically when instantiating the form.
album.songs #=> [<Song name:"Run To The Hills">]
form = AlbumForm.new(album)
form.songs[0] #=> <SongForm model: <Song name:"Run To The Hills">>
form.songs[0].name #=> "Run To The Hills"
When rendering a nested form you can use the form's readers to access the nested forms.
= text_field :title, @form.title
= text_field "artist[name]", @form.artist.name
Or use something like #fields_for in a Rails environment.
= form_for @form do |f|
= f.text_field :title
= f.fields_for :artist do |a|
= a.text_field :name
validate will assign values to the nested forms. sync and save work analogously to the non-nested form, just in a recursive way.
The block form of #save
would give you the following data.
@form.save do |nested|
  nested #=> {title: "Greatest Hits",
         #    artist: {name: "Duran Duran"},
         #    songs: [{title: "Hungry Like The Wolf"},
         #            {title: "Last Chance On The Stairways"}]
         #   }
end
Manual saving with a block is not encouraged. You should rather check the Disposable docs to find out how to implement your manual tweaks with the official API.
Very often, you need to give Reform some information on how to create or find nested objects when validating. This directive is called a populator and is documented here.
Add this line to your Gemfile:
gem "reform"
Reform works fine with Rails 3.1-5.0. However, inheritance of validations with ActiveModel::Validations
is broken in Rails 3.2 and 4.0.
Since Reform 2.2, you have to add the reform-rails
gem to your Gemfile
to automatically load ActiveModel/Rails files.
gem "reform-rails"
Since Reform 2.0 you need to specify which validation backend you want to use (unless you're in a Rails environment where ActiveModel will be used).
To use ActiveModel (not recommended, as it is very outdated):
require "reform/form/active_model/validations"
Reform::Form.class_eval do
include Reform::Form::ActiveModel::Validations
end
To use dry-validation (recommended):
require "reform/form/dry"
Reform::Form.class_eval do
feature Reform::Form::Dry
end
Put this in an initializer or on top of your script.
Reform allows you to map multiple models to one form. The complete documentation is here; in short, this is how it works.
class AlbumForm < Reform::Form
  include Composition

  property :id,    on: :album
  property :title, on: :album
  property :songs, on: :cd
  property :cd_id, on: :cd, from: :id
end
When initializing a composition, you have to pass a hash that contains the composees.
AlbumForm.new(album: album, cd: CD.find(1))
Reform comes with many more optional features, like hash fields, coercion, virtual fields, and so on. Check the full documentation here.
Reform is part of the Trailblazer project. Please buy my book to support the development and learn everything about Reform - there are two chapters dedicated to Reform!
By explicitly defining the form layout using ::property, there is no more need to protect against unwanted input. strong_parameter or attr_accessible become obsolete; Reform will simply ignore undefined incoming parameters.
Temporary note: This is the README and API for Reform 2. On the public API, only a few tiny things have changed. Here are the Reform 1.2 docs.
Anyway, please upgrade and report problems and do not simply assume that we will magically find out what needs to get fixed. When in trouble, join us on Gitter.
Full documentation for Reform is available online, or support us and grab the Trailblazer book. There is an Upgrading Guide to help you migrate through versions.
Great thanks to Blake Education for giving us the freedom and time to develop this project in 2013 while working on their project.
Author: trailblazer
Source code: https://github.com/trailblazer/reform
License: MIT license
1551149693
It is a spring application (no spring boot). The database I am using is MySQL. The issue I am having is when saving the entity Driver
which has a Many to one relationship on both Carrier
and Location
.
What I want to do is, when I do the save on Driver. Driver along with Location and Carrier is persisted to the database. The issue I am having is when trying to save. I get duplicate key violation
Stack trace:
org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions WARN: SQL Error: 1062, SQLState: 23000 Feb 18, 2019 1:25:42 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions ERROR: Duplicate entry '910327' for key 'UK_lheij6i9eldhfhyu9j1q5fjls' Exception in thread "main" org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [UK_lheij6i9eldhfhyu9j1q5fjls]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:296) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:253) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:527) at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:61) at org.springframework.dao.support.DataAccessUtils.translateIfNecessary(DataAccessUtils.java:242) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:153) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:135) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:61) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) at com.sun.proxy.$Proxy47.saveAll(Unknown Source) at greyhound.service.GreyhoundServiceImpl.process(GreyhoundServiceImpl.java:38) at greyhound.Main.main(Main.java:17) Caused by: org.hibernate.exception.ConstraintViolationException: could not execute statement at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:59) at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:178) at org.hibernate.dialect.identity.GetGeneratedKeysDelegate.executeAndExtract(GetGeneratedKeysDelegate.java:57) at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:42) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3073) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3666) at org.hibernate.action.internal.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:81) at 
org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:645) at org.hibernate.engine.spi.ActionQueue.addResolvedEntityInsertAction(ActionQueue.java:282) at org.hibernate.engine.spi.ActionQueue.addInsertAction(ActionQueue.java:263) at org.hibernate.engine.spi.ActionQueue.addAction(ActionQueue.java:317) at org.hibernate.event.internal.AbstractSaveEventListener.addInsertAction(AbstractSaveEventListener.java:332) at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:289) at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:196) at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:127) at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:192) at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:135) at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:828) at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:795) at org.hibernate.engine.spi.CascadingActions$7.cascade(CascadingActions.java:298) at org.hibernate.engine.internal.Cascade.cascadeToOne(Cascade.java:490) at org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:415) at org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:216) at org.hibernate.engine.internal.Cascade.cascade(Cascade.java:149) at org.hibernate.event.internal.AbstractSaveEventListener.cascadeBeforeSave(AbstractSaveEventListener.java:428) at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:266) at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:196) at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:127) at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:192) at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:135) at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:62) at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:804) at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:789) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:308) at com.sun.proxy.$Proxy44.persist(Unknown Source) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.save(SimpleJpaRepository.java:489) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.saveAll(SimpleJpaRepository.java:521) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.saveAll(SimpleJpaRepository.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:359) at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:200) at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:644) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:608) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.lambda$invoke$3(RepositoryFactorySupport.java:595) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:595) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:59) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139) ... 11 more Caused by: java.sql.SQLIntegrityConstraintViolationException: Duplicate entry '910327' for key 'UK_lheij6i9eldhfhyu9j1q5fjls' at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:117) at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:970) at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1109) at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1057) at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1377) at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdate(ClientPreparedStatement.java:1042) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175) ... 69 moreProcess finished with exit code 1
Entity/Model classes: (Have removed getters/setters)
@Entity
@Table(name = "Driver")
public class Driver {
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id;
    @Version @Column(name = "version") private int version;
    @Column(name = "driver_id") private Long driverId;
    @Column(name = "first_name") private String firstName;
    @Column(name = "last_name") private String lastName;
    @Column(name = "middle_init") private String middleInitial;
    @ManyToOne(fetch = FetchType.EAGER) @Cascade({CascadeType.ALL}) private Carrier carrier;
    @ManyToOne(fetch = FetchType.EAGER) @Cascade({CascadeType.ALL}) private Location location;
    // getters and setters removed
}
@Entity
@Table(name = "Carrier")
public class Carrier {
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id;
    @Version @Column(name = "version") private int version;
    @PrimaryKeyJoinColumn @Column(name = "carrier_name") private String carrierName;
    @OneToMany @JoinColumn(name = "carrier_id", referencedColumnName = "id")
    private List<Driver> drivers = new ArrayList<Driver>(); // assumed to mirror Location; declaration truncated in the original
    // getters and setters removed
}
@Entity
@Table(name = "Locations")
public class Location {
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id") private Long id;
    @Version private Long version;
    @Column(name = "location_id") private Long locationId;
    @Column(name = "location_name") private String locationName;
    @OneToMany @JoinColumn(name = "location_id", referencedColumnName = "location_id")
    private List<Driver> drivers = new ArrayList<Driver>();
    // getters and setters removed
}
Code preparing the entities
private List<Driver> prepareEntityList(Result result) {
    List<Driver> drivers = new ArrayList<Driver>();
    for (DriverAssignment driverAssignment : result.getDriverAssignments()) {
        Location location = new Location();
        location.setLocationName(driverAssignment.getHomeLocation3());
        location.setLocationId(driverAssignment.getHomeLocation());

        Carrier carrier = new Carrier();
        carrier.setCarrierName(driverAssignment.getCarrierId());

        Driver driver = new Driver();
        driver.setDriverId(driverAssignment.getDriverId());
        driver.setFirstName(driverAssignment.getFirstName());
        driver.setLastName(driverAssignment.getLastName());
        driver.setMiddleInitial(driverAssignment.getMiddleInitial());
        driver.setCarrier(carrier);
        driver.setLocation(location);

        drivers.add(driver);
    }
    return drivers;
}
Question: is it possible to achieve what I am trying to do, i.e. have Hibernate handle the relationships on save and associate an already-saved location with a driver instead of trying to insert it again? If not, what is a suggested approach for saving these entities?
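For illustration only (none of this is code from the project above): the "reuse if already saved" behaviour being asked about is usually expressed as a lookup before assignment. The sketch below assumes hypothetical LocationRepository and CarrierRepository interfaces whose finders are Spring Data derived queries on the natural keys.
import java.util.Optional;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical finder interfaces (not part of the post above); Spring Data
// derives the queries from the method names.
interface LocationRepository extends JpaRepository<Location, Long> {
    Optional<Location> findByLocationId(Long locationId);
}

interface CarrierRepository extends JpaRepository<Carrier, Long> {
    Optional<Carrier> findByCarrierName(String carrierName);
}

// Inside a loop like prepareEntityList: reuse an existing row when one
// matches, otherwise fall back to a new transient instance.
Location location = locationRepository
        .findByLocationId(driverAssignment.getHomeLocation())
        .orElseGet(() -> {
            Location l = new Location();
            l.setLocationId(driverAssignment.getHomeLocation());
            l.setLocationName(driverAssignment.getHomeLocation3());
            return l;
        });
driver.setLocation(location);
// ...and the same pattern for Carrier via findByCarrierName(...).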
Datasource configuration
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
    em.setDataSource(dataSource());
    em.setPackagesToScan(new String[] { "greyhound" });
    JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    em.setJpaVendorAdapter(vendorAdapter);
    em.setJpaProperties(additionalProperties());
    return em;
}

@Bean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://localhost:3306/greyhound1");
    dataSource.setUsername("root");
    dataSource.setPassword("");
    return dataSource;
}

@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(emf);
    return transactionManager;
}

@Bean
public PersistenceExceptionTranslationPostProcessor exceptionTranslation() {
    return new PersistenceExceptionTranslationPostProcessor();
}

Properties additionalProperties() {
    Properties properties = new Properties();
    properties.setProperty("hibernate.hbm2ddl.auto", "create-drop");
    properties.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5Dialect");
    return properties;
}
Update #2
I have a DriverRepository like this:
@Repository
public interface DriverRepository extends JpaRepository<Driver, Long> {}
To save:
repository.saveAll(drivers);
Github link https://github.com/mukulgoel1989/greyhound
I have added the github link in case someone is willing to give this a try.
#java #spring-boot #hibernate #jpa
1637850591
1641474122
Spring Data JPA, part of the larger Spring Data family, makes it easy to implement JPA-based repositories. This module deals with enhanced support for JPA-based data access layers. It makes it easier to build Spring-powered applications that use data access technologies.
Implementing a data access layer for an application has been cumbersome for quite a while. Too much boilerplate code has to be written to execute simple queries as well as to perform pagination and auditing. Spring Data JPA aims to significantly improve the implementation of data access layers by reducing the effort to the amount that's actually needed. As a developer you write your repository interfaces, including custom finder methods, and Spring provides the implementation automatically.
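The auditing mentioned above is not shown in the teaser below. As a rough sketch (the AuditConfig and AuditedNote names are illustrative; the annotations and the entity listener are Spring Data's), entity auditing is enabled like this:
import java.time.Instant;
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.annotation.CreatedDate;
import org.springframework.data.annotation.LastModifiedDate;
import org.springframework.data.jpa.domain.support.AuditingEntityListener;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;

@Configuration
@EnableJpaAuditing // registers the auditing infrastructure
class AuditConfig {}

@Entity
@EntityListeners(AuditingEntityListener.class) // fills the audit fields on persist/update
class AuditedNote {
  @Id @GeneratedValue Long id;
  @CreatedDate Instant createdDate;
  @LastModifiedDate Instant lastModifiedDate;
}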
This project is governed by the Spring Code of Conduct. By participating, you are expected to uphold this code of conduct. Please report unacceptable behavior to spring-code-of-conduct@pivotal.io.
Here is a quick teaser of an application using Spring Data Repositories in Java:
public interface PersonRepository extends CrudRepository<Person, Long> {
  List<Person> findByLastname(String lastname);
  List<Person> findByFirstnameLike(String firstname);
}
@Service
public class MyService {

  private final PersonRepository repository;

  public MyService(PersonRepository repository) {
    this.repository = repository;
  }

  public void doWork() {
    repository.deleteAll();

    Person person = new Person();
    person.setFirstname("Oliver");
    person.setLastname("Gierke");
    repository.save(person);

    List<Person> lastNameResults = repository.findByLastname("Gierke");
    List<Person> firstNameResults = repository.findByFirstnameLike("Oli*");
  }
}
@Configuration
@EnableJpaRepositories("com.acme.repositories")
class AppConfig {

  @Bean
  public DataSource dataSource() {
    return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
  }

  @Bean
  public JpaTransactionManager transactionManager(EntityManagerFactory emf) {
    return new JpaTransactionManager(emf);
  }

  @Bean
  public JpaVendorAdapter jpaVendorAdapter() {
    HibernateJpaVendorAdapter jpaVendorAdapter = new HibernateJpaVendorAdapter();
    jpaVendorAdapter.setDatabase(Database.H2);
    jpaVendorAdapter.setGenerateDdl(true);
    return jpaVendorAdapter;
  }

  @Bean
  public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean lemfb = new LocalContainerEntityManagerFactoryBean();
    lemfb.setDataSource(dataSource());
    lemfb.setJpaVendorAdapter(jpaVendorAdapter());
    lemfb.setPackagesToScan("com.acme");
    return lemfb;
  }
}
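The pagination support mentioned at the top of this README is not shown in the teaser either. As a rough sketch (the PagedPersonRepository name is illustrative, reusing the Person entity above; Page, Pageable and PagingAndSortingRepository are Spring Data types), a paged derived query could look like this:
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.repository.PagingAndSortingRepository;

// Derived query with paging; Spring Data generates the implementation.
interface PagedPersonRepository extends PagingAndSortingRepository<Person, Long> {
  Page<Person> findByLastname(String lastname, Pageable pageable);
}

// Usage: first page of 20 matches, sorted by first name.
// Page<Person> page = pagedPersonRepository.findByLastname(
//     "Gierke", PageRequest.of(0, 20, Sort.by("firstname")));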
Add the Maven dependency:
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-jpa</artifactId>
  <version>${version}.RELEASE</version>
</dependency>
If you'd rather use the latest snapshots of the upcoming major version, use our Maven snapshot repository and declare the appropriate dependency version.
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-jpa</artifactId>
  <version>${version}.BUILD-SNAPSHOT</version>
</dependency>

<repository>
  <id>spring-libs-snapshot</id>
  <name>Spring Snapshot Repository</name>
  <url>https://repo.spring.io/libs-snapshot</url>
</repository>
Having trouble with Spring Data? We'd love to help! Ask a question on Stack Overflow tagged with spring-data-jpa, or chat with the community on Gitter. Spring Data uses GitHub as its issue tracking system to record bugs and feature requests; if you want to raise an issue, please do so there.
You don't need to build from source to use Spring Data (binaries are available from repo.spring.io), but if you want to try out the latest and greatest, Spring Data can be easily built with the Maven wrapper. You also need JDK 1.8.
$ ./mvnw clean install
If you want to build with the regular mvn
command, you will need Maven v3.5.0 or above.
Also see CONTRIBUTING.adoc if you wish to submit pull requests, and in particular please sign the Contributor’s Agreement before your first non-trivial change.
Building the documentation also builds the project, without running the tests.
$ ./mvnw clean install -Pdistribute
The generated documentation is available from target/site/reference/html/index.html
.
The spring.io site contains several guides that show how to use Spring Data step-by-step:
Accessing Data with JPA: Learn how to work with JPA data persistence using Spring Data JPA.
Accessing JPA Data with REST is a guide to creating a REST web service exposing data stored with JPA through repositories.
Spring Data Examples contains example projects that explain specific features in more detail.
Download Details:
Author: spring-projects
Source Code: https://github.com/spring-projects/spring-data-jpa
License: Apache-2.0 License