Storm is a SQL Template and ORM framework designed for Java 21 and later versions, focusing on modernizing and simplifying database programming. By leveraging the latest features of Java, it enables developers to define entities and queries in a concise and readable manner, enhancing both productivity and code clarity.
Key benefits of Storm:
- Easy to learn: With a programming model similar to the Java Persistence API (JPA), developers familiar with JPA can quickly and easily adapt to Storm.
- Modern syntax: Storm allows for clean and concise code, making it effortless to write entities and queries.
- Type-safe: The best DSL is no DSL. Storm’s query builder mirrors SQL, providing a type-safe, intuitive experience that makes queries easy to write and read while reducing the risk of runtime errors.
- Direct database interaction: Storm translates method calls directly into database operations, offering a transparent and straightforward experience. It eliminates inefficiencies like the N+1 query problem for predictable and efficient interactions.
- Stateless: Avoids hidden complexities and “magic” with stateless, record-based entities, ensuring simplicity and eliminating lazy initialization and transaction issues downstream.
- Performance: Built with efficiency in mind, Storm supports batch processing, lazy streams, and upsert functionality to enhance performance during database interactions.
- Universal database compatibility: Fully compatible with all SQL databases, offering flexibility and broad applicability across various database systems.
In summary, Storm delivers a modern, efficient, and straightforward ORM solution that prioritizes ease of use, direct database interaction, and wide compatibility. It’s an excellent choice for Java and Kotlin developers seeking simplicity, performance, and enhanced readability in their database operations.
Storm offers a flexible and layered approach to database interaction, catering to developers with varying needs and preferences. Whether you’re looking for minimal enhancements or a complete abstraction, Storm has you covered.
Include the following dependency in your project to start using Storm:
- Maven

  <dependency>
      <groupId>st.orm</groupId>
      <artifactId>storm</artifactId>
      <version>1.3.2</version>
      <scope>compile</scope>
  </dependency>

- Gradle

  implementation 'st.orm:storm:1.3.2'
Storm entities are defined using Java record classes. By default, Storm automatically applies a naming scheme to map entity fields directly to corresponding database columns. This approach simplifies development, as explicit annotations or mappings are not required for standard naming conventions.
Consider the following example, where two entities are defined:
- The City entity maps directly to the columns: id, name, and population.
- The User entity maps directly to the columns: id, email, birth_date, street, postal_code, and city_id.
When an entity references another entity, Storm applies a default naming convention to handle foreign keys. For instance, the city field in the User entity automatically maps to the city_id column in the database, creating a foreign key relationship referencing the primary key of the City entity.
record City(@PK int id,
String name,
long population
) implements Entity<City, Integer> {}
record User(@PK int id,
String email,
LocalDate birthDate,
String street,
String postalCode,
@FK City city
) implements Entity<User, Integer> {}
Implementing the Entity
interface is optional, but required when using EntityRepository
to leverage built-in CRUD
operations.
In Storm, all fields are nullable by default, except for the primary key field, which is always non-nullable. To
explicitly mark a field as non-nullable, use the @Nonnull
annotation. When a non-nullable field is of a primitive
wrapper type (e.g., Integer
, Long
), it can alternatively be specified as its primitive counterpart (int
, long
),
inherently enforcing non-nullability.
Storm automatically validates nullability constraints when interacting with the database, throwing an exception if a
non-nullable field is found to be null
.
Additionally, it is beneficial to use the optional @Nullable
annotation, as it enables automatic null-checking support
within your IDE.
record User(@PK int id,
@Nonnull String email,
@Nonnull LocalDate birthDate,
@Nonnull String street,
String postalCode,
@Nullable @FK City city
) implements Entity<User, Integer> {}
In this example, the fields id
, email
, birthDate
, and street
are marked as non-nullable, whereas the
postalCode
and city
fields are nullable. Consequently, the relationship between the User
and City
entities
results in a left join, accommodating potential null
values for the city
field. Conversely, a non-nullable foreign
key would lead to an inner join, ensuring the referenced entity must always exist.
Storm provides built-in support for enumerations, simplifying how predefined sets of values are handled in entities. By
default, enumerations (enum
types) are stored in the database using their names. However, Storm allows you to customize
this persistence behavior using the @DbEnum
annotation, enabling storage as either the enum’s NAME
(default) or
ORDINAL
(integer index). This flexibility facilitates seamless integration with existing database schemas or preferred
storage formats.
enum RoleType {
USER,
ADMIN
}
record Role(@PK int id,
@Nonnull String name,
@Nonnull RoleType type
) implements Entity<Role, Integer> {}
In this example, the RoleType enumeration values (USER
and ADMIN
) are persisted directly by their names
("USER" and "ADMIN") in the database.
record Role(@PK int id,
@Nonnull String name,
@Nonnull @DbEnum(ORDINAL) RoleType type
) implements Entity<Role, Integer> {}
In the second example, @DbEnum(ORDINAL)
instructs Storm to persist the RoleType enumeration using its ordinal value
(integer index) instead of its name.
While Storm’s default naming conventions simplify entity definitions, custom column names or foreign key mappings can easily be accommodated. Developers can customize the mapping of entity fields to database columns using various approaches:
- Annotate the record with the @DbTable annotation to define custom table names.
- Annotate fields with the @DbColumn annotation to define custom column names, or specify a custom name in the @PK or @FK annotations.
- Provide implementations of the TableNameResolver, ColumnNameResolver, or ForeignKeyResolver interfaces to globally manage naming conventions.
This flexible approach enables developers to easily adapt Storm entities to existing database schemas or specific naming preferences.
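As an illustration, a remapped entity might be sketched as follows. The table and column names are hypothetical, and it is assumed here that @DbTable and @DbColumn accept the database name as their value:

```java
// Hypothetical: the User entity mapped onto an existing schema whose
// table and column names deviate from Storm's default conventions.
@DbTable("app_user")
record User(@PK int id,
            @DbColumn("email_address") String email,
            @DbColumn("date_of_birth") LocalDate birthDate,
            @FK City city
) implements Entity<User, Integer> {}
```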
The Storm API provides a powerful and flexible way to query entities. It supports both SQL Template mode and ORM mode, allowing developers to choose the approach that best fits their needs.
The following example demonstrates how to query the User
entity using both SQL Template mode and ORM mode. The query
fetches users based on the specified email address. The email address is passed as a bind variable to the underlying SQL
query in all modes.
- ORM

  Optional<User> user = ORM(dataSource).entity(User.class)
      .select()
      .where(User_.email, EQUALS, email)  // Type-safe!
      .getOptionalResult();

- SQL Template

  Optional<User> user = ORM(dataSource).query(RAW."""
          SELECT \{User.class}
          FROM \{User.class}
          WHERE \{User_.email} = \{email}""")
      .getOptionalResult(User.class);

- Hybrid

  Optional<User> user = ORM(dataSource).entity(User.class)
      .select()
      .where(RAW."\{User_.email} = \{email}")
      .getOptionalResult();
ORM mode should generally be preferred for its type-safe, readable syntax and portability. However, SQL Template mode is also available for those who prefer a more SQL-like approach or need to execute complex queries that may not be easily expressed in code. All ORM methods also support SQL Templates in a hybrid fashion, allowing for a seamless transition between ORM and SQL Template modes.
Storm supports one-to-one and many-to-one relationships through the use of the @FK
annotation. This annotation
allows you to define foreign key relationships between entities. For example, in the User
entity, the city
field is
annotated with @FK
, indicating that it references the City
entity. This establishes a foreign key relationship
between the two entities. Foreign keys are automatically loaded as part of the entity graph, allowing you to navigate
relationships easily. The entity graph is always loaded in a single query, eliminating the need for multiple queries to
fetch related entities.
When one-to-many relationships need to be queried, a query can be constructed to fetch the related entities. For example, to fetch all users in a specific city, you can use the following approaches:
- ORM

  List<User> usersInCity = ORM(dataSource).entity(User.class)
      .select()
      .where(User_.city, EQUALS, city)  // Type-safe!
      .getResultList();

- SQL Template

  List<User> usersInCity = ORM(dataSource).query(RAW."""
          SELECT \{User.class}
          FROM \{User.class}
          WHERE \{city}""")
      .getResultList(User.class);

- Hybrid

  List<User> usersInCity = ORM(dataSource).entity(User.class)
      .select()
      .where(RAW."\{city}")
      .getResultList();
For many-to-many relationships, a join table is required. The join table can be represented as a separate entity, and
the relationship can be defined using the @FK
annotation. For example, consider the following entities:
record UserRolePk(int userId, int roleId) {}
record UserRole(@PK UserRolePk userRolePk,
@Nonnull @FK User user,
@Nonnull @FK Role role
) implements Entity<UserRole, UserRolePk> {}
The UserRole
entity represents the join table between User
and Role
. The userRolePk
field is a composite primary
key that consists of the user ID and role ID. The user
and role
fields are foreign keys that reference the User
and Role
entities, respectively.
- ORM

  List<UserRole> userRoles = ORM(dataSource).entity(UserRole.class)
      .select()
      .where(UserRole_.role, EQUALS, role)  // Type-safe!
      .getResultList();

- SQL Template

  List<UserRole> userRoles = ORM(dataSource).query(RAW."""
          SELECT \{UserRole.class}
          FROM \{UserRole.class}
          WHERE \{role}""")
      .getResultList(UserRole.class);

- Hybrid

  List<UserRole> userRoles = ORM(dataSource).entity(UserRole.class)
      .select()
      .where(RAW."\{role}")
      .getResultList();
Alternatively, you can use the UserRole
entity to fetch users or roles associated with a specific user or role. For
example, to fetch all users associated with a specific role, you can use the following approaches using join tables:
- ORM

  List<Role> roles = ORM(dataSource).entity(Role.class)
      .select()
      .innerJoin(UserRole.class).on(Role.class)
      .where(UserRole_.user, EQUALS, user)  // Type-safe!
      .getResultList();

- SQL Template

  List<Role> roles = ORM(dataSource).query(RAW."""
          SELECT \{Role.class}
          FROM \{Role.class}
          INNER JOIN \{UserRole.class} ON \{UserRole_.role} = \{Role_.id}
          WHERE \{UserRole_.user} = \{user.id()}""")
      .getResultList(Role.class);

- Hybrid

  List<Role> roles = ORM(dataSource).entity(Role.class)
      .select()
      .innerJoin(UserRole.class).on(Role.class)
      .where(RAW."\{UserRole_.user} = \{user.id()}")
      .getResultList();
Storm supports filtering results using the where
method. This allows you to specify conditions for filtering
results based on specific fields. The following example demonstrates how to build a where clause using multiple
conditions:
- ORM

  List<User> users = ORM(dataSource).entity(User.class)
      .select()
      .where(it -> it.where(User_.city, EQUALS, city)
          .and(it.where(User_.birthDate, LESS_THAN, LocalDate.of(2000, 1, 1))))
      .getResultList();

- SQL Template

  List<User> users = ORM(dataSource).query(RAW."""
          SELECT \{User.class}
          FROM \{User.class}
          WHERE \{city} AND \{User_.birthDate} < \{LocalDate.of(2000, 1, 1)}""")
      .getResultList(User.class);

- Hybrid

  List<User> users = ORM(dataSource).entity(User.class)
      .select()
      .where(RAW."\{city} AND \{User_.birthDate} < \{LocalDate.of(2000, 1, 1)}")
      .getResultList();
Storm supports aggregating results using the groupBy
method. This allows you to group results based on specific fields
and perform aggregate functions like COUNT
, SUM
, AVG
, etc.
record GroupedByCity(City city, long count) {}
The GroupedByCity
can be a local record or a top-level class. The example below shows how to use the groupBy
method
to group users by city and count the number of users in each city:
- ORM

  List<GroupedByCity> counts = ORM(dataSource).entity(User.class)
      .select(GroupedByCity.class, RAW."\{City.class}, COUNT(*)")
      .groupBy(User_.city)
      .getResultList();

- SQL Template

  List<GroupedByCity> counts = ORM(dataSource).query(RAW."""
          SELECT \{City.class}, COUNT(*)
          FROM \{User.class}
          GROUP BY \{User_.city}""")
      .getResultList(GroupedByCity.class);

- Hybrid

  List<GroupedByCity> counts = ORM(dataSource).entity(User.class)
      .select(GroupedByCity.class, RAW."\{City.class}, COUNT(*)")
      .groupBy(RAW."\{User_.city}")
      .getResultList();
The GroupedByCity
record is used to represent the result of the aggregation. The select
method specifies the
columns to be selected, and the groupBy
method specifies the field to group by. The result is a list of
GroupedByCity
records, each containing a City
object and the count of users in that city. Additionally, a having
clause can be added by using the having
method.
Storm supports ordering results using the orderBy
method. This allows you to specify the order in which results should
be returned. The following example demonstrates how to order users by their birth date in ascending order:
- ORM

  List<User> users = ORM(dataSource).entity(User.class)
      .select()
      .orderBy(User_.birthDate)
      .getResultList();

- SQL Template

  List<User> users = ORM(dataSource).query(RAW."""
          SELECT \{User.class}
          FROM \{User.class}
          ORDER BY \{User_.birthDate}""")
      .getResultList(User.class);

- Hybrid

  List<User> users = ORM(dataSource).entity(User.class)
      .select()
      .orderBy(RAW."\{User_.birthDate}")
      .getResultList();
The orderBy
method specifies the field to order by. You can also specify the order direction (ascending or
descending), or order by multiple fields by using the SQL Template version of the orderBy
method.
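For instance, ordering by birth date descending and then by email could be sketched with the SQL Template variant of orderBy; the exact expressions accepted by the template are an assumption:

```java
// Youngest users first; ties broken alphabetically by email.
List<User> users = ORM(dataSource).entity(User.class)
    .select()
    .orderBy(RAW."\{User_.birthDate} DESC, \{User_.email}")
    .getResultList();
```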
Entity repositories provide a high-level abstraction for managing entities in the database. They offer a set of methods
for creating, reading, updating, and deleting entities, as well as querying and filtering entities based on specific
criteria. The EntityRepository
interface is designed to work with entity records that implement the Entity
interface, providing a consistent and type-safe way to interact with the database.
An entity repository can be obtained by invoking entity on an ORMTemplate with the desired entity class. The ORM template can be obtained as demonstrated below. Note that ORM templates are supported for data sources, JDBC connections, and JPA entity managers.
ORMTemplate orm = ORM(dataSource);
EntityRepository<User> userRepository = orm.entity(User.class);
Alternatively, a specialized repository can be requested by calling the repository
method with the repository class.
Specialized repositories allow specialized repository methods to be defined in the repository interface. The specialized
repository can be used to implement specialized queries or operations that are specific to the entity type. The custom
logic can utilize the QueryBuilder
interface to build SELECT and DELETE statements.
- ORM

  interface UserRepository extends EntityRepository<User> {

      // CRUD operations for User are inherited from EntityRepository.

      // Specialized repository methods go here. Example:
      default Optional<User> findByEmail(String email) {
          return select()
              .where(User_.email, EQUALS, email)
              .getOptionalResult();
      }
  }
Specialized entity repositories can be retrieved using the repository
method, which accepts the repository class as an
argument.
UserRepository userRepository = orm.repository(UserRepository.class);
Refs are a powerful feature provided by Storm for efficiently managing entity relationships. A Ref serves as a lightweight identifier for the referenced entity, deferring the fetching of entity data until explicitly required. This approach effectively handles large object graphs and optimizes database performance by avoiding unnecessary data retrieval. Refs are particularly useful in scenarios where you want to:
- Represent foreign key relationships without immediately fetching the referenced entity.
- Optimize performance by reducing memory usage when full entity details are not required.
- Efficiently use entities as keys in hash-based data structures.
Refs allow the inclusion of related entities in the object graph without preloading them. When you include a Ref to an
entity, it doesn’t immediately load the referenced entity. Instead, the data is fetched only when you explicitly call
fetch()
on the Ref
. This behavior reduces unnecessary database operations, improving application performance. The
primary key of the referenced entity is available in the Ref and can be obtained using the id()
method.
record User(@PK int id,
String email,
LocalDate birthDate,
String street,
String postalCode,
@FK Ref<City> city
) implements Entity<User, Integer> {}
Another significant advantage of using Refs is to prevent circular dependencies within your object graphs. By using Refs, you explicitly control when and how each part of the object graph is loaded, effectively preventing circular dependencies.
record User(@PK int id,
String email,
LocalDate birthDate,
String street,
String postalCode,
@FK City city,
@FK Ref<User> invitedBy
) implements Entity<User, Integer> {}
In this example, the invitedBy
field is a Ref to another User entity. The Ref represents a nullable field. When the
underlying database field is null, it is set to the Ref.ofNull()
instance. The null state of the Ref
can be checked
by calling its isNull()
method.
Refs also help minimize memory usage and data retrieval. They store only the entity type and primary key information until explicitly fetched, making them highly efficient in terms of memory footprint. This is particularly useful when dealing with large datasets or when entities are primarily needed as keys in collections such as hash maps or sets.
- ORM

  Role role = ...;
  List<Ref<User>> users = ORM(dataSource).entity(UserRole.class)
      .selectRef(User.class)
      .where(UserRole_.role, role)
      .getResultList();

- SQL Template

  Role role = ...;
  List<Ref<User>> users = ORM(dataSource).query(RAW."""
          SELECT \{select(User.class, SelectMode.PK)}
          FROM \{UserRole.class}
          WHERE \{role}""")
      .getRefList(User.class, Integer.class);

- Hybrid

  List<Ref<User>> users = ORM(dataSource).entity(UserRole.class)
      .selectRef(User.class)
      .where(RAW."\{role}")
      .getResultList();
The example demonstrates how to fetch a list of user refs associated with a specific role. The resulting list contains Ref<User> objects, which can be used to access the user entities later, or whose identity can be used to perform further operations.
- ORM

  List<Ref<User>> users = ...;
  List<Role> roles = ORM(dataSource).entity(UserRole.class)
      .select(Role.class)
      .distinct()
      .whereRef(UserRole_.user, users)
      .getResultList();

- SQL Template

  List<Ref<User>> users = ...;
  List<Role> roles = ORM(dataSource).query(RAW."""
          SELECT DISTINCT \{Role.class}
          FROM \{UserRole.class}
          WHERE \{users}""")
      .getResultList(Role.class);

- Hybrid

  List<Ref<User>> users = ...;
  List<Role> roles = ORM(dataSource).entity(UserRole.class)
      .select(Role.class)
      .distinct()
      .where(RAW."\{users}")
      .getResultList();
The example demonstrates how to use the where
method to filter results based on a list of user refs. The resulting
list contains distinct Role
objects associated with the specified user refs.
The GroupedByCity
record can also be used to capture the city ref and the count of users in that city:
record GroupedByCity(Ref<City> city, long count) {}
The following example demonstrates how to select the primary key of the City
entity using SelectMode.PK
and map it
directly to a Ref<City>
within the GroupedByCity
record. The results are then collected into a map, where the key is
the Ref<City>
and the value is the count of users in that city. This map can be used to efficiently access the count
of users for each city without loading the entire entity graph.
- ORM

  Map<Ref<City>, Long> counts = ORM(dataSource).entity(User.class)
      .select(GroupedByCity.class, RAW."\{select(City.class, SelectMode.PK)}, COUNT(*)")
      .groupBy(User_.city)
      .getResultList().stream()
      .collect(toMap(GroupedByCity::city, GroupedByCity::count));

- SQL Template

  Map<Ref<City>, Long> counts = ORM(dataSource).query(RAW."""
          SELECT \{select(City.class, SelectMode.PK)}, COUNT(*)
          FROM \{User.class}
          GROUP BY \{User_.city}""")
      .getResultList(GroupedByCity.class).stream()
      .collect(toMap(GroupedByCity::city, GroupedByCity::count));

- Hybrid

  Map<Ref<City>, Long> counts = ORM(dataSource).entity(User.class)
      .select(GroupedByCity.class, RAW."\{select(City.class, SelectMode.PK)}, COUNT(*)")
      .groupBy(RAW."\{User_.city}")
      .getResultList().stream()
      .collect(toMap(GroupedByCity::city, GroupedByCity::count));
Storm works directly with the underlying database platform, whether that is JPA, a JDBC connection, or a JDBC data source. It does not provide its own transaction management. Instead, it relies on the transaction management capabilities of the underlying database platform. This means that you can use Storm in conjunction with your existing transaction management mechanism, whether it’s JPA or JDBC.
When Data Sources are used in a Spring application, the transaction management is handled by Spring. You can use the
@Transactional
annotation to manage transactions in your Spring application. Storm will then automatically participate
in the Spring-managed transactions.
Storm’s sessionless design means that it does not maintain any internal state or session. Each operation is stateless and independent, allowing for a clean and efficient interaction with the database. This design choice simplifies the programming model and reduces the complexity associated with managing transactions.
Note: Spring’s transaction management also works without the storm-spring
dependency, as this dependency is only
needed for repository injection.
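As a sketch, a Spring service could combine a Storm repository with declarative transactions; the service class and method below are hypothetical:

```java
@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Transactional
    public void register(User user) {
        // Runs inside the Spring-managed transaction; Storm participates
        // automatically because it operates on the same DataSource.
        userRepository.insert(user);
    }
}
```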
Storm supports batch processing, allowing you to execute multiple database operations in a single batch. This can significantly improve performance when dealing with large datasets or multiple insert/update/delete operations. Batch processing is particularly useful when you need to perform bulk operations, such as inserting or updating a large number of records.
To use batch processing, you can use the out-of-the-box insert
, update
, and delete
methods provided by the
EntityRepository
interface. These methods can be used to perform batch operations on entities. The batch size can be
configured to control the number of operations executed in a single batch.
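As a sketch, inserting a large collection of entities could then look as follows; it is assumed here that the insert method accepts a list and executes the statements in batches:

```java
List<User> newUsers = ...;  // potentially many thousands of records
// A single call; Storm batches the underlying insert statements.
userRepository.insert(newUsers);
```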
Storm supports streaming, allowing you to process large datasets efficiently without loading them entirely into memory. This is particularly useful when dealing with large result sets or when you need to process data in a memory-efficient manner. Streaming allows you to retrieve and process records one at a time, reducing memory consumption and improving performance.
The out-of-the-box repository methods that query the entire table return a stream of results. The QueryBuilder interface also provides a getResultStream method that returns a stream of results for the specified query and can be used as a drop-in replacement for the getResultList method.
Note: Streams must be closed after use to release any resources associated with them. This can be done using the
try-with-resources
statement or by explicitly closing the stream in a finally
block.
The following example demonstrates how to use streaming to process a large dataset without loading it entirely into memory:
- ORM

  try (Stream<User> users = userRepository.selectAll()) {
      List<Integer> userIds = users.map(User::id).toList();
      ...
  }
The example uses the selectAll method to retrieve a stream of User records. The stream is then processed using Java’s stream API to extract the user IDs. The try-with-resources statement ensures that the stream is closed automatically after use. This approach allows you to convert the stream to a list of user IDs without loading all User records into memory at once.
Storm supports upsert processing, allowing you to insert or update records in a single operation. This is particularly useful when you need to ensure that a record exists in the database, and if it does not, it should be inserted. If it already exists, it should be updated. This can help reduce the number of database operations and improve performance. It also allows you to let the database handle the logic of determining whether to insert or update a record.
To use upsert processing, you can use the upsert
method provided by the EntityRepository
interface. This method
can be used to perform upsert operations on entities. The upsert method will automatically determine whether to insert
or update the record based on its existence in the database.
The following example demonstrates how to use upsert processing to insert or update a user record in the database:
- ORM

  City city = ...;
  User user = userRepository.upsertAndFetch(User.builder()
      .email("colin@acme.com")
      .birthDate(LocalDate.of(2019, 1, 28))
      .street("243 Acalanes Dr.")
      .postalCode("94086")
      .city(city)
      .build());
The example uses Lombok’s @Builder
annotation to create a new User
object for readability. The upsert logic is
invoked by passing an object without a primary key. The upsertAndFetch
method will automatically determine whether
to insert or update the record. The resulting User
object will contain the values read from the database, including
the primary key. An alternative upsert
method is also available to perform the operation without fetching the record
from the database.
Note: Upsert logic is implemented using the underlying database platform’s capabilities. This means that the correct database dialect must be provided to support upsert operations. Storm supports various database dialects, including Oracle, MySQL, PostgreSQL, and MS SQL Server.
Include the appropriate dialect dependency for your database to fully utilize the capabilities of the underlying database system in a platform-independent manner. To use Storm with Oracle, include the following dependency:
- Maven

  <dependency>
      <groupId>st.orm</groupId>
      <artifactId>storm-oracle</artifactId>
      <version>1.3.2</version>
      <scope>runtime</scope>
  </dependency>

- Gradle

  runtimeOnly 'st.orm:storm-oracle:1.3.2'
Replace storm-oracle
with storm-mysql
, storm-mariadb
, storm-postgresql
, or storm-mssqlserver
to use Storm with
the respective database system.
The static metamodel feature provides type-safe access to entity attributes at compile time, reducing the risk of runtime errors. To generate a static metamodel for your entities, include the following dependency:
- Maven

  <dependency>
      <groupId>st.orm</groupId>
      <artifactId>storm-metamodel-processor</artifactId>
      <version>1.3.2</version>
      <scope>provided</scope>
  </dependency>

- Gradle

  annotationProcessor 'st.orm:storm-metamodel-processor:1.3.2'
The metamodel is used to access attributes in the entity in a type-safe manner. For example, to access the email
attribute of the User
entity, use the User_.email
field:
- ORM

  String email = ...;
  List<User> users = userRepository
      .select()
      .where(User_.email, EQUALS, email)
      .getResultList();

- Hybrid

  List<User> users = userRepository
      .select()
      .where(RAW."\{User_.email} = \{email}")
      .getResultList();
The metamodel can be used to access attributes of the entire entity graph. The example below demonstrates how to specify the city name of the city associated with the user:
- ORM

  List<User> users = userRepository
      .select()
      .where(User_.city.name, EQUALS, "Sunnyvale")
      .getResultList();

- Hybrid

  List<User> users = userRepository
      .select()
      .where(RAW."\{User_.city.name} = \{"Sunnyvale"}")
      .getResultList();
JSON is supported as a first-class citizen. Include the following dependency to enable JSON support:
- Maven

  <dependency>
      <groupId>st.orm</groupId>
      <artifactId>storm-json</artifactId>
      <version>1.3.2</version>
      <scope>compile</scope>
  </dependency>

- Gradle

  implementation 'st.orm:storm-json:1.3.2'
The following example demonstrates how to combine a regular query with a many-to-many relationship using JSON aggregation. It shows how JSON can efficiently aggregate related entities into a single query, avoiding multiple database calls.
The example defines a simple entity Role
and a record RolesByUser
to represent query results. The getUserRoles
method in the UserRepository
interface illustrates how to fetch users along with their associated roles as JSON
objects, utilizing a combination of joins and JSON aggregation:
- ORM

  interface UserRepository extends EntityRepository<User> {

      record RolesByUser(User user, @Json List<Role> roles) {}

      default List<RolesByUser> getUserRoles() {
          return select(RolesByUser.class, RAW."\{User.class}, JSON_OBJECTAGG(\{Role.class})")
              .innerJoin(UserRole.class).on(User.class)
              .groupBy(User_.id)
              .getResultList();
      }
  }
Note: This approach is suitable for mappings with a moderate size. For larger datasets or extensive mappings, it’s advisable to split queries into two separate parts: one to retrieve the main entities, and another to fetch their related entities. This strategy can help maintain optimal performance and manageability.
public record User(@PK Integer id,
String email,
LocalDate birthDate,
@Json Map<String, String> address
) implements Entity<User, Integer> {}
Another way to use JSON is to map a database column containing JSON content to a Java Map. In the following example, the JSON address field is automatically converted to a map with the keys 'street', 'postalCode' and 'city', given that the address column contains data in the following format: { "street": "243 Acalanes Dr.", "postalCode": "94086", "city": "Sunnyvale" }
- ORM

  public interface UserRepository extends EntityRepository<User> {
      // Nothing to do here. The @Json annotation takes care of the conversion.
      // Select, insert, update, delete and upsert methods are inherited from EntityRepository.
  }
Spring Framework integration is straightforward. Include the following dependency to tie Storm into your Spring (Boot) application:
- Maven

  <dependency>
      <groupId>st.orm</groupId>
      <artifactId>storm-spring</artifactId>
      <version>1.3.2</version>
      <scope>compile</scope>
  </dependency>

- Gradle

  implementation 'st.orm:storm-spring:1.3.2'
The following example demonstrates how to configure the ORMTemplate
bean using a DataSource
.
- Spring

  @Configuration
  public class ORMTemplateConfiguration {

      private final DataSource dataSource;

      public ORMTemplateConfiguration(DataSource dataSource) {
          this.dataSource = dataSource;
      }

      @Bean
      public ORMTemplate ormTemplate() {
          return PreparedStatementTemplate.of(dataSource).toORM();
      }
  }
The repositories can be made available for dependency injection by extending the RepositoryBeanFactoryPostProcessor
class.
- Spring

  @Configuration
  public class AcmeRepositoryBeanFactoryPostProcessor extends RepositoryBeanFactoryPostProcessor {

      @Override
      public String[] getRepositoryBasePackages() {
          // Your repository package(s) go here.
          return new String[] { "com.acme.repository" };
      }
  }