Release v0.3.0 with soft deletes, timestamps, batch ops, pagination, transactions

New features:
- #[soft_delete] attribute with delete/restore/hard_delete methods
- #[created_at] auto-set on insert (milliseconds timestamp)
- #[updated_at] auto-set on every update (milliseconds timestamp)
- insert_many(&pool, &[entities]) for batch inserts
- upsert(&pool) / insert_or_update(&pool) for ON CONFLICT handling
- Page<T> struct with paginate() method for pagination
- find_partial() for selecting specific columns
- transaction! macro for ergonomic transaction handling
- PageRequest struct with offset/limit helpers

Technical changes:
- Added pagination.rs and transaction.rs modules
- Extended EntityField with is_soft_delete, is_created_at, is_updated_at
- Added generate_soft_delete_impl for delete/restore/hard_delete methods
- Upsert uses ON DUPLICATE KEY UPDATE (MySQL), ON CONFLICT DO UPDATE (Postgres/SQLite)
- Index hints supported in pagination and find_partial (MySQL)
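The backend-specific upsert statements mentioned above can be sketched as plain string assembly. This is an illustrative standalone sketch, not the macro's actual codegen: table and column names are hypothetical, and the real crate quotes identifiers and abstracts placeholders per backend.

```rust
// Hedged sketch: assemble the upsert statement described in the notes above.
// MySQL uses ON DUPLICATE KEY UPDATE with VALUES(col); Postgres/SQLite use
// ON CONFLICT (pk) DO UPDATE SET col = excluded.col.
fn upsert_sql(backend: &str, table: &str, cols: &[&str], pk: &str) -> String {
    let col_list = cols.join(", ");
    // MySQL binds with `?`; Postgres (and SQLite, in this sketch) with `$N`.
    let placeholders: String = (1..=cols.len())
        .map(|i| if backend == "mysql" { "?".to_string() } else { format!("${}", i) })
        .collect::<Vec<_>>()
        .join(", ");
    // Every non-PK column is overwritten with the incoming row's value.
    let update: String = cols
        .iter()
        .filter(|c| **c != pk)
        .map(|c| {
            if backend == "mysql" {
                format!("{} = VALUES({})", c, c)
            } else {
                format!("{} = excluded.{}", c, c)
            }
        })
        .collect::<Vec<_>>()
        .join(", ");
    if backend == "mysql" {
        format!(
            "INSERT INTO {} ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}",
            table, col_list, placeholders, update
        )
    } else {
        format!(
            "INSERT INTO {} ({}) VALUES ({}) ON CONFLICT ({}) DO UPDATE SET {}",
            table, col_list, placeholders, pk, update
        )
    }
}
```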

All three database backends (MySQL, PostgreSQL, SQLite) tested and working.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Michael Netshipise 2026-01-28 16:36:24 +02:00
parent b1052ac271
commit 7e7815eee6
10 changed files with 588 additions and 23 deletions


@@ -17,8 +17,11 @@ sqlx-record/
│ ├── lib.rs # Public API exports, prelude, lookup macros, new_uuid
│ ├── models.rs # EntityChange struct, Action enum
│ ├── repositories.rs # Database query functions for entity changes
│ ├── value.rs # Type-safe Value enum, bind functions
│ ├── value.rs # Type-safe Value enum, UpdateExpr, bind functions
│ ├── filter.rs # Filter enum for query conditions
│ ├── conn_provider.rs # ConnProvider for flexible connection management
│ ├── pagination.rs # Page<T> and PageRequest structs
│ ├── transaction.rs # transaction! macro
│ └── helpers.rs # Utility macros
├── sqlx-record-derive/ # Procedural macro crate
│ └── src/
@@ -206,6 +209,15 @@ struct User {
#[field_type("BIGINT")] // SQLx type hint
count: i64,
#[soft_delete] // Enables delete/restore/hard_delete methods
is_deleted: bool,
#[created_at] // Auto-set on insert (milliseconds)
created_at: i64,
#[updated_at] // Auto-set on update (milliseconds)
updated_at: i64,
}
```
@@ -213,6 +225,9 @@ struct User {
**Insert:**
- `insert(&pool) -> Result<PkType, Error>`
- `insert_many(&pool, &[entities]) -> Result<Vec<PkType>, Error>` - Batch insert
- `upsert(&pool) -> Result<PkType, Error>` - Insert or update on PK conflict
- `insert_or_update(&pool) -> Result<PkType, Error>` - Alias for upsert
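The multi-row `VALUES` list that `insert_many` builds (one placeholder group per entity, with numbering continuing across rows) can be sketched standalone. This assumes Postgres-style `$N` placeholders; the crate's `placeholder()` helper abstracts over backends.

```rust
// Hedged sketch of insert_many's placeholder construction: for `rows`
// entities of `cols` fields each, emit "($1, $2, ...), ($k, ...)" with a
// single counter running across all rows.
fn batch_placeholders(rows: usize, cols: usize) -> String {
    let mut idx = 1usize;
    let mut groups = Vec::with_capacity(rows);
    for _ in 0..rows {
        let group: Vec<String> = (0..cols)
            .map(|_| {
                let p = format!("${}", idx);
                idx += 1;
                p
            })
            .collect();
        groups.push(format!("({})", group.join(", ")));
    }
    groups.join(", ")
}
```

An empty slice of entities produces an empty list, mirroring the early return in the generated method.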
**Get:**
- `get_by_{pk}(&pool, &pk) -> Result<Option<Self>, Error>`
@@ -225,6 +240,8 @@ struct User {
- `find_ordered(&pool, filters, index, order_by) -> Result<Vec<Self>, Error>`
- `find_ordered_with_limit(&pool, filters, index, order_by, offset_limit) -> Result<Vec<Self>, Error>`
- `count(&pool, filters, index) -> Result<u64, Error>`
- `paginate(&pool, filters, index, order_by, page_request) -> Result<Page<Self>, Error>`
- `find_partial(&pool, &[fields], filters, index) -> Result<Vec<Row>, Error>` - Select specific columns
**Update:**
- `update(&self, &pool, form) -> Result<(), Error>`
@@ -252,6 +269,51 @@ struct User {
- `get_version(&pool, &pk) -> Result<Option<VersionType>, Error>`
- `get_versions(&pool, &[pk]) -> Result<HashMap<PkType, VersionType>, Error>`
**Soft Delete (if #[soft_delete] field exists):**
- `delete(&pool) -> Result<(), Error>` - Sets soft_delete to true
- `delete_by_{pk}(&pool, &pk) -> Result<(), Error>`
- `hard_delete(&pool) -> Result<(), Error>` - Permanently removes row
- `hard_delete_by_{pk}(&pool, &pk) -> Result<(), Error>`
- `restore(&pool) -> Result<(), Error>` - Sets soft_delete to false
- `restore_by_{pk}(&pool, &pk) -> Result<(), Error>`
- `soft_delete_field() -> &'static str` - Returns field name
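The statements these methods issue can be sketched as follows (illustrative only: the generated code quotes table names and uses backend-specific placeholders, shown here as `?`; table and column names are hypothetical):

```rust
// Hedged sketch of the SQL behind the soft-delete method family.
fn soft_delete_sql(table: &str, sd_col: &str, pk_col: &str) -> [String; 3] {
    [
        // delete / delete_by_{pk}: flag the row as deleted.
        format!("UPDATE {} SET {} = TRUE WHERE {} = ?", table, sd_col, pk_col),
        // restore / restore_by_{pk}: clear the flag.
        format!("UPDATE {} SET {} = FALSE WHERE {} = ?", table, sd_col, pk_col),
        // hard_delete / hard_delete_by_{pk}: remove the row permanently.
        format!("DELETE FROM {} WHERE {} = ?", table, pk_col),
    ]
}
```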
## Pagination
```rust
use sqlx_record::prelude::*;
// Create page request (1-indexed pages)
let page_request = PageRequest::new(1, 20); // page 1, 20 items
// Get paginated results
let page = User::paginate(&pool, filters![], None, vec![("name", true)], page_request).await?;
// Page<T> properties
page.items // Vec<T> - items for this page
page.total_count // u64 - total matching records
page.page // u32 - current page (1-indexed)
page.page_size // u32 - items per page
page.total_pages() // u32 - calculated total pages
page.has_next() // bool
page.has_prev() // bool
page.is_empty() // bool
page.len() // usize - items on this page
```
## Transaction Helper
```rust
use sqlx_record::transaction;
// Automatically commits on success, rolls back on error
let result = transaction!(&pool, |tx| {
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id)
}).await?;
```
## Filter API
```rust


@@ -1,6 +1,6 @@
[package]
name = "sqlx-record"
version = "0.2.0"
version = "0.3.0"
edition = "2021"
description = "Entity CRUD and change tracking for SQL databases with SQLx"


@@ -1,6 +1,6 @@
[package]
name = "sqlx-record-mcp"
version = "0.2.0"
version = "0.3.0"
edition = "2021"
description = "MCP server providing sqlx-record documentation and code generation"


@@ -34,17 +34,22 @@ struct JsonRpcError {
// Documentation Content
// ============================================================================
const OVERVIEW: &str = r#"# sqlx-record v0.2.0
const OVERVIEW: &str = r#"# sqlx-record v0.3.0
A Rust library providing derive macros for automatic CRUD operations and comprehensive audit trails for SQL entities. Supports MySQL, PostgreSQL, and SQLite via SQLx.
## Features
- **Derive Macros**: `#[derive(Entity)]` generates 40+ methods for CRUD operations
- **Derive Macros**: `#[derive(Entity)]` generates 50+ methods for CRUD operations
- **Multi-Database**: MySQL, PostgreSQL, SQLite with unified API
- **Audit Trails**: Track who changed what, when, and why
- **Type-Safe Filters**: Composable query building with `Filter` enum
- **UpdateExpr**: Advanced updates with arithmetic, CASE/WHEN, conditionals
- **Soft Deletes**: `#[soft_delete]` with delete/restore/hard_delete methods
- **Auto Timestamps**: `#[created_at]`, `#[updated_at]` auto-populated
- **Batch Operations**: `insert_many()`, `upsert()` for efficient bulk operations
- **Pagination**: `Page<T>` with `paginate()` method
- **Transaction Helper**: `transaction!` macro for ergonomic transactions
- **Lookup Tables**: Macros for code/enum generation
- **ConnProvider**: Flexible connection management (borrowed or pooled)
- **Time-Ordered UUIDs**: Better database indexing


@@ -1,6 +1,6 @@
[package]
name = "sqlx-record-ctl"
version = "0.2.0"
version = "0.3.0"
edition = "2021"
description = "CLI tool for managing sqlx-record audit tables"


@@ -1,6 +1,6 @@
[package]
name = "sqlx-record-derive"
version = "0.2.0"
version = "0.3.0"
edition = "2021"
description = "Derive macros for sqlx-record"


@@ -17,6 +17,9 @@ struct EntityField {
type_override: Option<String>,
is_primary_key: bool,
is_version_field: bool,
is_soft_delete: bool,
is_created_at: bool,
is_updated_at: bool,
}
/// Parse a string attribute that can be either:
@@ -46,7 +49,7 @@ pub fn derive_update(input: TokenStream) -> TokenStream {
derive_entity_internal(input)
}
#[proc_macro_derive(Entity, attributes(rename, table_name, primary_key, version, field_type))]
#[proc_macro_derive(Entity, attributes(rename, table_name, primary_key, version, field_type, soft_delete, created_at, updated_at))]
pub fn derive_entity(input: TokenStream) -> TokenStream {
derive_entity_internal(input)
}
@@ -117,17 +120,30 @@ fn derive_entity_internal(input: TokenStream) -> TokenStream {
.or_else(|| fields.iter().find(|f| f.ident == "id" || f.ident == "code"))
.expect("Struct must have a primary key field, either explicitly specified or named 'id' or 'code'");
let (has_created_at, has_updated_at) = check_timestamp_fields(&fields);
// Check for timestamp fields - either by attribute or by name
let has_created_at = fields.iter().any(|f| f.is_created_at) ||
fields.iter().any(|f| f.ident == "created_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
let has_updated_at = fields.iter().any(|f| f.is_updated_at) ||
fields.iter().any(|f| f.ident == "updated_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
let version_field = fields.iter()
.find(|f| f.is_version_field)
.or_else(|| fields.iter().find(|&f| is_version_field(f)));
// Find soft delete field (by attribute or by name convention)
let soft_delete_field = fields.iter()
.find(|f| f.is_soft_delete)
.or_else(|| fields.iter().find(|f| {
(f.ident == "is_deleted" || f.ident == "deleted") &&
matches!(&f.ty, Type::Path(p) if p.path.is_ident("bool"))
}));
// Generate all implementations
let insert_impl = generate_insert_impl(&name, &table_name, primary_key, &fields, has_created_at, has_updated_at, &impl_generics, &ty_generics, &where_clause);
let get_impl = generate_get_impl(&name, &table_name, primary_key, version_field, &fields, &impl_generics, &ty_generics, &where_clause);
let update_impl = generate_update_impl(&name, &update_form_name, &table_name, &fields, primary_key, version_field, &impl_generics, &ty_generics, &where_clause);
let insert_impl = generate_insert_impl(&name, &table_name, primary_key, &fields, has_created_at, has_updated_at, &impl_generics, &ty_generics, &where_clause);
let get_impl = generate_get_impl(&name, &table_name, primary_key, version_field, soft_delete_field, &fields, &impl_generics, &ty_generics, &where_clause);
let update_impl = generate_update_impl(&name, &update_form_name, &table_name, &fields, primary_key, version_field, has_updated_at, &impl_generics, &ty_generics, &where_clause);
let diff_impl = generate_diff_impl(&name, &update_form_name, &fields, primary_key, version_field, &impl_generics, &ty_generics, &where_clause);
let soft_delete_impl = generate_soft_delete_impl(&name, &table_name, primary_key, soft_delete_field, &impl_generics, &ty_generics, &where_clause);
let pk_type = &primary_key.ty;
let pk_field_name = &primary_key.ident;
@@ -137,6 +153,7 @@ fn derive_entity_internal(input: TokenStream) -> TokenStream {
#get_impl
#update_impl
#diff_impl
#soft_delete_impl
impl #impl_generics #name #ty_generics #where_clause {
pub const fn table_name() -> &'static str {
@@ -200,6 +217,12 @@ fn parse_fields(input: &DeriveInput) -> Vec<EntityField> {
.any(|attr| attr.path().is_ident("primary_key"));
let is_version_field = field.attrs.iter()
.any(|attr| attr.path().is_ident("version"));
let is_soft_delete = field.attrs.iter()
.any(|attr| attr.path().is_ident("soft_delete"));
let is_created_at = field.attrs.iter()
.any(|attr| attr.path().is_ident("created_at"));
let is_updated_at = field.attrs.iter()
.any(|attr| attr.path().is_ident("updated_at"));
EntityField {
ident,
@@ -209,6 +232,9 @@ fn parse_fields(input: &DeriveInput) -> Vec<EntityField> {
type_override,
is_primary_key,
is_version_field,
is_soft_delete,
is_created_at,
is_updated_at,
}
}).collect()
}
@@ -216,16 +242,6 @@ fn parse_fields(input: &DeriveInput) -> Vec<EntityField> {
}
}
fn check_timestamp_fields(fields: &[EntityField]) -> (bool, bool) {
let has_created_at = fields.iter()
.any(|f| f.ident == "created_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
let has_updated_at = fields.iter()
.any(|f| f.ident == "updated_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
(has_created_at, has_updated_at)
}
fn is_version_field(f: &EntityField) -> bool {
f.ident == "version" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("u64") ||
p.path.is_ident("u32") || p.path.is_ident("i64") || p.path.is_ident("i32"))
@@ -244,8 +260,10 @@ fn generate_insert_impl(
where_clause: &Option<&WhereClause>,
) -> TokenStream2 {
let db_names: Vec<_> = fields.iter().map(|f| &f.db_name).collect();
let field_idents: Vec<_> = fields.iter().map(|f| &f.ident).collect();
let tq = table_quote();
let db = db_type();
let pk_db_name = &primary_key.db_name;
let bindings: Vec<_> = fields.iter().map(|f| {
let ident = &f.ident;
@@ -282,6 +300,127 @@ fn generate_insert_impl(
Ok(self.#pk_field.clone())
}
/// Insert multiple entities in a single statement
pub async fn insert_many<'a, E>(executor: E, entities: &[Self]) -> Result<Vec<#pk_type>, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
Self: Clone,
{
if entities.is_empty() {
return Ok(vec![]);
}
let field_count = #field_count;
let mut placeholders = Vec::with_capacity(entities.len());
let mut current_idx = 1usize;
for _ in entities {
let row_placeholders: String = (0..field_count)
.map(|_| {
let ph = ::sqlx_record::prelude::placeholder(current_idx);
current_idx += 1;
ph
})
.collect::<Vec<_>>()
.join(", ");
placeholders.push(format!("({})", row_placeholders));
}
let insert_stmt = format!(
"INSERT INTO {}{}{} ({}) VALUES {}",
#tq, #table_name, #tq,
vec![#(#db_names),*].join(", "),
placeholders.join(", ")
);
let mut query = sqlx::query(&insert_stmt);
for entity in entities {
#(query = query.bind(&entity.#field_idents);)*
}
query.execute(executor).await?;
Ok(entities.iter().map(|e| e.#pk_field.clone()).collect())
}
/// Insert or update on primary key conflict (upsert)
pub async fn upsert<'a, E>(&self, executor: E) -> Result<#pk_type, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
let placeholders: String = (1..=#field_count)
.map(|i| ::sqlx_record::prelude::placeholder(i))
.collect::<Vec<_>>()
.join(", ");
let non_pk_fields: Vec<&str> = vec![#(#db_names),*]
.into_iter()
.filter(|f| *f != #pk_db_name)
.collect();
#[cfg(feature = "mysql")]
let upsert_stmt = {
let update_clause = non_pk_fields.iter()
.map(|f| format!("{} = VALUES({})", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}",
#tq, #table_name, #tq,
vec![#(#db_names),*].join(", "),
placeholders,
update_clause
)
};
#[cfg(feature = "postgres")]
let upsert_stmt = {
let update_clause = non_pk_fields.iter()
.map(|f| format!("{} = EXCLUDED.{}", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON CONFLICT ({}) DO UPDATE SET {}",
#tq, #table_name, #tq,
vec![#(#db_names),*].join(", "),
placeholders,
#pk_db_name,
update_clause
)
};
#[cfg(feature = "sqlite")]
let upsert_stmt = {
let update_clause = non_pk_fields.iter()
.map(|f| format!("{} = excluded.{}", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON CONFLICT({}) DO UPDATE SET {}",
#tq, #table_name, #tq,
vec![#(#db_names),*].join(", "),
placeholders,
#pk_db_name,
update_clause
)
};
sqlx::query(&upsert_stmt)
#(.bind(#bindings))*
.execute(executor)
.await?;
Ok(self.#pk_field.clone())
}
/// Alias for upsert
pub async fn insert_or_update<'a, E>(&self, executor: E) -> Result<#pk_type, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
self.upsert(executor).await
}
}
}
}
@@ -315,6 +454,7 @@ fn generate_get_impl(
table_name: &str,
primary_key: &EntityField,
version_field: Option<&EntityField>,
_soft_delete_field: Option<&EntityField>, // Reserved for future auto-filtering
fields: &[EntityField],
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
@@ -725,6 +865,90 @@ fn generate_get_impl(
Ok(count)
}
/// Paginate results with total count
pub async fn paginate<'a, E>(
executor: E,
filters: Vec<::sqlx_record::prelude::Filter<'a>>,
index: Option<&str>,
order_by: Vec<(&str, bool)>,
page_request: ::sqlx_record::prelude::PageRequest,
) -> Result<::sqlx_record::prelude::Page<Self>, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db> + Copy,
{
// Get total count first
let total_count = Self::count(executor, filters.clone(), index).await?;
// Get page items
let items = Self::find_ordered_with_limit(
executor,
filters,
index,
order_by,
Some((page_request.offset(), page_request.limit())),
).await?;
Ok(::sqlx_record::prelude::Page::new(
items,
total_count,
page_request.page,
page_request.page_size,
))
}
/// Select specific fields only (returns raw rows)
/// Use `sqlx::Row` trait to access fields: `row.try_get::<String, _>("name")?`
pub async fn find_partial<'a, E>(
executor: E,
select_fields: &[&str],
filters: Vec<::sqlx_record::prelude::Filter<'a>>,
index: Option<&str>,
) -> Result<Vec<<#db as sqlx::Database>::Row>, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
use ::sqlx_record::prelude::{Filter, bind_values};
// Validate fields exist
let valid_fields: ::std::collections::HashSet<_> = Self::select_fields().into_iter().collect();
let selected: Vec<_> = select_fields.iter()
.filter(|f| valid_fields.contains(*f))
.copied()
.collect();
if selected.is_empty() {
return Ok(vec![]);
}
let (where_conditions, values) = Filter::build_where_clause(&filters);
let where_clause = if !where_conditions.is_empty() {
format!("WHERE {}", where_conditions)
} else {
String::new()
};
// Index hints are MySQL-specific
#[cfg(feature = "mysql")]
let index_clause = index
.map(|idx| format!("USE INDEX ({})", idx))
.unwrap_or_default();
#[cfg(not(feature = "mysql"))]
let index_clause = { let _ = index; String::new() };
let query = format!(
"SELECT DISTINCT {} FROM {}{}{} {} {}",
selected.join(", "),
#tq, #table_name, #tq,
index_clause,
where_clause,
);
let db_query = sqlx::query(&query);
bind_values(db_query, &values)
.fetch_all(executor)
.await
}
}
}
}
@@ -745,6 +969,7 @@ fn generate_update_impl(
fields: &[EntityField],
primary_key: &EntityField,
version_field: Option<&EntityField>,
has_updated_at: bool,
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
where_clause: &Option<&WhereClause>,
@@ -833,6 +1058,17 @@ fn generate_update_impl(
quote! {}
};
// Auto-update updated_at timestamp
let updated_at_increment = if has_updated_at {
quote! {
parts.push(format!("updated_at = {}", ::sqlx_record::prelude::placeholder(idx)));
values.push(::sqlx_record::prelude::Value::Int64(chrono::Utc::now().timestamp_millis()));
idx += 1;
}
} else {
quote! {}
};
quote! {
/// Update form with support for simple value updates and complex expressions
pub struct #update_form_name #ty_generics #where_clause {
@@ -909,6 +1145,7 @@ fn generate_update_impl(
)*
#version_increment
#updated_at_increment
(parts.join(", "), values)
}
@@ -1222,3 +1459,107 @@ fn generate_diff_impl(
}
}
}
// Generate soft delete implementation
fn generate_soft_delete_impl(
name: &Ident,
table_name: &str,
primary_key: &EntityField,
soft_delete_field: Option<&EntityField>,
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
where_clause: &Option<&WhereClause>,
) -> TokenStream2 {
let Some(sd_field) = soft_delete_field else {
return quote! {};
};
let pk_field = &primary_key.ident;
let pk_type = &primary_key.ty;
let pk_db_name = &primary_key.db_name;
let sd_db_name = &sd_field.db_name;
let db = db_type();
let tq = table_quote();
let pk_field_name = primary_key.ident.to_string();
let delete_by_func = format_ident!("delete_by_{}", pk_field_name);
let hard_delete_by_func = format_ident!("hard_delete_by_{}", pk_field_name);
let restore_by_func = format_ident!("restore_by_{}", pk_field_name);
quote! {
impl #impl_generics #name #ty_generics #where_clause {
/// Soft delete - sets the soft_delete field to true
pub async fn delete<'a, E>(&self, executor: E) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
Self::#delete_by_func(executor, &self.#pk_field).await
}
/// Soft delete by primary key
pub async fn #delete_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
let query = format!(
"UPDATE {}{}{} SET {} = TRUE WHERE {} = {}",
#tq, #table_name, #tq,
#sd_db_name,
#pk_db_name, ::sqlx_record::prelude::placeholder(1)
);
sqlx::query(&query).bind(#pk_field).execute(executor).await?;
Ok(())
}
/// Hard delete - permanently removes the row from database
pub async fn hard_delete<'a, E>(&self, executor: E) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
Self::#hard_delete_by_func(executor, &self.#pk_field).await
}
/// Hard delete by primary key
pub async fn #hard_delete_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
let query = format!(
"DELETE FROM {}{}{} WHERE {} = {}",
#tq, #table_name, #tq,
#pk_db_name, ::sqlx_record::prelude::placeholder(1)
);
sqlx::query(&query).bind(#pk_field).execute(executor).await?;
Ok(())
}
/// Restore a soft-deleted record
pub async fn restore<'a, E>(&self, executor: E) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
Self::#restore_by_func(executor, &self.#pk_field).await
}
/// Restore by primary key
pub async fn #restore_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
let query = format!(
"UPDATE {}{}{} SET {} = FALSE WHERE {} = {}",
#tq, #table_name, #tq,
#sd_db_name,
#pk_db_name, ::sqlx_record::prelude::placeholder(1)
);
sqlx::query(&query).bind(#pk_field).execute(executor).await?;
Ok(())
}
/// Get the soft delete field name
pub const fn soft_delete_field() -> &'static str {
#sd_db_name
}
}
}
}


@@ -8,6 +8,11 @@ mod helpers;
mod value;
mod filter;
mod conn_provider;
mod pagination;
mod transaction;
pub use pagination::{Page, PageRequest};
// transaction! macro is exported via #[macro_export] in transaction.rs
// Re-export the sqlx_record_derive module on feature flag
#[cfg(feature = "derive")]
@@ -174,7 +179,8 @@ pub mod prelude {
pub use crate::{filter_or, filter_and, filters, update_entity_func};
pub use crate::{filter_or as or, filter_and as and};
pub use crate::values;
pub use crate::{new_uuid, lookup_table, lookup_options};
pub use crate::{new_uuid, lookup_table, lookup_options, transaction};
pub use crate::pagination::{Page, PageRequest};
#[cfg(any(feature = "mysql", feature = "postgres", feature = "sqlite"))]
pub use crate::conn_provider::ConnProvider;

src/pagination.rs (new file, 108 lines)

@@ -0,0 +1,108 @@
/// Paginated result container
#[derive(Debug, Clone)]
pub struct Page<T> {
/// Items for this page
pub items: Vec<T>,
/// Total count of all matching items
pub total_count: u64,
/// Current page number (1-indexed)
pub page: u32,
/// Items per page
pub page_size: u32,
}
impl<T> Page<T> {
pub fn new(items: Vec<T>, total_count: u64, page: u32, page_size: u32) -> Self {
Self { items, total_count, page, page_size }
}
/// Total number of pages
pub fn total_pages(&self) -> u32 {
if self.page_size == 0 {
return 0;
}
((self.total_count as f64) / (self.page_size as f64)).ceil() as u32
}
/// Whether there is a next page
pub fn has_next(&self) -> bool {
self.page < self.total_pages()
}
/// Whether there is a previous page
pub fn has_prev(&self) -> bool {
self.page > 1
}
/// Check if page is empty
pub fn is_empty(&self) -> bool {
self.items.is_empty()
}
/// Number of items on this page
pub fn len(&self) -> usize {
self.items.len()
}
/// Map items to a different type
pub fn map<U, F: FnMut(T) -> U>(self, f: F) -> Page<U> {
Page {
items: self.items.into_iter().map(f).collect(),
total_count: self.total_count,
page: self.page,
page_size: self.page_size,
}
}
/// Iterator over items
pub fn iter(&self) -> impl Iterator<Item = &T> {
self.items.iter()
}
/// Take ownership of items
pub fn into_items(self) -> Vec<T> {
self.items
}
}
impl<T> IntoIterator for Page<T> {
type Item = T;
type IntoIter = std::vec::IntoIter<T>;
fn into_iter(self) -> Self::IntoIter {
self.items.into_iter()
}
}
/// Pagination request options
#[derive(Debug, Clone, Default)]
pub struct PageRequest {
/// Page number (1-indexed, minimum 1)
pub page: u32,
/// Items per page
pub page_size: u32,
}
impl PageRequest {
pub fn new(page: u32, page_size: u32) -> Self {
Self {
page: page.max(1),
page_size,
}
}
/// Calculate SQL OFFSET (0-indexed)
pub fn offset(&self) -> u32 {
if self.page <= 1 { 0 } else { (self.page - 1) * self.page_size }
}
/// Calculate SQL LIMIT
pub fn limit(&self) -> u32 {
self.page_size
}
/// First page
pub fn first(page_size: u32) -> Self {
Self::new(1, page_size)
}
}
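The pagination arithmetic above, restated as a standalone sketch for clarity: pages are 1-indexed, `offset` is 0-indexed, and `total_pages` rounds up (integer form of the same ceiling division the float `ceil` computes).

```rust
// Hedged re-statement of PageRequest::offset and Page::total_pages.
fn offset(page: u32, page_size: u32) -> u32 {
    // Page 1 (and the clamped page 0) start at offset 0.
    if page <= 1 { 0 } else { (page - 1) * page_size }
}

fn total_pages(total_count: u64, page_size: u32) -> u32 {
    if page_size == 0 {
        return 0; // Guard against division by zero, as in the source.
    }
    // Ceiling division without floats: add (divisor - 1) before dividing.
    ((total_count + page_size as u64 - 1) / page_size as u64) as u32
}
```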

src/transaction.rs (new file, 43 lines)

@@ -0,0 +1,43 @@
/// Transaction macro for ergonomic transaction handling.
///
/// Automatically commits on success, rolls back on error.
///
/// # Example
/// ```ignore
/// use sqlx_record::transaction;
///
/// let result = transaction!(&pool, |tx| {
/// user.insert(&mut *tx).await?;
/// order.insert(&mut *tx).await?;
/// Ok::<_, sqlx::Error>(order.id)
/// }).await?;
/// ```
#[macro_export]
macro_rules! transaction {
($pool:expr, |$tx:ident| $body:expr) => {{
async {
let mut $tx = $pool.begin().await?;
let result: Result<_, sqlx::Error> = async { $body }.await;
match result {
Ok(value) => {
$tx.commit().await?;
Ok(value)
}
Err(e) => {
// Rollback happens automatically on tx drop, but we can be explicit
let _ = $tx.rollback().await;
Err(e)
}
}
}
}};
}
#[cfg(test)]
mod tests {
#[test]
fn test_macro_compiles() {
// Just verify the macro syntax is valid
let _ = stringify!(transaction!(&pool, |tx| { Ok::<_, sqlx::Error>(()) }));
}
}