Compare commits


10 Commits
v0.2.0 ... main

Author SHA1 Message Date
Michael Netshipise 2ea8a63ae4 consolidated skills 2026-02-06 06:09:44 +02:00
Michael Netshipise 6ed2401be1 fix: use Value-based binding in UpdateForm for proper Option<T> handling
When UpdateForm wraps fields that are already Option<T>, it creates
nested Options (Option<Option<T>>). The old bind_form_values method
bound these directly as &Option<T>, which caused MySQL "malformed packet"
errors for Uuid -> BINARY(16) conversions.

Now both bind_form_values and bind_all_values use update_stmt_with_values()
which properly converts values through the Value enum:
- Some(None) -> Value::Null
- Some(Some(v)) -> Value::T(v)

This preserves the three-state semantics:
- None: don't include field in UPDATE
- Some(None): SET column = NULL
- Some(Some(v)): SET column = value

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 18:13:07 +02:00
Michael Netshipise a1464d3f7c Add update_by_filter for bulk updates by filter conditions
Usage:
  User::update_by_filter(&pool, filters![("status", "pending")], form).await?;

- Requires at least one filter to prevent accidental table-wide updates
- Returns number of affected rows
- Binds form values first, then filter values

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 20:55:10 +02:00
Michael Netshipise 3815913821 Fix remaining issues from bug report
- Rename static-validation to static-check
- Fix upsert_stmt undefined when no database feature enabled
- Fix bind_all_values lifetime to use explicit 'q instead of '_

Decimal support requires enabling the 'decimal' feature flag.
Manual UpdateForm implementations need to add _exprs field.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:18:13 +02:00
Michael Netshipise 0913091b67 Fix static-validation to use correct database placeholders
- Add static_placeholder() function that uses $1 for postgres, ? for mysql/sqlite
- Restore static-validation feature with proper database-specific SQL
- cfg!(feature = "static-validation") now works correctly with query_as! macro

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:07:49 +02:00
Michael Netshipise 3c0ae1983f Fix MySQL placeholder issue and add missing Value types
- Remove broken static-validation feature (hardcoded $1 placeholders)
- Add Value::Null variant for Option<T> support
- Add From<Option<T>> impl for all Value types
- Add f32, f64, NaiveTime, serde_json::Value support
- Add optional decimal feature for rust_decimal::Decimal
- All database backends now use runtime placeholder() function

Fixes issues:
- MySQL getting PostgreSQL $1 placeholders
- Missing From<Option<T>> implementations
- Missing base types (Decimal, JsonValue, NaiveTime, floats)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:01:31 +02:00
Michael Netshipise ceeecf2e5c Bump version to 0.3.2
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:23:39 +02:00
Michael Netshipise 6c56231003 Use workspace version, add clap CLI to MCP server
- Define version in workspace.package, inherit in all crates
- Rename MCP binary from sqlx-record-expert to sqlx-record-mcp
- Add clap for --version and --help support

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:19:14 +02:00
Michael Netshipise 44ac78d67e Update docs, MCP server, and skills for v0.3.0 features
- Add skill docs for batch ops, pagination, soft delete, transactions
- Update sqlx-entity.md with new attributes and methods
- Update sqlx-record.md with quick reference for all features
- Update MCP server with new feature documentation resources
- Fix .gitignore paths for renamed directories

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 16:44:57 +02:00
Michael Netshipise f785bb1bf6 Release v0.3.0 with soft deletes, timestamps, batch ops, pagination, transactions
New features:
- #[soft_delete] attribute with delete/restore/hard_delete methods
- #[created_at] auto-set on insert (milliseconds timestamp)
- #[updated_at] auto-set on every update (milliseconds timestamp)
- insert_many(&pool, &[entities]) for batch inserts
- upsert(&pool) / insert_or_update(&pool) for ON CONFLICT handling
- Page<T> struct with paginate() method for pagination
- find_partial() for selecting specific columns
- transaction! macro for ergonomic transaction handling
- PageRequest struct with offset/limit helpers

Technical changes:
- Added pagination.rs and transaction.rs modules
- Extended EntityField with is_soft_delete, is_created_at, is_updated_at
- Added generate_soft_delete_impl for delete/restore/hard_delete methods
- Upsert uses ON DUPLICATE KEY UPDATE (MySQL), ON CONFLICT DO UPDATE (Postgres/SQLite)
- Index hints supported in pagination and find_partial (MySQL)

All three database backends (MySQL, PostgreSQL, SQLite) tested and working.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 16:36:24 +02:00
26 changed files with 2344 additions and 151 deletions

View File

@ -0,0 +1,196 @@
# sqlx-record Batch Operations Skill
Guide to insert_many() and upsert() for efficient bulk operations.
## Triggers
- "batch insert", "bulk insert"
- "insert many", "insert_many"
- "upsert", "insert or update"
- "on conflict", "on duplicate key"
## Overview
`sqlx-record` provides efficient batch operations:
- `insert_many()` - Insert multiple records in a single query
- `upsert()` - Insert or update on primary key conflict
## insert_many()
Insert multiple entities in a single SQL statement:
```rust
pub async fn insert_many(executor, entities: &[Self]) -> Result<Vec<PkType>, Error>
```
### Usage
```rust
use sqlx_record::prelude::*;
let users = vec![
User { id: new_uuid(), name: "Alice".into(), email: "alice@example.com".into() },
User { id: new_uuid(), name: "Bob".into(), email: "bob@example.com".into() },
User { id: new_uuid(), name: "Carol".into(), email: "carol@example.com".into() },
];
// Insert all in single query
let ids = User::insert_many(&pool, &users).await?;
println!("Inserted {} users", ids.len());
```
### SQL Generated
```sql
-- MySQL
INSERT INTO users (id, name, email) VALUES (?, ?, ?), (?, ?, ?), (?, ?, ?)
-- PostgreSQL
INSERT INTO users (id, name, email) VALUES ($1, $2, $3), ($4, $5, $6), ($7, $8, $9)
-- SQLite
INSERT INTO users (id, name, email) VALUES (?, ?, ?), (?, ?, ?), (?, ?, ?)
```
### Benefits
- Single round-trip to database
- Much faster than N individual inserts
- Atomic - all succeed or all fail
### Limitations
- Entity must implement `Clone` (for collecting PKs)
- Empty slice returns empty vec without database call
- Very large batches may hit database limits (split into chunks if needed)
### Chunked Insert
For very large datasets:
```rust
const BATCH_SIZE: usize = 1000;
async fn insert_large_dataset(pool: &Pool, users: Vec<User>) -> Result<Vec<Uuid>, sqlx::Error> {
let mut all_ids = Vec::with_capacity(users.len());
for chunk in users.chunks(BATCH_SIZE) {
let ids = User::insert_many(pool, chunk).await?;
all_ids.extend(ids);
}
Ok(all_ids)
}
```
## upsert() / insert_or_update()
Insert a new record, or update if primary key already exists:
```rust
pub async fn upsert(&self, executor) -> Result<PkType, Error>
pub async fn insert_or_update(&self, executor) -> Result<PkType, Error> // alias
```
### Usage
```rust
let user = User {
id: existing_or_new_id,
name: "Alice".into(),
email: "alice@example.com".into(),
};
// Insert if new, update if exists
user.upsert(&pool).await?;
// Or using alias
user.insert_or_update(&pool).await?;
```
### SQL Generated
```sql
-- MySQL
INSERT INTO users (id, name, email) VALUES (?, ?, ?)
ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email)
-- PostgreSQL
INSERT INTO users (id, name, email) VALUES ($1, $2, $3)
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, email = EXCLUDED.email
-- SQLite
INSERT INTO users (id, name, email) VALUES (?, ?, ?)
ON CONFLICT(id) DO UPDATE SET name = excluded.name, email = excluded.email
```
### Use Cases
1. **Sync external data**: Import data that may already exist
2. **Idempotent operations**: Safe to retry without duplicates
3. **Cache refresh**: Update cached records atomically
### Examples
#### Sync Products
```rust
async fn sync_products(pool: &Pool, external_products: Vec<ExternalProduct>) -> Result<(), sqlx::Error> {
for ext in external_products {
let product = Product {
id: ext.id, // Use external ID as PK
name: ext.name,
price: ext.price,
updated_at: chrono::Utc::now().timestamp_millis(),
};
product.upsert(pool).await?;
}
Ok(())
}
```
#### Idempotent Event Processing
```rust
async fn process_event(pool: &Pool, event: Event) -> Result<(), sqlx::Error> {
let record = ProcessedEvent {
id: event.id, // Event ID as PK - prevents duplicates
event_type: event.event_type,
payload: event.payload,
processed_at: chrono::Utc::now().timestamp_millis(),
};
// Safe to call multiple times - won't create duplicates
record.upsert(pool).await?;
Ok(())
}
```
#### With Transaction
```rust
use sqlx_record::transaction;
transaction!(&pool, |tx| {
// Upsert multiple records atomically
for item in items {
item.upsert(&mut *tx).await?;
}
Ok::<_, sqlx::Error>(())
}).await?;
```
## Comparison
| Operation | Behavior on Existing PK | SQL Efficiency |
|-----------|------------------------|----------------|
| `insert()` | Error (duplicate key) | Single row |
| `insert_many()` | Error (duplicate key) | Multiple rows, single query |
| `upsert()` | Updates all non-PK fields | Single row |
## Notes
- `upsert()` updates ALL non-PK fields, not just changed ones
- Primary key must be properly indexed (usually automatic)
- For partial updates, use `insert()` + `update_by_id()` with an existence/conflict check (see the sketch below)
- `insert_many()` requires all entities have unique PKs among themselves
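A minimal sketch of that insert-or-update-selectively pattern, assuming the generated `get_by_id()` / `update_by_id()` methods shown above and a hypothetical per-field UpdateForm setter (`.name(...)` here; the real setter names follow your struct's fields):
```rust
// Hedged sketch: only the columns set on the form are written on the update
// path, unlike upsert() which rewrites every non-PK column.
async fn insert_or_touch_name(pool: &Pool, user: User) -> Result<(), sqlx::Error> {
    match User::get_by_id(pool, &user.id).await? {
        // Row already exists: update only the name column.
        Some(_) => {
            User::update_by_id(pool, &user.id, User::update_form().name(user.name.clone())).await?;
        }
        // Row is new: insert the whole entity.
        None => {
            user.insert(pool).await?;
        }
    }
    Ok(())
}
```
Note that the check-then-write pair is not atomic; wrap it in the `transaction!` macro (or retry on a duplicate-key error) if concurrent writers are possible.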

View File

@ -6,12 +6,14 @@ Guide to flexible connection management.
- "connection provider", "conn provider" - "connection provider", "conn provider"
- "borrow connection", "pool connection" - "borrow connection", "pool connection"
- "lazy connection", "connection management" - "lazy connection", "connection management"
- "transaction provider", "use transaction"
## Overview ## Overview
`ConnProvider` enables flexible connection handling: `ConnProvider` enables flexible connection handling:
- **Borrowed**: Use an existing connection reference - **Borrowed**: Use an existing connection reference
- **Owned**: Lazily acquire from pool on first use - **Owned**: Lazily acquire from pool on first use
- **Transaction**: Use a transaction reference (all operations participate in the transaction)
## Enum Variants ## Enum Variants
@ -26,6 +28,10 @@ pub enum ConnProvider<'a> {
pool: Pool, pool: Pool,
conn: Option<PoolConnection<DB>>, conn: Option<PoolConnection<DB>>,
}, },
/// Reference to a transaction
Transaction {
tx: &'a mut Transaction<'static, DB>,
},
} }
``` ```
@ -45,15 +51,29 @@ let mut provider = ConnProvider::from_pool(pool.clone());
// Connection acquired on first get_conn() call // Connection acquired on first get_conn() call
``` ```
### from_tx
Use a transaction (all operations participate in the transaction):
```rust
let mut tx = pool.begin().await?;
let mut provider = ConnProvider::from_tx(&mut tx);
// All operations through provider use the transaction
do_work(&mut provider).await?;
// You must commit/rollback the transaction yourself
tx.commit().await?;
```
## Getting the Connection ## Getting the Connection
```rust ```rust
let conn = provider.get_conn().await?; let conn = provider.get_conn().await?;
// Returns &mut PoolConnection<DB> // Returns &mut <DB>Connection (e.g., &mut MySqlConnection)
``` ```
- **Borrowed**: Returns reference immediately - **Borrowed**: Returns underlying connection immediately
- **Owned**: Acquires on first call, returns same connection on subsequent calls - **Owned**: Acquires on first call, returns same connection on subsequent calls
- **Transaction**: Returns transaction's underlying connection
## Use Cases ## Use Cases
@ -105,15 +125,37 @@ let mut conn = pool.acquire().await?;
do_database_work(&mut ConnProvider::from_ref(&mut conn)).await?; do_database_work(&mut ConnProvider::from_ref(&mut conn)).await?;
// Call with pool // Call with pool
do_database_work(&mut ConnProvider::from_pool(pool)).await?; do_database_work(&mut ConnProvider::from_pool(pool.clone())).await?;
// Call with transaction
let mut tx = pool.begin().await?;
do_database_work(&mut ConnProvider::from_tx(&mut tx)).await?;
tx.commit().await?;
``` ```
### Transaction-like Patterns ### Using Transactions
```rust
async fn transactional_operation(pool: MySqlPool) -> Result<()> {
let mut tx = pool.begin().await?;
let mut provider = ConnProvider::from_tx(&mut tx);
// All operations participate in the transaction
step_1(&mut provider).await?;
step_2(&mut provider).await?;
step_3(&mut provider).await?;
// Commit (or rollback on error)
tx.commit().await?;
Ok(())
}
```
### Same Connection Pattern
```rust ```rust
async fn multi_step_operation(pool: MySqlPool) -> Result<()> { async fn multi_step_operation(pool: MySqlPool) -> Result<()> {
let mut provider = ConnProvider::from_pool(pool); let mut provider = ConnProvider::from_pool(pool);
// All operations use same connection // All operations use same connection (but no transaction)
step_1(&mut provider).await?; step_1(&mut provider).await?;
step_2(&mut provider).await?; step_2(&mut provider).await?;
step_3(&mut provider).await?; step_3(&mut provider).await?;
@ -127,11 +169,13 @@ async fn multi_step_operation(pool: MySqlPool) -> Result<()> {
The concrete types depend on the enabled feature: The concrete types depend on the enabled feature:
| Feature | Pool Type | Connection Type | | Feature | Pool Type | Connection Type | Transaction Type |
|---------|-----------|-----------------| |---------|-----------|-----------------|------------------|
| `mysql` | `MySqlPool` | `PoolConnection<MySql>` | | `mysql` | `MySqlPool` | `MySqlConnection` | `Transaction<'static, MySql>` |
| `postgres` | `PgPool` | `PoolConnection<Postgres>` | | `postgres` | `PgPool` | `PgConnection` | `Transaction<'static, Postgres>` |
| `sqlite` | `SqlitePool` | `PoolConnection<Sqlite>` | | `sqlite` | `SqlitePool` | `SqliteConnection` | `Transaction<'static, Sqlite>` |
Note: `get_conn()` returns `&mut <DB>Connection` (the underlying connection type).
## Example: Service Layer ## Example: Service Layer
@ -175,28 +219,28 @@ let user_id = UserService::create_with_profile(&mut provider, "Alice", "Hello!")
## Connection Lifecycle ## Connection Lifecycle
``` ```
from_pool(pool) from_ref(&mut conn) from_pool(pool) from_ref(&mut conn) from_tx(&mut tx)
│ │ │ │
▼ ▼ ▼ ▼
Owned { Borrowed { Owned { Borrowed { Transaction {
pool, conn: &mut PoolConnection pool, conn: &mut tx: &mut
conn: None } conn: None PoolConnection Transaction
} } } }
│ │ │ │
│ get_conn() │ get_conn() │ get_conn() │ get_conn() │ get_conn()
▼ ▼ ▼ ▼
pool.acquire() return conn pool.acquire() deref conn deref tx
│ │ │ │
▼ ▼
Owned { Owned { return &mut return &mut
pool, pool, Connection Connection
conn: Some(acquired) │ conn: Some(acquired) │
} │ } │
│ │ │ │
│ get_conn() (subsequent) │ │ get_conn() (subsequent) │
▼ ▼
return &mut acquired │ return &mut conn Drop: nothing Drop: nothing
(borrowed) (tx managed
externally)
Drop: conn returned Drop: nothing (borrowed) Drop: conn returned
``` ```

View File

@ -56,11 +56,47 @@ large_count: i64,
- SQLx type hint for compile-time validation
- Adds type annotation in SELECT: `field as "field: TYPE"`
### #[soft_delete]
```rust
#[soft_delete]
is_active: bool,
```
- Enables soft delete functionality
- Generates `soft_delete()`, `soft_delete_by_{pk}()`, `restore()`, `restore_by_{pk}()` methods
- Field must be `bool` type
- Convention: `is_active` fields are auto-detected (FALSE = deleted)
- With the `#[soft_delete]` attribute, the field is set to FALSE when the entity is deleted
### #[created_at]
```rust
#[created_at]
created_at: i64,
```
- Auto-set to current timestamp (milliseconds) on insert
- Field must be `i64` type
- Excluded from UpdateForm
### #[updated_at]
```rust
#[updated_at]
updated_at: i64,
```
- Auto-set to current timestamp (milliseconds) on every update
- Field must be `i64` type
- Excluded from UpdateForm
## Generated Methods
### Insert
```rust
pub async fn insert<E>(&self, executor: E) -> Result<PkType, sqlx::Error>
// Batch insert
pub async fn insert_many(executor, entities: &[Self]) -> Result<Vec<PkType>, Error>
// Insert or update on conflict
pub async fn upsert(&self, executor) -> Result<PkType, Error>
pub async fn insert_or_update(&self, executor) -> Result<PkType, Error> // alias
```
### Get Methods
@ -102,6 +138,23 @@ pub async fn find_ordered_with_limit(
// Count matching
pub async fn count(executor, filters: Vec<Filter>, index: Option<&str>) -> Result<u64, Error>
// Paginated results
pub async fn paginate(
executor,
filters: Vec<Filter>,
index: Option<&str>,
order_by: Vec<(&str, bool)>,
page_request: PageRequest
) -> Result<Page<Self>, Error>
// Select specific columns only
pub async fn find_partial(
executor,
select_fields: &[&str],
filters: Vec<Filter>,
index: Option<&str>
) -> Result<Vec<Row>, Error>
```
### Update Methods
@ -143,6 +196,27 @@ pub async fn get_version(executor, pk: &PkType) -> Result<Option<VersionType>, E
pub async fn get_versions(executor, pks: &[PkType]) -> Result<HashMap<PkType, VersionType>, Error>
```
### Hard Delete (always generated)
```rust
// Permanently removes row from database
pub async fn hard_delete(&self, executor) -> Result<(), Error>
pub async fn hard_delete_by_id(executor, id: &Uuid) -> Result<(), Error>
```
### Soft Delete Methods (if `is_active` field or `#[soft_delete]` exists)
```rust
// Soft delete - marks as deleted (is_active = FALSE)
pub async fn soft_delete(&self, executor) -> Result<(), Error>
pub async fn soft_delete_by_id(executor, id: &Uuid) -> Result<(), Error>
// Restore - marks as active (is_active = TRUE)
pub async fn restore(&self, executor) -> Result<(), Error>
pub async fn restore_by_id(executor, id: &Uuid) -> Result<(), Error>
// Get field name
pub const fn soft_delete_field() -> &'static str
```
### Metadata Methods
```rust
pub const fn table_name() -> &'static str

View File

@ -0,0 +1,164 @@
# sqlx-record Pagination Skill
Guide to pagination with Page<T> and PageRequest.
## Triggers
- "pagination", "paginate"
- "page request", "page size"
- "total pages", "has next"
## Overview
`sqlx-record` provides built-in pagination support with the `Page<T>` container and `PageRequest` options.
## PageRequest
Create pagination options with 1-indexed page numbers:
```rust
use sqlx_record::prelude::PageRequest;
// Create request for page 1 with 20 items per page
let request = PageRequest::new(1, 20);
// First page shorthand
let request = PageRequest::first(20);
// Access offset/limit for manual queries
request.offset() // 0 for page 1, 20 for page 2, etc.
request.limit() // page_size
```
## Page<T>
Paginated results container:
```rust
use sqlx_record::prelude::Page;
// Properties
page.items // Vec<T> - items for this page
page.total_count // u64 - total records matching filters
page.page // u32 - current page (1-indexed)
page.page_size // u32 - items per page
// Computed methods
page.total_pages() // u32 - ceil(total_count / page_size)
page.has_next() // bool - page < total_pages
page.has_prev() // bool - page > 1
page.is_empty() // bool - items.is_empty()
page.len() // usize - items.len()
// Transformation
page.map(|item| transform(item)) // Page<U>
page.into_items() // Vec<T>
page.iter() // impl Iterator<Item = &T>
```
## Entity Paginate Method
Generated on all entities:
```rust
pub async fn paginate(
executor,
filters: Vec<Filter>,
index: Option<&str>, // MySQL index hint
order_by: Vec<(&str, bool)>, // (field, is_ascending)
page_request: PageRequest
) -> Result<Page<Self>, Error>
```
## Usage Examples
### Basic Pagination
```rust
use sqlx_record::prelude::*;
// Get first page of 20 users
let page = User::paginate(
&pool,
filters![],
None,
vec![("created_at", false)], // ORDER BY created_at DESC
PageRequest::new(1, 20)
).await?;
println!("Page {} of {}", page.page, page.total_pages());
println!("Showing {} of {} users", page.len(), page.total_count);
for user in page.iter() {
println!("{}: {}", user.id, user.name);
}
```
### With Filters
```rust
// Active users only, page 3
let page = User::paginate(
&pool,
filters![("is_active", true)],
None,
vec![("name", true)], // ORDER BY name ASC
PageRequest::new(3, 10)
).await?;
```
### With Index Hint (MySQL)
```rust
// Use specific index for performance
let page = User::paginate(
&pool,
filters![("status", "active")],
Some("idx_users_status"), // MySQL: USE INDEX(idx_users_status)
vec![("created_at", false)],
PageRequest::new(1, 50)
).await?;
```
### Navigation Logic
```rust
let page = User::paginate(&pool, filters![], None, vec![], PageRequest::new(current, 20)).await?;
if page.has_prev() {
println!("Previous: page {}", page.page - 1);
}
if page.has_next() {
println!("Next: page {}", page.page + 1);
}
```
### Transform Results
```rust
// Convert to DTOs
let dto_page: Page<UserDto> = page.map(|user| UserDto::from(user));
// Or consume items
let items: Vec<User> = page.into_items();
```
## Comparison with Manual Pagination
```rust
// Manual approach (still available)
let offset = (page_num - 1) * page_size;
let items = User::find_ordered_with_limit(
&pool, filters.clone(), None, order_by.clone(), Some((offset, page_size))
).await?;
let total = User::count(&pool, filters.clone(), None).await?;
// With paginate() - simpler
let page = User::paginate(&pool, filters, None, order_by, PageRequest::new(page_num, page_size)).await?;
```
## Notes
- Page numbers are 1-indexed (page 1 is first page)
- `paginate()` executes two queries: count + select
- For very large tables, consider cursor-based pagination instead (see the sketch below)
- Index hints only work on MySQL, ignored on Postgres/SQLite
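For the cursor-based alternative mentioned above, a minimal keyset-pagination sketch. It drops down to plain `sqlx` because the `filters!` examples in this guide only show equality conditions; the `users` table, the `created_at` column, and the MySQL pool type are assumptions:
```rust
// Fetch the page that comes after `cursor` (the created_at of the last row
// already shown), newest first. No OFFSET scan, so cost stays flat as you page.
async fn page_after(
    pool: &sqlx::MySqlPool,
    cursor: Option<i64>,
    page_size: u32,
) -> Result<Vec<User>, sqlx::Error> {
    let cursor = cursor.unwrap_or(i64::MAX);
    sqlx::query_as::<_, User>(
        "SELECT * FROM users WHERE created_at < ? ORDER BY created_at DESC LIMIT ?",
    )
    .bind(cursor)
    .bind(page_size)
    .fetch_all(pool)
    .await
}
```
If `created_at` values can collide, extend the cursor to `(created_at, id)` so rows are never skipped or repeated.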

View File

@ -117,11 +117,70 @@ let id = new_uuid(); // Timestamp prefix for better indexing
```toml
[dependencies]
- sqlx-record = { version = "0.2", features = ["mysql", "derive"] }
sqlx-record = { version = "0.3", features = ["mysql", "derive"] }
# Database: "mysql", "postgres", or "sqlite" (pick one)
# Optional: "derive", "static-validation"
```
## Delete, Soft Delete, Timestamps, Batch Operations
```rust
#[derive(Entity, FromRow)]
struct User {
#[primary_key] id: Uuid,
name: String,
is_active: bool, // Auto-detected for soft delete (is_active = FALSE when deleted)
#[created_at] // Auto-set on insert
created_at: i64,
#[updated_at] // Auto-set on update
updated_at: i64,
}
// Hard delete (always available on all entities)
user.hard_delete(&pool).await?; // DELETE FROM
User::hard_delete_by_id(&pool, &id).await?;
// Soft delete (when is_active or #[soft_delete] field exists)
user.soft_delete(&pool).await?; // is_active = false
User::soft_delete_by_id(&pool, &id).await?;
user.restore(&pool).await?; // is_active = true
// Batch insert
User::insert_many(&pool, &users).await?;
// Upsert (insert or update on conflict)
user.upsert(&pool).await?;
```
## Pagination
```rust
use sqlx_record::prelude::{Page, PageRequest};
let page = User::paginate(&pool, filters![], None,
vec![("name", true)], PageRequest::new(1, 20)).await?;
page.items // Vec<User>
page.total_count // Total matching records
page.total_pages() // Calculated pages
page.has_next() // bool
page.has_prev() // bool
```
## Transaction Helper
```rust
use sqlx_record::transaction;
transaction!(&pool, |tx| {
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(())
}).await?;
```
## Advanced Updates (UpdateExpr)
```rust
@ -149,11 +208,23 @@ User::update_by_id(&pool, &id,
## ConnProvider (Flexible Connections)
```rust
- use sqlx_record::ConnProvider;
- // Borrowed or owned pool connections
- let conn = ConnProvider::Borrowed(&pool);
- let users = User::find(&*conn, filters![], None).await?;
use sqlx_record::prelude::ConnProvider;
// From borrowed connection
let mut conn = pool.acquire().await?;
let mut provider = ConnProvider::from_ref(&mut conn);
// From pool (lazy acquisition)
let mut provider = ConnProvider::from_pool(pool.clone());
// From transaction (operations participate in the transaction)
let mut tx = pool.begin().await?;
let mut provider = ConnProvider::from_tx(&mut tx);
// ... use provider ...
tx.commit().await?;
// Get underlying connection
let conn = provider.get_conn().await?;
```
## Database Differences

View File

@ -0,0 +1,263 @@
# sqlx-record Delete & Soft Delete Skill
Guide to hard delete and soft delete functionality.
## Triggers
- "soft delete", "soft-delete"
- "hard delete", "permanent delete"
- "is_active", "is_deleted", "deleted"
- "restore", "undelete"
- "delete_by_id", "hard_delete_by_id"
## Hard Delete (Always Generated)
Every Entity gets `hard_delete()` and `hard_delete_by_{pk}()` methods. No configuration needed.
```rust
// Instance method
user.hard_delete(&pool).await?;
// Static method by primary key
User::hard_delete_by_id(&pool, &user_id).await?;
```
**SQL generated:**
```sql
DELETE FROM users WHERE id = ?
```
## Soft Delete
Marks records as deleted without removing them from the database. This enables:
- Recovery of accidentally deleted data
- Audit trails of deletions
- Referential integrity preservation
### Enabling Soft Delete
**Preferred: `is_active` convention** (auto-detected, no attribute needed):
```rust
use sqlx_record::prelude::*;
#[derive(Entity, FromRow)]
#[table_name = "users"]
struct User {
#[primary_key]
id: Uuid,
name: String,
is_active: bool, // Auto-detected: FALSE = deleted, TRUE = active
}
```
**Alternative: `#[soft_delete]` attribute** on any bool field:
```rust
#[derive(Entity, FromRow)]
#[table_name = "users"]
struct User {
#[primary_key]
id: Uuid,
name: String,
#[soft_delete] // Field will be FALSE when deleted
is_active: bool,
}
```
**Legacy: `is_deleted`/`deleted` fields** are also auto-detected (TRUE = deleted).
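A minimal sketch of that legacy convention (field and table names are illustrative); the generated methods then presumably write TRUE on `soft_delete()` and FALSE on `restore()`:
```rust
#[derive(Entity, FromRow)]
#[table_name = "orders"]
struct Order {
    #[primary_key]
    id: Uuid,
    total: i64,
    is_deleted: bool, // auto-detected: TRUE = deleted, FALSE = active
}
```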
### Detection Priority
1. Field with `#[soft_delete]` attribute (FALSE = deleted)
2. Field named `is_active` with bool type (FALSE = deleted)
3. Field named `is_deleted` or `deleted` with bool type (TRUE = deleted)
## Generated Methods
### soft_delete() / soft_delete_by_{pk}()
Marks the record as deleted:
```rust
// Instance method
user.soft_delete(&pool).await?;
// Static method by primary key
User::soft_delete_by_id(&pool, &user_id).await?;
```
**SQL generated (is_active convention):**
```sql
UPDATE users SET is_active = FALSE WHERE id = ?
```
### restore() / restore_by_{pk}()
Restores a soft-deleted record:
```rust
// Instance method
user.restore(&pool).await?;
// Static method by primary key
User::restore_by_id(&pool, &user_id).await?;
```
**SQL generated (is_active convention):**
```sql
UPDATE users SET is_active = TRUE WHERE id = ?
```
### soft_delete_field()
Returns the field name:
```rust
let field = User::soft_delete_field(); // "is_active"
```
## Filtering Deleted Records
Soft delete does **NOT** automatically filter `find()` queries. You must add the filter manually:
```rust
// Include only active (non-deleted)
let users = User::find(&pool, filters![("is_active", true)], None).await?;
// Include only deleted (trash view)
let deleted = User::find(&pool, filters![("is_active", false)], None).await?;
// Include all records
let all = User::find(&pool, filters![], None).await?;
```
### Helper Pattern
Create a helper function for consistent filtering:
```rust
impl User {
pub async fn find_active(
pool: &Pool,
mut filters: Vec<Filter<'_>>,
index: Option<&str>
) -> Result<Vec<Self>, sqlx::Error> {
filters.push(Filter::Equal("is_active", true.into()));
Self::find(pool, filters, index).await
}
}
// Usage
let users = User::find_active(&pool, filters![("role", "admin")], None).await?;
```
## Usage Examples
### Basic Flow
```rust
// Create user
let user = User {
id: new_uuid(),
name: "Alice".into(),
is_active: true,
};
user.insert(&pool).await?;
// Soft delete
user.soft_delete(&pool).await?;
// user still exists in DB with is_active = false
// Find won't return deleted users (with proper filter)
let users = User::find(&pool, filters![("is_active", true)], None).await?;
// Alice not in results
// Restore
User::restore_by_id(&pool, &user.id).await?;
// user.is_active = true again
// Hard delete (permanent)
User::hard_delete_by_id(&pool, &user.id).await?;
// Row completely removed from database
```
### With Audit Trail
```rust
use sqlx_record::{transaction, prelude::*};
async fn soft_delete_with_audit(
pool: &Pool,
user_id: &Uuid,
actor_id: &Uuid
) -> Result<(), sqlx::Error> {
transaction!(&pool, |tx| {
// Soft delete the user
User::soft_delete_by_id(&mut *tx, user_id).await?;
// Record the deletion
let change = EntityChange {
id: new_uuid(),
entity_id: *user_id,
action: "soft_delete".into(),
changed_at: chrono::Utc::now().timestamp_millis(),
actor_id: *actor_id,
session_id: Uuid::nil(),
change_set_id: Uuid::nil(),
new_value: None,
};
create_entity_change(&mut *tx, "entity_changes_users", &change).await?;
Ok::<_, sqlx::Error>(())
}).await
}
```
### Cascade Soft Delete
```rust
async fn delete_user_cascade(pool: &Pool, user_id: &Uuid) -> Result<(), sqlx::Error> {
transaction!(&pool, |tx| {
// Soft delete user's orders
let orders = Order::find(&mut *tx, filters![("user_id", user_id)], None).await?;
for order in orders {
order.soft_delete(&mut *tx).await?;
}
// Soft delete user
User::soft_delete_by_id(&mut *tx, user_id).await?;
Ok::<_, sqlx::Error>(())
}).await
}
```
## Database Schema
Recommended column definition:
```sql
-- MySQL
is_active BOOLEAN NOT NULL DEFAULT TRUE
-- PostgreSQL
is_active BOOLEAN NOT NULL DEFAULT TRUE
-- SQLite
is_active INTEGER NOT NULL DEFAULT 1 -- 1=true, 0=false
```
Add an index for efficient filtering:
```sql
CREATE INDEX idx_users_is_active ON users (is_active);
-- Or composite index for common queries
CREATE INDEX idx_users_active_name ON users (is_active, name);
```
## Notes
- Soft delete field must be `bool` type
- The field is included in UpdateForm (can be manually toggled)
- `hard_delete()` / `hard_delete_by_{pk}()` are always available, even on entities with soft delete
- Consider adding `deleted_at: Option<i64>` for deletion timestamps (see the sketch below)
- For complex filtering, consider database views
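For the `deleted_at` suggestion above, a minimal sketch; the generated `soft_delete_by_id()` only flips the flag, so the timestamp is written with a separate plain-`sqlx` statement here (column and table names are illustrative, and the two statements could be wrapped in the `transaction!` macro for atomicity):
```rust
#[derive(Entity, FromRow)]
#[table_name = "users"]
struct User {
    #[primary_key]
    id: Uuid,
    name: String,
    is_active: bool,         // soft-delete flag (FALSE = deleted)
    deleted_at: Option<i64>, // when the soft delete happened, if ever
}

async fn soft_delete_with_timestamp(pool: &Pool, id: &Uuid) -> Result<(), sqlx::Error> {
    // Flip the generated soft-delete flag first...
    User::soft_delete_by_id(pool, id).await?;
    // ...then record when it happened.
    sqlx::query("UPDATE users SET deleted_at = ? WHERE id = ?")
        .bind(chrono::Utc::now().timestamp_millis())
        .bind(id)
        .execute(pool)
        .await?;
    Ok(())
}
```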

View File

@ -0,0 +1,209 @@
# sqlx-record Transaction Skill
Guide to the transaction! macro for ergonomic transactions.
## Triggers
- "transaction", "transactions"
- "commit", "rollback"
- "atomic", "transactional"
## Overview
The `transaction!` macro provides ergonomic transaction handling with automatic commit on success and rollback on error.
## Basic Syntax
```rust
use sqlx_record::transaction;
let result = transaction!(&pool, |tx| {
// Operations using &mut *tx as executor
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id) // Return value type annotation
}).await?;
```
## Key Points
1. **Automatic commit**: Transaction commits if closure returns `Ok`
2. **Automatic rollback**: Transaction rolls back if closure returns `Err` or panics
3. **Return values**: The closure can return any value wrapped in `Result<T, sqlx::Error>`
4. **Executor access**: Use `&mut *tx` to pass the transaction as an executor
## Usage Examples
### Basic Transaction
```rust
use sqlx_record::{transaction, prelude::*};
async fn create_user_with_profile(pool: &Pool, user: User, profile: Profile) -> Result<Uuid, sqlx::Error> {
transaction!(&pool, |tx| {
let user_id = user.insert(&mut *tx).await?;
let mut profile = profile;
profile.user_id = user_id;
profile.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(user_id)
}).await
}
```
### Multiple Operations
```rust
async fn transfer_funds(
pool: &Pool,
from_id: &Uuid,
to_id: &Uuid,
amount: i64
) -> Result<(), sqlx::Error> {
transaction!(&pool, |tx| {
// Debit from source
Account::update_by_id(&mut *tx, from_id,
Account::update_form().eval_balance(UpdateExpr::Sub(amount.into()))
).await?;
// Credit to destination
Account::update_by_id(&mut *tx, to_id,
Account::update_form().eval_balance(UpdateExpr::Add(amount.into()))
).await?;
// Create transfer record
let transfer = Transfer {
id: new_uuid(),
from_account: *from_id,
to_account: *to_id,
amount,
created_at: chrono::Utc::now().timestamp_millis(),
};
transfer.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(())
}).await
}
```
### With Error Handling
```rust
async fn create_order(pool: &Pool, cart: Cart) -> Result<Order, AppError> {
transaction!(&pool, |tx| {
// Verify stock
for item in &cart.items {
let product = Product::get_by_id(&mut *tx, &item.product_id).await?
.ok_or(sqlx::Error::RowNotFound)?;
if product.stock < item.quantity {
return Err(sqlx::Error::Protocol("Insufficient stock".into()));
}
}
// Create order
let order = Order {
id: new_uuid(),
user_id: cart.user_id,
status: OrderStatus::PENDING.into(),
total: cart.total(),
created_at: chrono::Utc::now().timestamp_millis(),
};
order.insert(&mut *tx).await?;
// Create order items and decrement stock
for item in cart.items {
let order_item = OrderItem {
id: new_uuid(),
order_id: order.id,
product_id: item.product_id,
quantity: item.quantity,
price: item.price,
};
order_item.insert(&mut *tx).await?;
Product::update_by_id(&mut *tx, &item.product_id,
Product::update_form().eval_stock(UpdateExpr::Sub(item.quantity.into()))
).await?;
}
Ok::<_, sqlx::Error>(order)
}).await.map_err(AppError::from)
}
```
### Nested Operations (Not Nested Transactions)
```rust
// Helper function that accepts any executor
async fn create_audit_log<'a, E>(executor: E, action: &str, entity_id: Uuid) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = sqlx::MySql>,
{
let log = AuditLog {
id: new_uuid(),
action: action.into(),
entity_id,
created_at: chrono::Utc::now().timestamp_millis(),
};
log.insert(executor).await?;
Ok(())
}
// Use in transaction
transaction!(&pool, |tx| {
user.insert(&mut *tx).await?;
create_audit_log(&mut *tx, "user_created", user.id).await?;
Ok::<_, sqlx::Error>(())
}).await?;
```
## Type Annotation
The closure must have an explicit return type annotation:
```rust
// Correct - with type annotation
Ok::<_, sqlx::Error>(value)
// Also correct
Ok::<i32, sqlx::Error>(42)
// Incorrect - missing annotation (won't compile)
// Ok(value)
```
## Comparison with Manual Transactions
```rust
// Manual approach
let mut tx = pool.begin().await?;
match async {
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id)
}.await {
Ok(result) => {
tx.commit().await?;
Ok(result)
}
Err(e) => {
tx.rollback().await?;
Err(e)
}
}
// With transaction! macro - cleaner
transaction!(&pool, |tx| {
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id)
}).await
```
## Notes
- The macro works with all supported databases (MySQL, PostgreSQL, SQLite)
- Transactions use the pool's default isolation level
- For custom isolation levels, use sqlx's native transaction API (see the sketch below)
- The closure is async - use `.await` for all database operations
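For the custom-isolation note above, a minimal sketch using sqlx's native API (MySQL `SET TRANSACTION` syntax shown; adjust for your database):
```rust
use sqlx::Connection; // brings begin() into scope for a plain connection

async fn serializable_work(pool: &sqlx::MySqlPool) -> Result<(), sqlx::Error> {
    let mut conn = pool.acquire().await?;
    // Applies to the next transaction started on this same connection.
    sqlx::query("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
        .execute(&mut *conn)
        .await?;
    let mut tx = conn.begin().await?;
    // ... run your steps with &mut *tx as the executor ...
    tx.commit().await?;
    Ok(())
}
```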

.gitignore
View File

@ -1,5 +1,6 @@
/target
- /entity-update_derive/target
/sqlx-record-derive/target
- /entity-changes-ctl/target
/sqlx-record-ctl/target
/mcp/target
.idea
/Cargo.lock

View File

@ -17,8 +17,11 @@ sqlx-record/
│ ├── lib.rs # Public API exports, prelude, lookup macros, new_uuid
│ ├── models.rs # EntityChange struct, Action enum
│ ├── repositories.rs # Database query functions for entity changes
- │ ├── value.rs # Type-safe Value enum, bind functions
│ ├── value.rs # Type-safe Value enum, UpdateExpr, bind functions
│ ├── filter.rs # Filter enum for query conditions
│ ├── conn_provider.rs # ConnProvider for flexible connection management
│ ├── pagination.rs # Page<T> and PageRequest structs
│ ├── transaction.rs # transaction! macro
│ └── helpers.rs # Utility macros
├── sqlx-record-derive/ # Procedural macro crate
│ └── src/
@ -28,13 +31,14 @@ sqlx-record/
│ └── src/main.rs
├── mcp/ # MCP server for documentation/code generation
│ └── src/main.rs # sqlx-record-expert executable
- ├── .claude/skills/ # Claude Code skills documentation
├── .claude/skills/sqlx-record/ # Claude Code skills documentation
│ ├── sqlx-record.md # Overview and quick reference
│ ├── sqlx-entity.md # #[derive(Entity)] detailed guide
│ ├── sqlx-filters.md # Filter system guide
│ ├── sqlx-audit.md # Audit trail guide
│ ├── sqlx-lookup.md # Lookup tables guide
- │ └── sqlx-values.md # Value types guide
│ ├── sqlx-values.md # Value types guide
│ └── sqlx-conn-provider.md # Connection provider guide
└── Cargo.toml # Workspace root
```
@ -119,7 +123,7 @@ let id = new_uuid(); // Timestamp prefix (8 bytes) + random (8 bytes)
## Connection Provider
- Flexible connection management - borrow existing or lazily acquire from pool:
Flexible connection management - borrow an existing connection, lazily acquire from the pool, or use a transaction:
```rust
use sqlx_record::prelude::ConnProvider;
@ -130,6 +134,12 @@ let mut provider = ConnProvider::from_ref(&mut conn);
// From pool (lazy acquisition)
let mut provider = ConnProvider::from_pool(pool.clone());
// From transaction (operations participate in the transaction)
let mut tx = pool.begin().await?;
let mut provider = ConnProvider::from_tx(&mut tx);
// ... use provider ...
tx.commit().await?;
// Get connection (acquires on first call for Owned variant)
let conn = provider.get_conn().await?;
```
@ -206,6 +216,15 @@ struct User {
#[field_type("BIGINT")] // SQLx type hint
count: i64,
#[soft_delete] // Enables delete/restore/hard_delete methods
is_deleted: bool,
#[created_at] // Auto-set on insert (milliseconds)
created_at: i64,
#[updated_at] // Auto-set on update (milliseconds)
updated_at: i64,
}
```
@ -213,6 +232,9 @@ struct User {
**Insert:**
- `insert(&pool) -> Result<PkType, Error>`
- `insert_many(&pool, &[entities]) -> Result<Vec<PkType>, Error>` - Batch insert
- `upsert(&pool) -> Result<PkType, Error>` - Insert or update on PK conflict
- `insert_or_update(&pool) -> Result<PkType, Error>` - Alias for upsert
**Get:**
- `get_by_{pk}(&pool, &pk) -> Result<Option<Self>, Error>`
@ -225,6 +247,8 @@ struct User {
- `find_ordered(&pool, filters, index, order_by) -> Result<Vec<Self>, Error>`
- `find_ordered_with_limit(&pool, filters, index, order_by, offset_limit) -> Result<Vec<Self>, Error>`
- `count(&pool, filters, index) -> Result<u64, Error>`
- `paginate(&pool, filters, index, order_by, page_request) -> Result<Page<Self>, Error>`
- `find_partial(&pool, &[fields], filters, index) -> Result<Vec<Row>, Error>` - Select specific columns
**Update:**
- `update(&self, &pool, form) -> Result<(), Error>`
@ -252,6 +276,51 @@ struct User {
- `get_version(&pool, &pk) -> Result<Option<VersionType>, Error>`
- `get_versions(&pool, &[pk]) -> Result<HashMap<PkType, VersionType>, Error>`
**Soft Delete (if #[soft_delete] field exists):**
- `delete(&pool) -> Result<(), Error>` - Sets soft_delete to true
- `delete_by_{pk}(&pool, &pk) -> Result<(), Error>`
- `hard_delete(&pool) -> Result<(), Error>` - Permanently removes row
- `hard_delete_by_{pk}(&pool, &pk) -> Result<(), Error>`
- `restore(&pool) -> Result<(), Error>` - Sets soft_delete to false
- `restore_by_{pk}(&pool, &pk) -> Result<(), Error>`
- `soft_delete_field() -> &'static str` - Returns field name
## Pagination
```rust
use sqlx_record::prelude::*;
// Create page request (1-indexed pages)
let page_request = PageRequest::new(1, 20); // page 1, 20 items
// Get paginated results
let page = User::paginate(&pool, filters![], None, vec![("name", true)], page_request).await?;
// Page<T> properties
page.items // Vec<T> - items for this page
page.total_count // u64 - total matching records
page.page // u32 - current page (1-indexed)
page.page_size // u32 - items per page
page.total_pages() // u32 - calculated total pages
page.has_next() // bool
page.has_prev() // bool
page.is_empty() // bool
page.len() // usize - items on this page
```
## Transaction Helper
```rust
use sqlx_record::transaction;
// Automatically commits on success, rolls back on error
let result = transaction!(&pool, |tx| {
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id)
}).await?;
```
## Filter API
```rust

View File

@ -1,9 +1,13 @@
[package]
name = "sqlx-record"
- version = "0.2.0"
version.workspace = true
- edition = "2021"
edition.workspace = true
description = "Entity CRUD and change tracking for SQL databases with SQLx"
[workspace.package]
version = "0.3.7"
edition = "2021"
[dependencies]
sqlx-record-derive = { path = "sqlx-record-derive", optional = true }
sqlx = { version = "0.8", features = ["runtime-tokio", "uuid", "chrono", "json"] }
@ -12,6 +16,7 @@ uuid = { version = "1", features = ["v4"] }
chrono = "0.4"
rand = "0.8"
paste = "1.0"
rust_decimal = { version = "1", optional = true }
[workspace]
members = [
@ -23,7 +28,8 @@ members = [
[features]
default = []
derive = ["dep:sqlx-record-derive"]
- static-validation = ["sqlx-record-derive?/static-validation"]
static-check = ["sqlx-record-derive?/static-check"]
decimal = ["dep:rust_decimal", "sqlx/rust_decimal"]
# Database backends - user must enable at least one
mysql = ["sqlx/mysql", "sqlx-record-derive?/mysql"]

View File

@ -1,14 +1,15 @@
[package]
name = "sqlx-record-mcp"
- version = "0.2.0"
version.workspace = true
- edition = "2021"
edition.workspace = true
description = "MCP server providing sqlx-record documentation and code generation"
[[bin]]
- name = "sqlx-record-expert"
name = "sqlx-record-mcp"
path = "src/main.rs"
[dependencies]
clap = { version = "4", features = ["derive"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }

View File

@ -1,7 +1,13 @@
use clap::Parser;
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use std::io::{self, BufRead, Write};
#[derive(Parser)]
#[command(name = "sqlx-record-mcp")]
#[command(version, about = "MCP server for sqlx-record documentation and code generation")]
struct Args {}
// ============================================================================
// MCP Protocol Types
// ============================================================================
@ -34,17 +40,23 @@ struct JsonRpcError {
// Documentation Content
// ============================================================================
- const OVERVIEW: &str = r#"# sqlx-record v0.2.0
const OVERVIEW: &str = r#"# sqlx-record v0.3.0
A Rust library providing derive macros for automatic CRUD operations and comprehensive audit trails for SQL entities. Supports MySQL, PostgreSQL, and SQLite via SQLx.
## Features
- - **Derive Macros**: `#[derive(Entity)]` generates 40+ methods for CRUD operations
- **Derive Macros**: `#[derive(Entity)]` generates 50+ methods for CRUD operations
- **Multi-Database**: MySQL, PostgreSQL, SQLite with unified API
- **Audit Trails**: Track who changed what, when, and why
- **Type-Safe Filters**: Composable query building with `Filter` enum
- **UpdateExpr**: Advanced updates with arithmetic, CASE/WHEN, conditionals
- **Hard Delete**: `hard_delete_by_{pk}()` always generated for all entities
- **Soft Deletes**: `#[soft_delete]` or `is_active` convention with soft_delete/restore methods
- **Auto Timestamps**: `#[created_at]`, `#[updated_at]` auto-populated
- **Batch Operations**: `insert_many()`, `upsert()` for efficient bulk operations
- **Pagination**: `Page<T>` with `paginate()` method
- **Transaction Helper**: `transaction!` macro for ergonomic transactions
- **Lookup Tables**: Macros for code/enum generation
- **ConnProvider**: Flexible connection management (borrowed or pooled)
- **Time-Ordered UUIDs**: Better database indexing
@ -155,6 +167,21 @@ pub async fn update_by_ids(executor, ids: &[Uuid], form: UpdateForm) -> Result<(
pub fn update_form() -> UpdateForm
```
### Delete (always generated)
```rust
pub async fn hard_delete(&self, executor) -> Result<(), Error>
pub async fn hard_delete_by_id(executor, id: &Uuid) -> Result<(), Error>
```
### Soft Delete (if `is_active` field or `#[soft_delete]` exists)
```rust
pub async fn soft_delete(&self, executor) -> Result<(), Error>
pub async fn soft_delete_by_id(executor, id: &Uuid) -> Result<(), Error>
pub async fn restore(&self, executor) -> Result<(), Error>
pub async fn restore_by_id(executor, id: &Uuid) -> Result<(), Error>
pub const fn soft_delete_field() -> &'static str
```
### Diff (Change Detection)
```rust
pub fn model_diff(form: &UpdateForm, model: &Self) -> serde_json::Value
@ -1028,6 +1055,267 @@ let users = User::find(&*provider, filters![("active", true)], None).await?
```
"#;
const PAGINATION: &str = r#"# Pagination
Built-in pagination support with Page<T> and PageRequest.
## PageRequest
```rust
use sqlx_record::prelude::PageRequest;
// Create request (1-indexed pages)
let request = PageRequest::new(1, 20); // page 1, 20 items
// First page shorthand
let request = PageRequest::first(20);
// For manual queries
request.offset() // (page - 1) * page_size
request.limit() // page_size
```
## Page<T>
```rust
let page = User::paginate(&pool, filters, None, order_by, request).await?;
page.items // Vec<T>
page.total_count // u64
page.page // u32 (current page)
page.page_size // u32
page.total_pages() // ceil(total / page_size)
page.has_next() // page < total_pages
page.has_prev() // page > 1
page.is_empty() // items.is_empty()
page.len() // items.len()
page.map(|t| f(t)) // Page<U>
page.into_items() // Vec<T>
```
## Usage
```rust
let page = User::paginate(
&pool,
filters![("is_active", true)],
Some("idx_users"), // MySQL index hint
vec![("created_at", false)], // ORDER BY created_at DESC
PageRequest::new(1, 20)
).await?;
for user in page.iter() {
println!("{}", user.name);
}
if page.has_next() {
let next = User::paginate(&pool, filters, None, order, PageRequest::new(page.page + 1, 20)).await?;
}
```
"#;
const SOFT_DELETE: &str = r#"# Delete Methods
## Hard Delete (always generated)
Every Entity gets `hard_delete` and `hard_delete_by_{pk}` methods:
```rust
// Instance method
user.hard_delete(&pool).await?;
// Static method by primary key
User::hard_delete_by_id(&pool, &user_id).await?;
```
**SQL generated:**
```sql
DELETE FROM users WHERE id = ?
```
## Soft Delete
Mark records as deleted without removing from database.
### Enable
Convention: an `is_active` bool field is auto-detected (preferred):
```rust
#[derive(Entity, FromRow)]
struct User {
#[primary_key]
id: Uuid,
is_active: bool, // Auto-detected: FALSE = deleted
}
```
Or use `#[soft_delete]` on any bool field:
```rust
#[derive(Entity, FromRow)]
struct User {
#[primary_key]
id: Uuid,
#[soft_delete] // Field will be FALSE when deleted
is_active: bool,
}
```
Auto-detection also works for `is_deleted` or `deleted` bool fields (TRUE = deleted).
### Generated Methods
```rust
// Soft delete (set is_active = FALSE)
user.soft_delete(&pool).await?;
User::soft_delete_by_id(&pool, &id).await?;
// Restore (set is_active = TRUE)
user.restore(&pool).await?;
User::restore_by_id(&pool, &id).await?;
// Field name
User::soft_delete_field() // "is_active"
```
### Filtering
Soft delete does NOT auto-filter. Add filter manually:
```rust
// Only active (non-deleted)
let users = User::find(&pool, filters![("is_active", true)], None).await?;
// Only deleted
let deleted = User::find(&pool, filters![("is_active", false)], None).await?;
// All records
let all = User::find(&pool, filters![], None).await?;
```
"#;
const BATCH_OPS: &str = r#"# Batch Operations
Efficient bulk insert and upsert operations.
## insert_many()
Insert multiple entities in a single query:
```rust
let users = vec![
User { id: new_uuid(), name: "Alice".into() },
User { id: new_uuid(), name: "Bob".into() },
];
let ids = User::insert_many(&pool, &users).await?;
```
SQL: `INSERT INTO users (id, name) VALUES (?, ?), (?, ?)`
## upsert() / insert_or_update()
Insert or update on primary key conflict:
```rust
user.upsert(&pool).await?;
// or
user.insert_or_update(&pool).await?;
```
SQL (MySQL):
```sql
INSERT INTO users (id, name) VALUES (?, ?)
ON DUPLICATE KEY UPDATE name = VALUES(name)
```
SQL (PostgreSQL/SQLite):
```sql
INSERT INTO users (id, name) VALUES ($1, $2)
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
```
## Use Cases
```rust
// Sync external data
for product in external_products {
Product { id: product.id, name: product.name }.upsert(&pool).await?;
}
// Chunked batch insert
for chunk in users.chunks(1000) {
User::insert_many(&pool, chunk).await?;
}
```
"#;
const TRANSACTIONS: &str = r#"# Transactions
Ergonomic transaction handling with automatic commit/rollback.
## transaction! Macro
```rust
use sqlx_record::transaction;
let result = transaction!(&pool, |tx| {
user.insert(&mut *tx).await?;
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id)
}).await?;
```
- Commits on `Ok`
- Rolls back on `Err` or panic
- Use `&mut *tx` as executor
## Examples
### Transfer Funds
```rust
transaction!(&pool, |tx| {
Account::update_by_id(&mut *tx, &from_id,
Account::update_form().eval_balance(UpdateExpr::Sub(amount.into()))
).await?;
Account::update_by_id(&mut *tx, &to_id,
Account::update_form().eval_balance(UpdateExpr::Add(amount.into()))
).await?;
Ok::<_, sqlx::Error>(())
}).await?;
```
### With Return Value
```rust
let order_id = transaction!(&pool, |tx| {
let user = User { id: new_uuid(), name: "Alice".into() };
user.insert(&mut *tx).await?;
let order = Order { id: new_uuid(), user_id: user.id, total: 100 };
order.insert(&mut *tx).await?;
Ok::<_, sqlx::Error>(order.id)
}).await?;
```
## Type Annotation
Must include return type annotation:
```rust
Ok::<_, sqlx::Error>(value) // Correct
Ok::<i32, sqlx::Error>(42) // Also correct
```
"#;
const CLI_TOOL: &str = r#"# sqlx-record-ctl CLI
Command-line tool for managing audit tables.
@ -1483,7 +1771,7 @@ fn handle_list_tools() -> Value {
"properties": {
"feature": {
"type": "string",
- "enum": ["overview", "derive", "filters", "values", "lookup", "audit", "update_form", "update_expr", "conn_provider", "databases", "uuid", "cli", "examples"],
"enum": ["overview", "derive", "filters", "values", "lookup", "audit", "update_form", "update_expr", "conn_provider", "databases", "uuid", "cli", "examples", "pagination", "soft_delete", "batch_ops", "transactions"],
"description": "Feature to explain"
}
},
@ -1542,6 +1830,10 @@ fn handle_call_tool(params: &Value) -> Value {
"uuid" => NEW_UUID,
"cli" => CLI_TOOL,
"examples" => EXAMPLES,
"pagination" => PAGINATION,
"soft_delete" => SOFT_DELETE,
"batch_ops" => BATCH_OPS,
"transactions" => TRANSACTIONS,
_ => OVERVIEW,
};
json!({
@ -1641,6 +1933,30 @@ fn handle_list_resources() -> Value {
"name": "Examples",
"description": "Complete usage examples",
"mimeType": "text/markdown"
},
{
"uri": "sqlx-record://docs/pagination",
"name": "Pagination",
"description": "Page<T> and PageRequest for paginated queries",
"mimeType": "text/markdown"
},
{
"uri": "sqlx-record://docs/soft_delete",
"name": "Soft Delete",
"description": "#[soft_delete] attribute and delete/restore methods",
"mimeType": "text/markdown"
},
{
"uri": "sqlx-record://docs/batch_ops",
"name": "Batch Operations",
"description": "insert_many() and upsert() for bulk operations",
"mimeType": "text/markdown"
},
{
"uri": "sqlx-record://docs/transactions",
"name": "Transactions",
"description": "transaction! macro for ergonomic transactions",
"mimeType": "text/markdown"
}
]
})
@ -1663,6 +1979,10 @@ fn handle_read_resource(params: &Value) -> Value {
"sqlx-record://docs/uuid" => NEW_UUID,
"sqlx-record://docs/cli" => CLI_TOOL,
"sqlx-record://docs/examples" => EXAMPLES,
"sqlx-record://docs/pagination" => PAGINATION,
"sqlx-record://docs/soft_delete" => SOFT_DELETE,
"sqlx-record://docs/batch_ops" => BATCH_OPS,
"sqlx-record://docs/transactions" => TRANSACTIONS,
_ => "Resource not found", _ => "Resource not found",
}; };
@ -1680,6 +2000,8 @@ fn handle_read_resource(params: &Value) -> Value {
// ============================================================================ // ============================================================================
fn main() { fn main() {
let _args = Args::parse();
let stdin = io::stdin();
let mut stdout = io::stdout();

View File

@ -1,7 +1,7 @@
[package]
name = "sqlx-record-ctl"
version.workspace = true
edition.workspace = true
description = "CLI tool for managing sqlx-record audit tables"
[dependencies]

View File

@ -1,7 +1,7 @@
[package]
name = "sqlx-record-derive"
version.workspace = true
edition.workspace = true
description = "Derive macros for sqlx-record"
[dependencies]
@ -13,7 +13,7 @@ futures = "0.3"
[features]
default = []
static-check = []
mysql = []
postgres = []
sqlite = []

View File

@ -17,6 +17,9 @@ struct EntityField {
type_override: Option<String>,
is_primary_key: bool,
is_version_field: bool,
is_soft_delete: bool,
is_created_at: bool,
is_updated_at: bool,
}
/// Parse a string attribute that can be either:
@ -46,7 +49,7 @@ pub fn derive_update(input: TokenStream) -> TokenStream {
derive_entity_internal(input)
}
#[proc_macro_derive(Entity, attributes(rename, table_name, primary_key, version, field_type, soft_delete, created_at, updated_at))]
pub fn derive_entity(input: TokenStream) -> TokenStream {
derive_entity_internal(input)
}
@ -59,7 +62,7 @@ fn db_type() -> TokenStream2 {
}
#[cfg(feature = "sqlite")]
{
return quote! { sqlx::Sqlite };
}
#[cfg(feature = "mysql")]
{
@ -79,7 +82,7 @@ fn db_arguments() -> TokenStream2 {
}
#[cfg(feature = "sqlite")]
{
return quote! { sqlx::sqlite::SqliteArguments<'q> };
}
#[cfg(feature = "mysql")]
{
@ -96,13 +99,21 @@ fn table_quote() -> &'static str {
#[cfg(feature = "postgres")] #[cfg(feature = "postgres")]
{ "\"" } { "\"" }
#[cfg(feature = "sqlite")] #[cfg(feature = "sqlite")]
{ "\"" } { return "\""; }
#[cfg(feature = "mysql")] #[cfg(feature = "mysql")]
{ "`" } { "`" }
#[cfg(not(any(feature = "mysql", feature = "postgres", feature = "sqlite")))] #[cfg(not(any(feature = "mysql", feature = "postgres", feature = "sqlite")))]
{ "`" } { "`" }
} }
/// Get compile-time placeholder for static-check SQL
fn static_placeholder(index: usize) -> String {
#[cfg(feature = "postgres")]
{ format!("${}", index) }
#[cfg(not(feature = "postgres"))]
{ let _ = index; "?".to_string() }
}
fn derive_entity_internal(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
let name = &input.ident;
@ -117,17 +128,32 @@ fn derive_entity_internal(input: TokenStream) -> TokenStream {
.or_else(|| fields.iter().find(|f| f.ident == "id" || f.ident == "code")) .or_else(|| fields.iter().find(|f| f.ident == "id" || f.ident == "code"))
.expect("Struct must have a primary key field, either explicitly specified or named 'id' or 'code'"); .expect("Struct must have a primary key field, either explicitly specified or named 'id' or 'code'");
let (has_created_at, has_updated_at) = check_timestamp_fields(&fields); // Check for timestamp fields - either by attribute or by name
let has_created_at = fields.iter().any(|f| f.is_created_at) ||
fields.iter().any(|f| f.ident == "created_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
let has_updated_at = fields.iter().any(|f| f.is_updated_at) ||
fields.iter().any(|f| f.ident == "updated_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
let version_field = fields.iter()
.find(|f| f.is_version_field)
.or_else(|| fields.iter().find(|&f| is_version_field(f)));
// Find soft delete field (by attribute or by name convention)
// Convention: `is_active` (FALSE = deleted), `is_deleted`/`deleted` (TRUE = deleted)
let soft_delete_field = fields.iter()
.find(|f| f.is_soft_delete)
.or_else(|| fields.iter().find(|f| {
(f.ident == "is_active" || f.ident == "is_deleted" || f.ident == "deleted") &&
matches!(&f.ty, Type::Path(p) if p.path.is_ident("bool"))
}));
// Generate all implementations
let insert_impl = generate_insert_impl(&name, &table_name, primary_key, &fields, has_created_at, has_updated_at, &impl_generics, &ty_generics, &where_clause);
let get_impl = generate_get_impl(&name, &table_name, primary_key, version_field, soft_delete_field, &fields, &impl_generics, &ty_generics, &where_clause);
let update_impl = generate_update_impl(&name, &update_form_name, &table_name, &fields, primary_key, version_field, has_updated_at, &impl_generics, &ty_generics, &where_clause);
let diff_impl = generate_diff_impl(&name, &update_form_name, &fields, primary_key, version_field, &impl_generics, &ty_generics, &where_clause);
let delete_impl = generate_delete_impl(&name, &table_name, primary_key, &impl_generics, &ty_generics, &where_clause);
let soft_delete_impl = generate_soft_delete_impl(&name, &table_name, primary_key, soft_delete_field, &impl_generics, &ty_generics, &where_clause);
let pk_type = &primary_key.ty;
let pk_field_name = &primary_key.ident;
@ -137,6 +163,8 @@ fn derive_entity_internal(input: TokenStream) -> TokenStream {
#get_impl
#update_impl
#diff_impl
#delete_impl
#soft_delete_impl
impl #impl_generics #name #ty_generics #where_clause {
pub const fn table_name() -> &'static str {
@ -200,6 +228,12 @@ fn parse_fields(input: &DeriveInput) -> Vec<EntityField> {
.any(|attr| attr.path().is_ident("primary_key")); .any(|attr| attr.path().is_ident("primary_key"));
let is_version_field = field.attrs.iter() let is_version_field = field.attrs.iter()
.any(|attr| attr.path().is_ident("version")); .any(|attr| attr.path().is_ident("version"));
let is_soft_delete = field.attrs.iter()
.any(|attr| attr.path().is_ident("soft_delete"));
let is_created_at = field.attrs.iter()
.any(|attr| attr.path().is_ident("created_at"));
let is_updated_at = field.attrs.iter()
.any(|attr| attr.path().is_ident("updated_at"));
EntityField {
ident,
@ -209,6 +243,9 @@ fn parse_fields(input: &DeriveInput) -> Vec<EntityField> {
type_override,
is_primary_key,
is_version_field,
is_soft_delete,
is_created_at,
is_updated_at,
}
}).collect()
}
@ -216,16 +253,6 @@ fn parse_fields(input: &DeriveInput) -> Vec<EntityField> {
}
}
fn check_timestamp_fields(fields: &[EntityField]) -> (bool, bool) {
let has_created_at = fields.iter()
.any(|f| f.ident == "created_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
let has_updated_at = fields.iter()
.any(|f| f.ident == "updated_at" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("i64")));
(has_created_at, has_updated_at)
}
fn is_version_field(f: &EntityField) -> bool {
f.ident == "version" && matches!(&f.ty, Type::Path(p) if p.path.is_ident("u64") ||
p.path.is_ident("u32") || p.path.is_ident("i64") || p.path.is_ident("i32"))
@ -244,8 +271,10 @@ fn generate_insert_impl(
where_clause: &Option<&WhereClause>,
) -> TokenStream2 {
let db_names: Vec<_> = fields.iter().map(|f| &f.db_name).collect();
let field_idents: Vec<_> = fields.iter().map(|f| &f.ident).collect();
let tq = table_quote();
let db = db_type();
let pk_db_name = &primary_key.db_name;
let bindings: Vec<_> = fields.iter().map(|f| {
let ident = &f.ident;
@ -282,6 +311,88 @@ fn generate_insert_impl(
Ok(self.#pk_field.clone())
}
/// Insert multiple entities in a single statement
pub async fn insert_many<'a, E>(executor: E, entities: &[Self]) -> Result<Vec<#pk_type>, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
Self: Clone,
{
if entities.is_empty() {
return Ok(vec![]);
}
let field_count = #field_count;
let mut placeholders = Vec::with_capacity(entities.len());
let mut current_idx = 1usize;
for _ in entities {
let row_placeholders: String = (0..field_count)
.map(|_| {
let ph = ::sqlx_record::prelude::placeholder(current_idx);
current_idx += 1;
ph
})
.collect::<Vec<_>>()
.join(", ");
placeholders.push(format!("({})", row_placeholders));
}
let insert_stmt = format!(
"INSERT INTO {}{}{} ({}) VALUES {}",
#tq, #table_name, #tq,
vec![#(#db_names),*].join(", "),
placeholders.join(", ")
);
let mut query = sqlx::query(&insert_stmt);
for entity in entities {
#(query = query.bind(&entity.#field_idents);)*
}
query.execute(executor).await?;
Ok(entities.iter().map(|e| e.#pk_field.clone()).collect())
}
/// Insert or update on primary key conflict (upsert)
pub async fn upsert<'a, E>(&self, executor: E) -> Result<#pk_type, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
let placeholders: String = (1..=#field_count)
.map(|i| ::sqlx_record::prelude::placeholder(i))
.collect::<Vec<_>>()
.join(", ");
let non_pk_fields: Vec<&str> = vec![#(#db_names),*]
.into_iter()
.filter(|f| *f != #pk_db_name)
.collect();
let upsert_stmt = ::sqlx_record::prelude::build_upsert_stmt(
#table_name,
&[#(#db_names),*],
#pk_db_name,
&non_pk_fields,
&placeholders,
);
sqlx::query(&upsert_stmt)
#(.bind(#bindings))*
.execute(executor)
.await?;
Ok(self.#pk_field.clone())
}
/// Alias for upsert
pub async fn insert_or_update<'a, E>(&self, executor: E) -> Result<#pk_type, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
self.upsert(executor).await
}
}
}
}
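A brief usage sketch of the generated batch helpers. The `User` struct, its fields, the derive import path, and the MySQL pool are hypothetical; the `insert_many` and `upsert` signatures follow the expansion above.

```rust
use sqlx::MySqlPool;
use sqlx_record::Entity; // assumed re-export path for the derive

#[derive(Clone, sqlx::FromRow, Entity)]
struct User {
    id: uuid::Uuid,
    name: String,
}

async fn seed(pool: &MySqlPool) -> Result<(), sqlx::Error> {
    let users = vec![
        User { id: uuid::Uuid::new_v4(), name: "alice".into() },
        User { id: uuid::Uuid::new_v4(), name: "bob".into() },
    ];

    // One multi-row INSERT; primary keys come back in input order.
    let ids = User::insert_many(pool, &users).await?;
    assert_eq!(ids.len(), users.len());

    // Insert-or-update on primary-key conflict (backend-specific SQL from build_upsert_stmt).
    let id = users[0].upsert(pool).await?;
    assert_eq!(id, users[0].id);
    Ok(())
}
```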
@ -315,6 +426,7 @@ fn generate_get_impl(
table_name: &str,
primary_key: &EntityField,
version_field: Option<&EntityField>,
_soft_delete_field: Option<&EntityField>, // Reserved for future auto-filtering
fields: &[EntityField],
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
@ -423,14 +535,18 @@ fn generate_get_impl(
quote! {}
};
let field_list = fields.iter().map(|f| f.db_name.clone()).collect::<Vec<_>>();
// Check if static-check feature is enabled at macro expansion time
let use_static_validation = cfg!(feature = "static-check");
let get_by_impl = if use_static_validation {
// Static validation: use sqlx::query_as! with compile-time checked SQL
let select_stmt = format!(
r#"SELECT DISTINCT {} FROM {}{}{} WHERE {} = {}"#,
select_fields.clone().collect::<Vec<_>>().join(", "),
tq, table_name, tq, pk_db_field_name,
static_placeholder(1)
);
quote! {
pub async fn #get_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<Option<Self>, sqlx::Error>
@ -464,8 +580,7 @@ fn generate_get_impl(
}
}
} else {
// Runtime: use sqlx::query_as with dynamic SQL
quote! {
pub async fn #get_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<Option<Self>, sqlx::Error>
where
@ -611,13 +726,7 @@ fn generate_get_impl(
String::new()
};
let index_clause = ::sqlx_record::prelude::build_index_clause(index);
// Index hints are MySQL-specific
#[cfg(feature = "mysql")]
let index_clause = index
.map(|idx| format!("USE INDEX ({})", idx))
.unwrap_or_default();
#[cfg(not(feature = "mysql"))]
let index_clause = { let _ = index; String::new() };
// Filter order_by fields to only those managed
let fields = Self::select_fields().into_iter().collect::<::std::collections::HashSet<_>>();
@ -681,23 +790,8 @@ fn generate_get_impl(
String::new()
};
let index_clause = ::sqlx_record::prelude::build_index_clause(index);
let count_expr = ::sqlx_record::prelude::build_count_expr(#pk_db_field_name);
let index_clause = index
.map(|idx| format!("USE INDEX ({})", idx))
.unwrap_or_default();
#[cfg(not(feature = "mysql"))]
let index_clause = { let _ = index; String::new() };
// Use database-appropriate COUNT syntax
#[cfg(feature = "postgres")]
let count_expr = format!("COUNT({})::BIGINT", #pk_db_field_name);
#[cfg(feature = "sqlite")]
let count_expr = format!("COUNT({})", #pk_db_field_name);
#[cfg(feature = "mysql")]
let count_expr = format!("CAST(COUNT({}) AS SIGNED)", #pk_db_field_name);
#[cfg(not(any(feature = "mysql", feature = "postgres", feature = "sqlite")))]
let count_expr = format!("COUNT({})", #pk_db_field_name);
let query = format!(
r#"SELECT {} FROM {}{}{} {} {}"#,
@ -725,6 +819,84 @@ fn generate_get_impl(
Ok(count)
}
/// Paginate results with total count
pub async fn paginate<'a, E>(
executor: E,
filters: Vec<::sqlx_record::prelude::Filter<'a>>,
index: Option<&str>,
order_by: Vec<(&str, bool)>,
page_request: ::sqlx_record::prelude::PageRequest,
) -> Result<::sqlx_record::prelude::Page<Self>, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db> + Copy,
{
// Get total count first
let total_count = Self::count(executor, filters.clone(), index).await?;
// Get page items
let items = Self::find_ordered_with_limit(
executor,
filters,
index,
order_by,
Some((page_request.offset(), page_request.limit())),
).await?;
Ok(::sqlx_record::prelude::Page::new(
items,
total_count,
page_request.page,
page_request.page_size,
))
}
/// Select specific fields only (returns raw rows)
/// Use `sqlx::Row` trait to access fields: `row.try_get::<String, _>("name")?`
pub async fn find_partial<'a, E>(
executor: E,
select_fields: &[&str],
filters: Vec<::sqlx_record::prelude::Filter<'a>>,
index: Option<&str>,
) -> Result<Vec<<#db as sqlx::Database>::Row>, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
use ::sqlx_record::prelude::{Filter, bind_values};
// Validate fields exist
let valid_fields: ::std::collections::HashSet<_> = Self::select_fields().into_iter().collect();
let selected: Vec<_> = select_fields.iter()
.filter(|f| valid_fields.contains(*f))
.copied()
.collect();
if selected.is_empty() {
return Ok(vec![]);
}
let (where_conditions, values) = Filter::build_where_clause(&filters);
let where_clause = if !where_conditions.is_empty() {
format!("WHERE {}", where_conditions)
} else {
String::new()
};
let index_clause = ::sqlx_record::prelude::build_index_clause(index);
let query = format!(
"SELECT DISTINCT {} FROM {}{}{} {} {}",
selected.join(", "),
#tq, #table_name, #tq,
index_clause,
where_clause,
);
let db_query = sqlx::query(&query);
bind_values(db_query, &values)
.fetch_all(executor)
.await
}
}
}
}
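A hedged sketch of the generated `paginate` method, reusing the hypothetical `User` entity from the earlier example. The ordering boolean is assumed to mean ascending, and the empty filter list simply omits the WHERE clause.

```rust
use sqlx::MySqlPool;
use sqlx_record::prelude::{Page, PageRequest};

async fn list_users(pool: &MySqlPool) -> Result<Page<User>, sqlx::Error> {
    let page = User::paginate(
        pool,
        vec![],                  // no filters
        None,                    // no index hint (MySQL-only anyway)
        vec![("name", true)],    // assumed: (column, ascending)
        PageRequest::new(2, 25), // second page of 25 -> OFFSET 25, LIMIT 25
    ).await?;

    println!("page {}/{} with {} rows", page.page, page.total_pages(), page.len());
    Ok(page)
}
```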
@ -745,6 +917,7 @@ fn generate_update_impl(
fields: &[EntityField],
primary_key: &EntityField,
version_field: Option<&EntityField>,
has_updated_at: bool,
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
where_clause: &Option<&WhereClause>,
@ -833,6 +1006,20 @@ fn generate_update_impl(
quote! {}
};
// Auto-update updated_at timestamp (only if not manually set)
let updated_at_increment = if has_updated_at {
quote! {
// Only auto-set updated_at if not already set in form or via expression
if self.updated_at.is_none() && !self._exprs.contains_key("updated_at") {
parts.push(format!("updated_at = {}", ::sqlx_record::prelude::placeholder(idx)));
values.push(::sqlx_record::prelude::Value::Int64(chrono::Utc::now().timestamp_millis()));
idx += 1;
}
}
} else {
quote! {}
};
quote! {
/// Update form with support for simple value updates and complex expressions
pub struct #update_form_name #ty_generics #where_clause {
@ -909,6 +1096,7 @@ fn generate_update_impl(
)*
#version_increment
#updated_at_increment
(parts.join(", "), values)
}
@ -921,40 +1109,31 @@ fn generate_update_impl(
/// Bind all form values to query in correct order.
/// Handles both simple values and expression values, respecting expression precedence.
/// Uses Value enum for proper type handling of Option<T> fields.
pub fn bind_all_values<'q>(&'q self, mut query: sqlx::query::Query<'q, #db, #db_args>)
-> sqlx::query::Query<'q, #db, #db_args>
{
// Use update_stmt_with_values to get properly converted values
// This handles nested Options (Option<Option<T>>) correctly
let (_, values) = self.update_stmt_with_values();
for value in values {
query = ::sqlx_record::prelude::bind_value_owned(query, value);
}
query
}

/// Legacy binding method - binds values through the Value enum for proper type handling.
/// For backward compatibility. New code should use bind_all_values().
pub fn bind_form_values<'q>(&'q self, mut query: sqlx::query::Query<'q, #db, #db_args>)
-> sqlx::query::Query<'q, #db, #db_args>
{
// Always use Value-based binding to properly handle Option<T> fields
// This ensures nested Options (Option<Option<T>>) are unwrapped correctly
let (_, values) = self.update_stmt_with_values();
for value in values {
query = ::sqlx_record::prelude::bind_value_owned(query, value);
}
query
}
/// Check if this form uses any expressions
@ -1085,6 +1264,7 @@ fn generate_diff_impl(
pub fn to_update_form(&self) -> #update_form_name #ty_generics {
#update_form_name {
#(#field_idents: Some(self.#field_idents.clone()),)*
_exprs: std::collections::HashMap::new(),
}
}
@ -1219,6 +1399,191 @@ fn generate_diff_impl(
Ok(())
}
/// Update all records matching the filter conditions
/// Returns the number of affected rows
pub async fn update_by_filter<'a, E>(
executor: E,
filters: Vec<::sqlx_record::prelude::Filter<'a>>,
form: #update_form_name,
) -> Result<u64, sqlx::Error>
where
E: sqlx::Executor<'a, Database=#db>,
{
use ::sqlx_record::prelude::{Filter, bind_values};
if filters.is_empty() {
// Require at least one filter to prevent accidental table-wide updates
return Err(sqlx::Error::Protocol(
"update_by_filter requires at least one filter to prevent accidental table-wide updates".to_string()
));
}
let (update_stmt, form_values) = form.update_stmt_with_values();
if update_stmt.is_empty() {
return Ok(0);
}
let form_param_count = form_values.len();
let (where_conditions, filter_values) = Filter::build_where_clause_with_offset(&filters, form_param_count + 1);
let query_str = format!(
r#"UPDATE {}{}{} SET {} WHERE {}"#,
#tq, Self::table_name(), #tq,
update_stmt,
where_conditions,
);
// Combine form values and filter values
let mut all_values = form_values;
all_values.extend(filter_values);
let query = sqlx::query(&query_str);
let result = bind_values(query, &all_values)
.execute(executor)
.await?;
Ok(result.rows_affected())
}
}
}
}
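A hedged call-site sketch for `update_by_filter`, again using the hypothetical `User` entity. `UserUpdateForm`, its `Default` impl, and the exact `filters!` tuple syntax are assumptions for illustration; the empty-filter guard and the returned row count follow the expansion above.

```rust
use sqlx::MySqlPool;
use sqlx_record::prelude::*;

async fn rename_alice(pool: &MySqlPool) -> Result<u64, sqlx::Error> {
    // Assumed: the generated form implements Default with all fields None.
    let form = UserUpdateForm {
        name: Some("alice (archived)".to_string()),
        ..Default::default()
    };

    // Fails with a Protocol error if the filter list is empty.
    let affected = User::update_by_filter(
        pool,
        filters![("name", "alice")], // assumed filters! syntax
        form,
    ).await?;
    Ok(affected)
}
```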
// Generate delete implementation - always generated for ALL entities
fn generate_delete_impl(
name: &Ident,
table_name: &str,
primary_key: &EntityField,
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
where_clause: &Option<&WhereClause>,
) -> TokenStream2 {
let pk_field = &primary_key.ident;
let pk_type = &primary_key.ty;
let pk_db_name = &primary_key.db_name;
let db = db_type();
let tq = table_quote();
let pk_field_name = primary_key.ident.to_string();
let hard_delete_by_func = format_ident!("hard_delete_by_{}", pk_field_name);
quote! {
impl #impl_generics #name #ty_generics #where_clause {
/// Hard delete - permanently removes the row from database
pub async fn hard_delete<'a, E>(&self, executor: E) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
Self::#hard_delete_by_func(executor, &self.#pk_field).await
}
/// Hard delete by primary key - permanently removes the row from database
pub async fn #hard_delete_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
let query = format!(
"DELETE FROM {}{}{} WHERE {} = {}",
#tq, #table_name, #tq,
#pk_db_name, ::sqlx_record::prelude::placeholder(1)
);
sqlx::query(&query).bind(#pk_field).execute(executor).await?;
Ok(())
}
}
}
}
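For the hypothetical `User` entity above, the generated delete methods would be used like this (the `_by_id` suffix follows the primary-key field name):

```rust
use sqlx::MySqlPool;

async fn remove(pool: &MySqlPool, user: &User) -> Result<(), sqlx::Error> {
    // Instance form delegates to the by-primary-key form.
    user.hard_delete(pool).await?;
    // Equivalent static form, named after the pk field (`id` here).
    User::hard_delete_by_id(pool, &user.id).await?;
    Ok(())
}
```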
// Generate soft delete implementation
fn generate_soft_delete_impl(
name: &Ident,
table_name: &str,
primary_key: &EntityField,
soft_delete_field: Option<&EntityField>,
impl_generics: &ImplGenerics,
ty_generics: &TypeGenerics,
where_clause: &Option<&WhereClause>,
) -> TokenStream2 {
let Some(sd_field) = soft_delete_field else {
return quote! {};
};
let pk_field = &primary_key.ident;
let pk_type = &primary_key.ty;
let pk_db_name = &primary_key.db_name;
let sd_db_name = &sd_field.db_name;
let db = db_type();
let tq = table_quote();
let pk_field_name = primary_key.ident.to_string();
let soft_delete_by_func = format_ident!("soft_delete_by_{}", pk_field_name);
let restore_by_func = format_ident!("restore_by_{}", pk_field_name);
// Determine semantics based on field name and attribute:
// - #[soft_delete] attribute: field should be FALSE when deleted (user convention)
// - `is_active` by name: FALSE when deleted, TRUE when active
// - `is_deleted`/`deleted` by name: TRUE when deleted, FALSE when active
let sd_field_name = sd_field.ident.to_string();
let is_inverted = sd_field.is_soft_delete || sd_field_name == "is_active";
let (delete_value, restore_value) = if is_inverted {
("FALSE", "TRUE")
} else {
("TRUE", "FALSE")
};
quote! {
impl #impl_generics #name #ty_generics #where_clause {
/// Soft delete - marks record as deleted without removing from database
pub async fn soft_delete<'a, E>(&self, executor: E) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
Self::#soft_delete_by_func(executor, &self.#pk_field).await
}
/// Soft delete by primary key
pub async fn #soft_delete_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
let query = format!(
"UPDATE {}{}{} SET {} = {} WHERE {} = {}",
#tq, #table_name, #tq,
#sd_db_name, #delete_value,
#pk_db_name, ::sqlx_record::prelude::placeholder(1)
);
sqlx::query(&query).bind(#pk_field).execute(executor).await?;
Ok(())
}
/// Restore a soft-deleted record
pub async fn restore<'a, E>(&self, executor: E) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
Self::#restore_by_func(executor, &self.#pk_field).await
}
/// Restore by primary key
pub async fn #restore_by_func<'a, E>(executor: E, #pk_field: &#pk_type) -> Result<(), sqlx::Error>
where
E: sqlx::Executor<'a, Database = #db>,
{
let query = format!(
"UPDATE {}{}{} SET {} = {} WHERE {} = {}",
#tq, #table_name, #tq,
#sd_db_name, #restore_value,
#pk_db_name, ::sqlx_record::prelude::placeholder(1)
);
sqlx::query(&query).bind(#pk_field).execute(executor).await?;
Ok(())
}
/// Get the soft delete field name
pub const fn soft_delete_field() -> &'static str {
#sd_db_name
}
}
}
}
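A sketch of the naming-convention semantics, using a hypothetical entity and import path: with an `is_active` flag, soft delete sets the column to FALSE and restore sets it back to TRUE, exactly as the generated SQL above encodes.

```rust
use sqlx::MySqlPool;
use sqlx_record::Entity; // assumed re-export path for the derive

#[derive(Clone, sqlx::FromRow, Entity)]
struct Account {
    id: uuid::Uuid,
    email: String,
    is_active: bool, // picked up by name: FALSE means deleted
}

async fn deactivate(pool: &MySqlPool, account: &Account) -> Result<(), sqlx::Error> {
    account.soft_delete(pool).await?;                  // UPDATE ... SET is_active = FALSE
    Account::restore_by_id(pool, &account.id).await?;  // UPDATE ... SET is_active = TRUE
    assert_eq!(Account::soft_delete_field(), "is_active");
    Ok(())
}
```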

View File

@ -1,13 +1,13 @@
use sqlx::pool::PoolConnection;
#[cfg(feature = "mysql")]
use sqlx::{MySql, MySqlConnection, MySqlPool, Transaction};
#[cfg(feature = "postgres")]
use sqlx::{Postgres, PgConnection, PgPool, Transaction};
#[cfg(feature = "sqlite")]
use sqlx::{Sqlite, SqliteConnection, SqlitePool, Transaction};
// ============================================================================
// MySQL Implementation
@ -24,6 +24,10 @@ pub enum ConnProvider<'a> {
pool: MySqlPool,
conn: Option<PoolConnection<MySql>>,
},
/// Stores a reference to a transaction
Transaction {
tx: &'a mut Transaction<'static, MySql>,
},
}
#[cfg(feature = "mysql")]
@ -38,18 +42,25 @@ impl<'a> ConnProvider<'a> {
ConnProvider::Owned { pool, conn: None }
}
/// Create a ConnProvider from a borrowed transaction reference
pub fn from_tx(tx: &'a mut Transaction<'static, MySql>) -> Self {
ConnProvider::Transaction { tx }
}
/// Get a mutable reference to the underlying connection.
/// For borrowed connections, returns the reference directly.
/// For owned connections, lazily acquires from pool on first call.
/// For transactions, returns the transaction's underlying connection.
pub async fn get_conn(&mut self) -> Result<&mut MySqlConnection, sqlx::Error> {
match self {
ConnProvider::Borrowed { conn } => Ok(&mut **conn),
ConnProvider::Owned { pool, conn } => {
if conn.is_none() {
*conn = Some(pool.acquire().await?);
}
Ok(&mut **conn.as_mut().unwrap())
}
ConnProvider::Transaction { tx } => Ok(&mut **tx),
}
}
}
@ -69,6 +80,10 @@ pub enum ConnProvider<'a> {
pool: PgPool,
conn: Option<PoolConnection<Postgres>>,
},
/// Stores a reference to a transaction
Transaction {
tx: &'a mut Transaction<'static, Postgres>,
},
}
#[cfg(feature = "postgres")]
@ -83,18 +98,25 @@ impl<'a> ConnProvider<'a> {
ConnProvider::Owned { pool, conn: None }
}
/// Create a ConnProvider from a borrowed transaction reference
pub fn from_tx(tx: &'a mut Transaction<'static, Postgres>) -> Self {
ConnProvider::Transaction { tx }
}
/// Get a mutable reference to the underlying connection.
/// For borrowed connections, returns the reference directly.
/// For owned connections, lazily acquires from pool on first call.
/// For transactions, returns the transaction's underlying connection.
pub async fn get_conn(&mut self) -> Result<&mut PgConnection, sqlx::Error> {
match self {
ConnProvider::Borrowed { conn } => Ok(&mut **conn),
ConnProvider::Owned { pool, conn } => {
if conn.is_none() {
*conn = Some(pool.acquire().await?);
}
Ok(&mut **conn.as_mut().unwrap())
}
ConnProvider::Transaction { tx } => Ok(&mut **tx),
}
}
}
@ -114,6 +136,10 @@ pub enum ConnProvider<'a> {
pool: SqlitePool,
conn: Option<PoolConnection<Sqlite>>,
},
/// Stores a reference to a transaction
Transaction {
tx: &'a mut Transaction<'static, Sqlite>,
},
}
#[cfg(feature = "sqlite")]
@ -128,18 +154,25 @@ impl<'a> ConnProvider<'a> {
ConnProvider::Owned { pool, conn: None }
}
/// Create a ConnProvider from a borrowed transaction reference
pub fn from_tx(tx: &'a mut Transaction<'static, Sqlite>) -> Self {
ConnProvider::Transaction { tx }
}
/// Get a mutable reference to the underlying connection.
/// For borrowed connections, returns the reference directly.
/// For owned connections, lazily acquires from pool on first call.
/// For transactions, returns the transaction's underlying connection.
pub async fn get_conn(&mut self) -> Result<&mut SqliteConnection, sqlx::Error> {
match self {
ConnProvider::Borrowed { conn } => Ok(&mut **conn),
ConnProvider::Owned { pool, conn } => {
if conn.is_none() {
*conn = Some(pool.acquire().await?);
}
Ok(&mut **conn.as_mut().unwrap())
}
ConnProvider::Transaction { tx } => Ok(&mut **tx),
}
}
}
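A minimal sketch of the new transaction variant (MySQL feature assumed): the provider borrows the transaction, so commit still happens outside it once the provider is dropped.

```rust
use sqlx::MySqlPool;
use sqlx_record::prelude::ConnProvider;

async fn run_in_tx(pool: &MySqlPool) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;
    {
        let mut provider = ConnProvider::from_tx(&mut tx);
        let conn = provider.get_conn().await?; // &mut MySqlConnection inside the transaction
        sqlx::query("SELECT 1").execute(&mut *conn).await?;
    } // provider dropped here, releasing the &mut borrow of tx
    tx.commit().await?;
    Ok(())
}
```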

View File

@ -111,6 +111,121 @@ pub fn placeholder(index: usize) -> String {
}
}
/// Returns the table quote character for the current database
#[inline]
pub fn table_quote() -> &'static str {
#[cfg(feature = "mysql")]
{ "`" }
#[cfg(feature = "postgres")]
{ "\"" }
#[cfg(feature = "sqlite")]
{ "\"" }
#[cfg(not(any(feature = "mysql", feature = "postgres", feature = "sqlite")))]
{ "`" }
}
/// Builds an index hint clause (MySQL-specific, empty for other databases)
#[inline]
pub fn build_index_clause(index: Option<&str>) -> String {
#[cfg(feature = "mysql")]
{
index.map(|idx| format!("USE INDEX ({})", idx)).unwrap_or_default()
}
#[cfg(not(feature = "mysql"))]
{
let _ = index;
String::new()
}
}
/// Builds a COUNT expression appropriate for the database backend
#[inline]
pub fn build_count_expr(field: &str) -> String {
#[cfg(feature = "postgres")]
{
format!("COUNT({})::BIGINT", field)
}
#[cfg(feature = "sqlite")]
{
format!("COUNT({})", field)
}
#[cfg(feature = "mysql")]
{
format!("CAST(COUNT({}) AS SIGNED)", field)
}
#[cfg(not(any(feature = "mysql", feature = "postgres", feature = "sqlite")))]
{
format!("COUNT({})", field)
}
}
/// Builds an upsert statement for the current database backend
pub fn build_upsert_stmt(
table_name: &str,
all_fields: &[&str],
pk_field: &str,
non_pk_fields: &[&str],
placeholders: &str,
) -> String {
let tq = table_quote();
let fields_str = all_fields.join(", ");
#[cfg(feature = "mysql")]
{
let _ = pk_field; // Not used in MySQL ON DUPLICATE KEY syntax
let update_clause = non_pk_fields
.iter()
.map(|f| format!("{} = VALUES({})", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}",
tq, table_name, tq, fields_str, placeholders, update_clause
)
}
#[cfg(feature = "postgres")]
{
let update_clause = non_pk_fields
.iter()
.map(|f| format!("{} = EXCLUDED.{}", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON CONFLICT ({}) DO UPDATE SET {}",
tq, table_name, tq, fields_str, placeholders, pk_field, update_clause
)
}
#[cfg(feature = "sqlite")]
{
let update_clause = non_pk_fields
.iter()
.map(|f| format!("{} = excluded.{}", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON CONFLICT({}) DO UPDATE SET {}",
tq, table_name, tq, fields_str, placeholders, pk_field, update_clause
)
}
#[cfg(not(any(feature = "mysql", feature = "postgres", feature = "sqlite")))]
{
let _ = pk_field; // Not used in MySQL ON DUPLICATE KEY syntax
// Fallback to MySQL syntax
let update_clause = non_pk_fields
.iter()
.map(|f| format!("{} = VALUES({})", f, f))
.collect::<Vec<_>>()
.join(", ");
format!(
"INSERT INTO {}{}{} ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}",
tq, table_name, tq, fields_str, placeholders, update_clause
)
}
}
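To make the branches concrete, here is a sketch of the statements this helper yields for a hypothetical `users` table; the placeholder string is built with the runtime `placeholder()` helper, so its style depends on the enabled backend.

```rust
use sqlx_record::prelude::{build_upsert_stmt, placeholder};

fn upsert_sql_example() -> String {
    let placeholders = (1..=3).map(placeholder).collect::<Vec<_>>().join(", ");
    let sql = build_upsert_stmt("users", &["id", "name", "email"], "id", &["name", "email"], &placeholders);
    // MySQL:    INSERT INTO `users` (id, name, email) VALUES (<placeholders>)
    //           ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email)
    // Postgres: INSERT INTO "users" (id, name, email) VALUES (<placeholders>)
    //           ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, email = EXCLUDED.email
    // SQLite:   INSERT INTO "users" (id, name, email) VALUES (<placeholders>)
    //           ON CONFLICT(id) DO UPDATE SET name = excluded.name, email = excluded.email
    sql
}
```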
impl Filter<'_> {
/// Returns the number of bind parameters this filter will use
pub fn param_count(&self) -> usize {

View File

@ -8,6 +8,11 @@ mod helpers;
mod value;
mod filter;
mod conn_provider;
mod pagination;
mod transaction;
pub use pagination::{Page, PageRequest};
// transaction! macro is exported via #[macro_export] in transaction.rs
// Re-export the sqlx_record_derive module on feature flag
#[cfg(feature = "derive")]
@ -174,7 +179,9 @@ pub mod prelude {
pub use crate::{filter_or, filter_and, filters, update_entity_func};
pub use crate::{filter_or as or, filter_and as and};
pub use crate::values;
pub use crate::{new_uuid, lookup_table, lookup_options, transaction};
pub use crate::pagination::{Page, PageRequest};
pub use crate::conn_provider::*;
#[cfg(any(feature = "mysql", feature = "postgres", feature = "sqlite"))]
pub use crate::conn_provider::ConnProvider;

108
src/pagination.rs Normal file
View File

@ -0,0 +1,108 @@
/// Paginated result container
#[derive(Debug, Clone)]
pub struct Page<T> {
/// Items for this page
pub items: Vec<T>,
/// Total count of all matching items
pub total_count: u64,
/// Current page number (1-indexed)
pub page: u32,
/// Items per page
pub page_size: u32,
}
impl<T> Page<T> {
pub fn new(items: Vec<T>, total_count: u64, page: u32, page_size: u32) -> Self {
Self { items, total_count, page, page_size }
}
/// Total number of pages
pub fn total_pages(&self) -> u32 {
if self.page_size == 0 {
return 0;
}
((self.total_count as f64) / (self.page_size as f64)).ceil() as u32
}
/// Whether there is a next page
pub fn has_next(&self) -> bool {
self.page < self.total_pages()
}
/// Whether there is a previous page
pub fn has_prev(&self) -> bool {
self.page > 1
}
/// Check if page is empty
pub fn is_empty(&self) -> bool {
self.items.is_empty()
}
/// Number of items on this page
pub fn len(&self) -> usize {
self.items.len()
}
/// Map items to a different type
pub fn map<U, F: FnMut(T) -> U>(self, f: F) -> Page<U> {
Page {
items: self.items.into_iter().map(f).collect(),
total_count: self.total_count,
page: self.page,
page_size: self.page_size,
}
}
/// Iterator over items
pub fn iter(&self) -> impl Iterator<Item = &T> {
self.items.iter()
}
/// Take ownership of items
pub fn into_items(self) -> Vec<T> {
self.items
}
}
impl<T> IntoIterator for Page<T> {
type Item = T;
type IntoIter = std::vec::IntoIter<T>;
fn into_iter(self) -> Self::IntoIter {
self.items.into_iter()
}
}
/// Pagination request options
#[derive(Debug, Clone, Default)]
pub struct PageRequest {
/// Page number (1-indexed, minimum 1)
pub page: u32,
/// Items per page
pub page_size: u32,
}
impl PageRequest {
pub fn new(page: u32, page_size: u32) -> Self {
Self {
page: page.max(1),
page_size,
}
}
/// Calculate SQL OFFSET (0-indexed)
pub fn offset(&self) -> u32 {
if self.page <= 1 { 0 } else { (self.page - 1) * self.page_size }
}
/// Calculate SQL LIMIT
pub fn limit(&self) -> u32 {
self.page_size
}
/// First page
pub fn first(page_size: u32) -> Self {
Self::new(1, page_size)
}
}
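A small sketch of the offset/limit arithmetic and the `Page` helpers defined above:

```rust
use sqlx_record::prelude::{Page, PageRequest};

fn pagination_math() {
    let req = PageRequest::new(3, 20);
    assert_eq!(req.offset(), 40); // (page - 1) * page_size
    assert_eq!(req.limit(), 20);

    // 95 matching rows split into pages of 20 -> 5 pages total.
    let page: Page<u32> = Page::new(vec![41, 42, 43], 95, req.page, req.page_size);
    assert_eq!(page.total_pages(), 5);
    assert!(page.has_next() && page.has_prev());
}
```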

43
src/transaction.rs Normal file
View File

@ -0,0 +1,43 @@
/// Transaction macro for ergonomic transaction handling.
///
/// Automatically commits on success, rolls back on error.
///
/// # Example
/// ```ignore
/// use sqlx_record::transaction;
///
/// let result = transaction!(&pool, |tx| {
/// user.insert(&mut *tx).await?;
/// order.insert(&mut *tx).await?;
/// Ok::<_, sqlx::Error>(order.id)
/// }).await?;
/// ```
#[macro_export]
macro_rules! transaction {
($pool:expr, |$tx:ident| $body:expr) => {{
async {
let mut $tx = $pool.begin().await?;
let result: Result<_, sqlx::Error> = async { $body }.await;
match result {
Ok(value) => {
$tx.commit().await?;
Ok(value)
}
Err(e) => {
// Rollback happens automatically on tx drop, but we can be explicit
let _ = $tx.rollback().await;
Err(e)
}
}
}
}};
}
#[cfg(test)]
mod tests {
#[test]
fn test_macro_compiles() {
// Just verify the macro syntax is valid
let _ = stringify!(transaction!(&pool, |tx| { Ok::<_, sqlx::Error>(()) }));
}
}

View File

@ -1,5 +1,5 @@
use sqlx::query::{Query, QueryAs, QueryScalar};
use sqlx::types::chrono::{NaiveDate, NaiveDateTime, NaiveTime};
use crate::filter::placeholder;
// Database type alias based on enabled feature
@ -34,6 +34,7 @@ pub type Arguments_<'q> = sqlx::postgres::PgArguments;
#[derive(Clone, Debug)]
pub enum Value {
Null,
Int8(i8),
Uint8(u8),
Int16(i16),
@ -42,12 +43,18 @@ pub enum Value {
Uint32(u32),
Int64(i64),
Uint64(u64),
Float32(f32),
Float64(f64),
VecU8(Vec<u8>),
String(String),
Bool(bool),
Uuid(uuid::Uuid),
NaiveDate(NaiveDate),
NaiveDateTime(NaiveDateTime),
NaiveTime(NaiveTime),
Json(serde_json::Value),
#[cfg(feature = "decimal")]
Decimal(rust_decimal::Decimal),
}
/// Expression for column updates beyond simple value assignment.
@ -249,10 +256,12 @@ impl UpdateExpr {
pub type SqlValue = Value;
// MySQL supports unsigned integers natively
// Note: UUID is bound as bytes for BINARY(16) column compatibility
#[cfg(feature = "mysql")]
macro_rules! bind_value {
($query:expr, $value: expr) => {{
let query = match $value {
Value::Null => $query.bind(None::<String>),
Value::Int8(v) => $query.bind(v),
Value::Uint8(v) => $query.bind(v),
Value::Int16(v) => $query.bind(v),
@ -261,12 +270,18 @@ macro_rules! bind_value {
Value::Uint32(v) => $query.bind(v),
Value::Int64(v) => $query.bind(v),
Value::Uint64(v) => $query.bind(v),
Value::Float32(v) => $query.bind(v),
Value::Float64(v) => $query.bind(v),
Value::VecU8(v) => $query.bind(v),
Value::String(v) => $query.bind(v),
Value::Bool(v) => $query.bind(v),
Value::Uuid(v) => $query.bind(v),
Value::NaiveDate(v) => $query.bind(v),
Value::NaiveDateTime(v) => $query.bind(v),
Value::NaiveTime(v) => $query.bind(v),
Value::Json(v) => $query.bind(v),
#[cfg(feature = "decimal")]
Value::Decimal(v) => $query.bind(v),
};
query
}};
@ -277,6 +292,7 @@ macro_rules! bind_value {
macro_rules! bind_value {
($query:expr, $value: expr) => {{
let query = match $value {
Value::Null => $query.bind(None::<String>),
Value::Int8(v) => $query.bind(v),
Value::Uint8(v) => $query.bind(*v as i16),
Value::Int16(v) => $query.bind(v),
@ -285,12 +301,18 @@ macro_rules! bind_value {
Value::Uint32(v) => $query.bind(*v as i64),
Value::Int64(v) => $query.bind(v),
Value::Uint64(v) => $query.bind(*v as i64),
Value::Float32(v) => $query.bind(v),
Value::Float64(v) => $query.bind(v),
Value::VecU8(v) => $query.bind(v),
Value::String(v) => $query.bind(v),
Value::Bool(v) => $query.bind(v),
Value::Uuid(v) => $query.bind(v),
Value::NaiveDate(v) => $query.bind(v),
Value::NaiveDateTime(v) => $query.bind(v),
Value::NaiveTime(v) => $query.bind(v),
Value::Json(v) => $query.bind(v),
#[cfg(feature = "decimal")]
Value::Decimal(v) => $query.bind(v),
};
query
}};
@ -309,10 +331,13 @@ pub fn bind_values<'q>(query: Query<'q, DB, Arguments_<'q>>, values: &'q [Value]
#[cfg(any(feature = "mysql", feature = "postgres", feature = "sqlite"))] #[cfg(any(feature = "mysql", feature = "postgres", feature = "sqlite"))]
pub fn bind_value_owned<'q>(query: Query<'q, DB, Arguments_<'q>>, value: Value) -> Query<'q, DB, Arguments_<'q>> { pub fn bind_value_owned<'q>(query: Query<'q, DB, Arguments_<'q>>, value: Value) -> Query<'q, DB, Arguments_<'q>> {
match value { match value {
Value::Null => query.bind(None::<String>),
Value::Int8(v) => query.bind(v),
Value::Int16(v) => query.bind(v),
Value::Int32(v) => query.bind(v),
Value::Int64(v) => query.bind(v),
Value::Float32(v) => query.bind(v),
Value::Float64(v) => query.bind(v),
#[cfg(feature = "mysql")] #[cfg(feature = "mysql")]
Value::Uint8(v) => query.bind(v), Value::Uint8(v) => query.bind(v),
#[cfg(feature = "mysql")] #[cfg(feature = "mysql")]
@ -335,6 +360,10 @@ pub fn bind_value_owned<'q>(query: Query<'q, DB, Arguments_<'q>>, value: Value)
Value::Uuid(v) => query.bind(v),
Value::NaiveDate(v) => query.bind(v),
Value::NaiveDateTime(v) => query.bind(v),
Value::NaiveTime(v) => query.bind(v),
Value::Json(v) => query.bind(v),
#[cfg(feature = "decimal")]
Value::Decimal(v) => query.bind(v),
}
}
@ -530,6 +559,79 @@ impl From<&NaiveDateTime> for Value {
}
}
// New type implementations
impl From<f32> for Value {
fn from(value: f32) -> Self {
Value::Float32(value)
}
}
impl From<&f32> for Value {
fn from(value: &f32) -> Self {
Value::Float32(*value)
}
}
impl From<f64> for Value {
fn from(value: f64) -> Self {
Value::Float64(value)
}
}
impl From<&f64> for Value {
fn from(value: &f64) -> Self {
Value::Float64(*value)
}
}
impl From<NaiveTime> for Value {
fn from(value: NaiveTime) -> Self {
Value::NaiveTime(value)
}
}
impl From<&NaiveTime> for Value {
fn from(value: &NaiveTime) -> Self {
Value::NaiveTime(*value)
}
}
impl From<serde_json::Value> for Value {
fn from(value: serde_json::Value) -> Self {
Value::Json(value)
}
}
impl From<&serde_json::Value> for Value {
fn from(value: &serde_json::Value) -> Self {
Value::Json(value.clone())
}
}
#[cfg(feature = "decimal")]
impl From<rust_decimal::Decimal> for Value {
fn from(value: rust_decimal::Decimal) -> Self {
Value::Decimal(value)
}
}
#[cfg(feature = "decimal")]
impl From<&rust_decimal::Decimal> for Value {
fn from(value: &rust_decimal::Decimal) -> Self {
Value::Decimal(*value)
}
}
// Option<T> implementations - convert None to Value::Null
impl<T: Into<Value>> From<Option<T>> for Value {
fn from(value: Option<T>) -> Self {
match value {
Some(v) => v.into(),
None => Value::Null,
}
}
}
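A quick sketch of the conversions this blanket impl enables (the pre-existing `From<i64>` impl for `Value::Int64` is assumed):

```rust
use sqlx_record::prelude::Value;

fn option_to_value() {
    let set: Value = Some(42i64).into();          // Some(v) -> Value::Int64(42)
    let null: Value = Option::<i64>::None.into(); // None    -> Value::Null
    assert!(matches!(set, Value::Int64(42)));
    assert!(matches!(null, Value::Null));
}
```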
pub trait BindValues<'q> {
type Output;