Sqlite-Worker is a Python package providing a thread-safe interface for SQLite database operations. It ensures safe concurrent access to SQLite databases and simplifies executing database queries from different threads.
Perfect for: Multi-threaded applications • Web APIs • Background workers • Data logging • Task queues
- Thread-safe SQLite database operations
- Queue-based query execution
- Simple and easy-to-use API
- Initialization actions executed once upon database connection
- Regular commits for continuous query streams
- Database migrations with version control and rollback support
- ORM-like CRUD methods for simplified database operations
- Observable queries with hooks/events for real-time notifications
- Transaction support with manual and context manager control
- Auto-reconnection logic for handling database locks and connection failures
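As a minimal sketch of what the thread-safe, queue-based execution above looks like in practice (file name and schema are illustrative), several threads can share a single worker without any explicit locking:

```python
# Minimal sketch: several threads share one SqliteWorker with no locks.
import threading
from sqlite_worker import SqliteWorker

worker = SqliteWorker("events.db")  # illustrative path
worker.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, source TEXT)")

def log_event(source):
    # Safe from any thread: the query is queued to the worker's own thread.
    worker.execute("INSERT INTO events (source) VALUES (?)", (source,))

threads = [threading.Thread(target=log_event, args=(f"thread-{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

token = worker.execute("SELECT COUNT(*) FROM events")
print(worker.fetch_results(token))
worker.close()
```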
To install, run:
```bash
pip3 install sqlite-worker
```

| Feature | Standard SQLite | sqlite-worker |
|---|---|---|
| Thread Safety | ❌ Requires manual locking | ✅ Built-in thread safety |
| Queue Management | ❌ Manual implementation | ✅ Automatic queue handling |
| Connection Pool | ❌ Manual management | ✅ Managed internally |
| Transaction API | ❌ Manual handling | ✅ Context managers |
| ORM Methods | ❌ Raw SQL only | ✅ CRUD helpers included |
| Hooks/Events | ❌ Not available | ✅ Observable queries |
| Migrations | ❌ Manual tracking | ✅ Built-in versioning |
| Auto-reconnect | ❌ Manual handling | ✅ Automatic retry logic |
| Learning Curve | Medium | Low |
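To make the first table row concrete, the sketch below contrasts a typical manual-locking pattern around Python's built-in sqlite3 module with the same insert routed through sqlite-worker (the example table is assumed to exist):

```python
# Sketch: manual locking with sqlite3 vs. built-in thread safety.
import sqlite3
import threading

from sqlite_worker import SqliteWorker

# Standard sqlite3: a shared connection must be guarded by a lock.
conn = sqlite3.connect("standard.db", check_same_thread=False)
lock = threading.Lock()

def insert_with_lock(name):
    with lock:  # every caller has to remember to take the lock
        conn.execute("INSERT INTO example (name) VALUES (?)", (name,))
        conn.commit()

# sqlite-worker: queries are queued and serialized, so no lock is needed.
worker = SqliteWorker("worker.db")

def insert_with_worker(name):
    worker.execute("INSERT INTO example (name) VALUES (?)", (name,))
```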
Performance Comparison:
Because queries are serialized through the worker's queue, sqlite-worker maintains consistent performance across concurrent operations while preserving thread safety.
Create a basic SqliteWorker instance by specifying the path to your SQLite database file:
```python
from sqlite_worker import SqliteWorker

worker = SqliteWorker("/path/to/your/database.db")
```

Create a SqliteWorker instance with initialization actions (such as setting pragmas):
```python
from sqlite_worker import SqliteWorker

init_actions = [
    "PRAGMA journal_mode=WAL;",
    "PRAGMA synchronous=NORMAL;",
    "PRAGMA temp_store=MEMORY;"
]

worker = SqliteWorker("/path/to/your/database.db", execute_init=init_actions)
```

Create a SqliteWorker instance with initialization actions and set a maximum query count for regular commits:
```python
from sqlite_worker import SqliteWorker

init_actions = [
    "PRAGMA journal_mode=WAL;",
    "PRAGMA synchronous=NORMAL;",
    "PRAGMA temp_store=MEMORY;"
]

worker = SqliteWorker("/path/to/your/database.db", execute_init=init_actions, max_count=50)
```

Enable auto-reconnection with custom retry settings:
```python
worker = SqliteWorker(
    "/path/to/your/database.db",
    auto_reconnect=True,
    max_retries=5,
    retry_delay=2.0
)
```

Create a table:

```python
worker.execute("CREATE TABLE example (id INTEGER PRIMARY KEY, name TEXT)")
```

Insert data:

```python
worker.execute("INSERT INTO example (name) VALUES (?)", ("Alice",))
```

Run a query and fetch its results using the returned token:

```python
token = worker.execute("SELECT * FROM example")
results = worker.fetch_results(token)
print(results)
```

The worker provides high-level methods for common database operations:
Insert a row:

```python
token = worker.insert('users', {
    'name': 'Alice',
    'email': '[email protected]',
    'age': 30
})
worker.fetch_results(token)
```

Update rows:

```python
token = worker.update('users',
    {'age': 31},       # New values
    {'name': 'Alice'}  # Conditions
)
worker.fetch_results(token)
```

Delete rows:

```python
token = worker.delete('users', {'name': 'Alice'})
worker.fetch_results(token)
```

Select rows:
```python
# Select all columns
token = worker.select('users')
results = worker.fetch_results(token)

# Select specific columns with conditions
token = worker.select('users',
    columns=['name', 'email'],
    conditions={'age': 30},
    order_by='name ASC',
    limit=10
)
results = worker.fetch_results(token)
```

Group multiple statements in a manual transaction:

```python
worker.begin_transaction()
try:
    worker.execute("INSERT INTO accounts (balance) VALUES (?)", (100,))
    worker.execute("INSERT INTO accounts (balance) VALUES (?)", (200,))
    worker.commit_transaction()
except Exception:
    worker.rollback_transaction()
    raise
```

Or use the transaction context manager:

```python
with worker.transaction():
worker.execute("INSERT INTO accounts (balance) VALUES (?)", (100,))
worker.execute("INSERT INTO accounts (balance) VALUES (?)", (200,))
# Automatically commits on success, rolls back on exceptionRegister callbacks to be notified when queries are executed:
```python
def on_insert(query, values):
    print(f"Insert executed: {query} with {values}")

def on_query(query, values):
    print(f"Query executed: {query}")

# Register hooks
worker.register_hook('on_insert', on_insert)
worker.register_hook('on_query', on_query)

# Hooks will be triggered when queries execute
worker.execute("INSERT INTO users (name) VALUES (?)", ("Bob",))

# Unregister when done
worker.unregister_hook('on_insert', on_insert)
```

Available hook types:
- `on_query` - Triggered for all queries
- `on_insert` - Triggered for INSERT queries
- `on_update` - Triggered for UPDATE queries
- `on_delete` - Triggered for DELETE queries
- `on_select` - Triggered for SELECT queries
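Since hook callbacks receive the executed query and its values, a single function can also be registered for several hook types. The short sketch below (assuming the `worker` instance from the example above) uses one callback as a simple audit trail:

```python
# Sketch: attach one audit callback to several hook types.
def audit(query, values):
    print(f"[audit] {query} -- values: {values}")

for hook_type in ("on_insert", "on_update", "on_delete"):
    worker.register_hook(hook_type, audit)
```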
migration_sql = """
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL
);
CREATE INDEX idx_products_name ON products(name);
"""
# Apply migration
success = worker.apply_migration('001', 'create_products_table', migration_sql)
if success:
print("Migration applied successfully")rollback_sql = """
DROP INDEX idx_products_name;
DROP TABLE products;
"""
success = worker.rollback_migration('001', rollback_sql)
if success:
print("Migration rolled back successfully")migrations = worker.get_applied_migrations()
for version, name, applied_at in migrations:
print(f"Version {version}: {name} (applied at {applied_at})")After completing all database operations, close the worker to ensure proper cleanup:
```python
worker.close()
```

Explore comprehensive examples in the examples/ directory:
- FastAPI Integration - Build REST APIs with thread-safe database operations (see the sketch after this list)
- Batch Processing - Handle high-volume transactions efficiently
- Task Queue System - Implement distributed task scheduling
- Multi-threaded Logging - High-performance concurrent logging
- E-commerce Platform - Complete shopping cart and order system
- Query Optimization - Advanced performance tuning techniques
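As a flavor of the FastAPI integration, here is a minimal, illustrative sketch (routes and schema are illustrative, not taken from the bundled example) of sharing one worker across request-handling threads:

```python
# Illustrative sketch only (not the bundled example): one SqliteWorker
# shared by all request-handling threads of a FastAPI app.
from contextlib import asynccontextmanager

from fastapi import FastAPI
from sqlite_worker import SqliteWorker

worker = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global worker
    worker = SqliteWorker("app.db", execute_init=["PRAGMA journal_mode=WAL;"])
    worker.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    yield
    worker.close()

app = FastAPI(lifespan=lifespan)

@app.post("/users")
def create_user(name: str):
    token = worker.insert("users", {"name": name})
    worker.fetch_results(token)  # wait for the insert to be processed
    return {"status": "created"}

@app.get("/users")
def list_users():
    token = worker.select("users")
    return {"users": worker.fetch_results(token)}
```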
See examples/framework_integrations/ for:
- Flask - Simple blog application
- Django - Integration patterns with Django ORM
- Streamlit - Interactive analytics dashboard
Jump-start your project with ready-to-use templates in templates/:
- FastAPI Starter - Complete REST API template
- Jupyter Notebook - Data analysis template
- CLI Tool - Command-line interface template
- Documentation: Check the examples and README
- Discussions: Ask questions in GitHub Discussions
- Bug Reports: Submit issues using our bug report template
- Feature Requests: Suggest improvements using our feature request template
We welcome contributions! Here's how you can help:
- Report Bugs - Found an issue? Report it
- Suggest Features - Have an idea? Share it
- Improve Documentation - Help others understand better
- Submit Pull Requests - Fix bugs or add features
- Star the Project - Show your support!
New to the project? Look for issues labeled good first issue - these are perfect for first-time contributors!
Please read our Contributing Guide before submitting pull requests.
- GitHub Discussions: Share your use cases and get help
- Issue Tracker: Report bugs and request features
- Pull Requests: Contribute code improvements
Thank you to all our contributors!
This project is licensed under the MIT License - see the LICENSE file for details.
Special thanks to Johannes Ahlmann for the valuable suggestions on initialization actions and regular commits.
