Metadata-Version: 2.1
Name: borgstore
Version: 0.2.0
Summary: key/value store
Author-email: Thomas Waldmann <tw@waldmann-edv.de>
License: BSD
Project-URL: Homepage, https://github.com/borgbackup/borgstore
Keywords: kv,key/value,store
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: POSIX
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/x-rst
License-File: LICENSE.rst
Requires-Dist: requests>=2.25.1
Provides-Extra: sftp
Requires-Dist: paramiko>=1.9.1; extra == "sftp"
Provides-Extra: s3
Requires-Dist: boto3; extra == "s3"
Provides-Extra: none

BorgStore
=========

A key/value store implementation in Python, supporting multiple backends.

Keys
----

A key (str) can look like:

- 0123456789abcdef...  (usually a long, hex-encoded hash value)
- Any other pure ASCII string without "/" or ".." or " ".
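
These rules can be checked with a small helper. This is an illustrative
sketch; ``valid_key`` is a hypothetical name, not part of the borgstore API:

```python
def valid_key(key: str) -> bool:
    """Check the key rules described above:
    pure ASCII, containing no "/", no ".." and no " "."""
    return key.isascii() and "/" not in key and ".." not in key and " " not in key
```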


Namespaces
----------

To keep different kinds of items apart, keys should be prefixed with a namespace, like:

- config/settings
- meta/0123456789abcdef...
- data/0123456789abcdef...

Please note:

1. You should always use namespaces.
2. Nested namespaces like namespace1/namespace2/key are not supported.
3. The code could work without a namespace (namespace ""), but then you
   cannot add another namespace later, because doing so would create nested
   namespaces.

Values
------

Values can be any arbitrary binary data (bytes).

Store Operations
----------------

The high-level Store API implementation transparently deals with nesting and
soft deletion, so the caller doesn't have to care much about that and the
Backend API can be much simpler:

- create/destroy: initialize or remove the whole store.
- list: flat list of the items in the given namespace (by default only items
  that are not soft deleted; optionally, only soft deleted items).
- store: write a new item into the store (given its key/value pair).
- load: read a value from the store (given its key); partial loads giving an
  offset and/or size are supported.
- info: get information about an item via its key (exists? size? ...).
- delete: immediately remove an item from the store (given its key).
- move: implements rename, soft delete / undelete, and moving to the current
  nesting level.
- stats: API call counters, time spent in API methods, data volume/throughput.
- latency/bandwidth emulator: can emulate higher latency (via BORGSTORE_LATENCY
  [us]) and lower bandwidth (via BORGSTORE_BANDWIDTH [bit/s]) than the backend
  actually provides.
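
The semantics of these operations can be sketched with a tiny in-memory model.
All names and signatures below are assumptions for illustration only, not the
real borgstore API:

```python
from typing import Optional


class ToyStore:
    """Illustrates store/load/list/delete/move semantics (no nesting,
    no soft deletion, no persistence -- the real Store adds all that)."""

    def __init__(self) -> None:
        self._items = {}  # maps full item name -> value (bytes)

    def store(self, name: str, value: bytes) -> None:
        self._items[name] = value

    def load(self, name: str, *, offset: int = 0, size: Optional[int] = None) -> bytes:
        # partial loads: return only the requested slice of the value
        value = self._items[name]
        end = len(value) if size is None else offset + size
        return value[offset:end]

    def list(self, namespace: str):
        # flat list of the keys in the given namespace
        prefix = namespace + "/"
        return sorted(n[len(prefix):] for n in self._items if n.startswith(prefix))

    def delete(self, name: str) -> None:
        del self._items[name]

    def move(self, name: str, new_name: str) -> None:
        self._items[new_name] = self._items.pop(name)
```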

Automatic Nesting
-----------------

For the Store user, items have names like e.g.:

- namespace/0123456789abcdef...
- namespace/abcdef0123456789...

If there are very many items in the namespace, this could lead to scalability
issues in the backend, thus the Store implementation offers transparent
nesting, so that internally the Backend API will be called with
names like e.g.:

- namespace/01/23/45/0123456789abcdef...
- namespace/ab/cd/ef/abcdef0123456789...

The nesting depth can be configured from 0 (= no nesting) to N levels and
there can be different nesting configurations depending on the namespace.

The Store supports operating at different nesting levels in the same
namespace at the same time.

When using a nesting depth > 0, the backends will assume that keys are hashes
(consist of hex digits), because some backends pre-create the nesting
directories at backend initialization time to optimize for better performance
while using the backend.
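
The name transformation for transparent nesting can be sketched like this
(illustrative only; the real implementation lives inside the Store):

```python
def nested_name(name: str, levels: int) -> str:
    """Insert `levels` directory levels of two hex digits each, taken from
    the start of the key part (assumed to be a hex-encoded hash)."""
    namespace, key = name.split("/", 1)
    parts = [key[2 * i:2 * i + 2] for i in range(levels)]
    return "/".join([namespace, *parts, key])
```

For example, with 3 levels, ``namespace/0123456789abcdef`` becomes
``namespace/01/23/45/0123456789abcdef``.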

Soft deletion
-------------

To soft delete an item (so that its value can still be read, or the item can
be undeleted), the store just renames the item, appending ".del" to its name.

Undelete reverses this by removing the ".del" suffix from the name.

Some store operations have a boolean flag "deleted" to choose whether they
shall consider soft deleted items.
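
The rename convention can be expressed as two tiny helpers (an illustrative
sketch with hypothetical names, not the borgstore API):

```python
def soft_delete(name: str) -> str:
    """Soft deletion just appends ".del" to the item name."""
    return name + ".del"


def undelete(name: str) -> str:
    """Undelete removes the ".del" suffix again."""
    if not name.endswith(".del"):
        raise ValueError("not a soft-deleted item: " + name)
    return name[: -len(".del")]
```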

Backends
--------

The backend API is rather simple, one only needs to provide some very
basic operations.

Existing backends are listed below; more might come in the future.

posixfs
~~~~~~~

Use storage on a local POSIX filesystem:

- URL: ``file:///absolute/path``
- it is the caller's task to create an absolute fs path from a relative one.
- namespaces: directories
- values: in key-named files

sftp
~~~~

Use storage on an SFTP server:

- URL: ``sftp://user@server:port/relative/path`` (strongly recommended)

  For users' and admins' convenience, mapping the URL path to the server fs path
  depends on the server configuration (home directory, sshd/sftpd config, ...).
  Usually the path is relative to the user's home directory.
- URL: ``sftp://user@server:port//absolute/path``

  As this uses an absolute path, things are more difficult here:

  - a user's config might break if the server admin moves the user's home
    directory to a new location.
  - users must know the full absolute path of the space they have permission
    to use.
- namespaces: directories
- values: in key-named files

rclone
~~~~~~

Use storage on any of the many cloud providers `rclone <https://rclone.org/>`_ supports:

- URL: ``rclone:remote:path``; we just add the ``rclone:`` prefix and pass
  everything to the right of it to rclone, see:
  https://rclone.org/docs/#syntax-of-remote-paths
- the implementation primarily depends on the specific remote.
- the rclone binary path can be set via the environment variable RCLONE_BINARY
  (default value: "rclone")


s3
~~

Use storage on an S3-compliant cloud service:

- URL: ``(s3|b2):[profile|(access_key_id:access_key_secret)@][schema://hostname[:port]]/bucket/path``

  The underlying backend is based on ``boto3``, so all standard boto3 authentication methods are supported:
  
  - provide a named profile (from your boto3 config),
  - include access key ID and secret in the URL,
  - or use default credentials (e.g., environment variables, IAM roles, etc.).

  See the `boto3 credentials documentation <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html>`_ for more details.

  If you're connecting to **AWS S3**, the ``[schema://hostname[:port]]`` part is optional.
  Bucket and path are always required.

  .. note::

     There is a known issue with some S3-compatible services (e.g., **Backblaze B2**).
     If you encounter problems, try using ``b2:`` instead of ``s3:`` in the URL.

- namespaces: directories
- values: in key-named files


Scalability
-----------

- Count of key/value pairs stored in a namespace: automatic nesting is
  provided for keys to address common scalability issues.
- Key size: there are no special provisions for extremely long keys (like:
  more than backend limitations). Usually this is not a problem though.
- Value size: there are no special provisions for dealing with large value
  sizes (like: more than free memory, more than backend storage limitations,
  etc.). If one deals with very large values, one usually cuts them into
  chunks before storing them into the store.
- Partial loads improve performance by avoiding a full load if only a part
  of the value is needed (e.g. a header with metadata).
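
Cutting a large value into chunks before storing it, as suggested above, might
look like this (an illustrative sketch; chunking is the caller's job, not
borgstore's):

```python
def chunked(value: bytes, chunk_size: int):
    """Split a large value into fixed-size chunks for separate storage."""
    return [value[i:i + chunk_size] for i in range(0, len(value), chunk_size)]
```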

Installation
------------

Install without the ``sftp:`` or ``s3:`` backend::

    pip install borgstore
    pip install "borgstore[none]"  # same thing (simplifies automation)

Install with the ``sftp:`` backend (more dependencies)::

   pip install "borgstore[sftp]"

Install with the ``s3:`` backend (more dependencies)::

   pip install "borgstore[s3]"

Please note that ``rclone:`` also supports sftp or s3 remotes.

Want a demo?
------------

Run this to get instructions on how to run the demo::

    python3 -m borgstore

State of this project
---------------------

**API is still unstable and expected to change as development goes on.**

**As long as the API is unstable, there will be no data migration tools,
like e.g. for upgrading an existing store's data to a new release.**

There are tests, and they pass for the basic functionality, so the core of
the library already works well.

There might be missing features or optimization potential, feedback welcome!

There are a lot of possible, but still missing backends. If you want to create
and support one: pull requests are welcome.

Borg?
-----

Please note that this code is currently **not** used by the stable release of
BorgBackup (aka "borg"), but only by borg2 beta 10+ and master branch.

License
-------

BSD license.

