Db2 - a simple, local, in-memory database

build

  • server - make server
  • cli - make cli

run

server

./bin/db2.out

client

./bin/cli.out [command] {arg1 .. argk}

commands

key/value

  • get [key]
  • set [key] [value]
  • delete [key]
  • filter []

timeseries

  • create [name]
  • add [series_descriptor] [value]
  • get_range [start] [end]
  • start [series_descriptor]
  • end [series_descriptor]

Things to Tackle

  • memory - currently mempool is a wrapper around malloc and free.
    • consider kv memory, timeseries, and the future (tables, etc.)
    • the idea is to avoid syscalls on the allocation hot path
  • query on kv -
    • how to store and handle filter results?
    • how to provide ease of use and extensibility without too much work?
    • piped queries? meaning, support something like - filter key startswith 'user' && value.email !has '@'
    • so that the second part only looks at the results of the first, not the whole kv
    • when value is an object, how to access its fields for querying?
  • cache
    • what it takes to turn kv into a cache
  • config
    • only what must be decided before build
    • total memory
    • max kv entries (optional, if you don't intend to use kv)
    • limit kv key / value size
    • max number of time-series
    • max entries per time-series
    • if and how to compress time-series
    • cache default ttl
  • management cli -
    • add / remove services like - ./db2_mgmt start kv
    • check status for services, memory etc.
    • flag / block connections
    • add / remove nodes (far future)
  • time-series compression
    • ask Mr. GPT - "what are some popular strategies to compress a time-series?"
    • implement 1 or 2, make clear interface to add more
    • how to store and access these compressed series?
  • ingest in bulk
  • support http/s connections
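One way to approach the memory item above: a fixed-block pool that takes all of its memory up front (here a static arena, so the hot path never calls malloc/free or any syscall) and serves allocations from an intrusive free list. A sketch only - the names (pool_t, pool_alloc, ...) are illustrative and not the actual db2_mempool API:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical fixed-block pool, not db2's real layout:
 * one static arena carved into equal blocks, each free block's
 * first bytes holding a pointer to the next free block. */
#define POOL_BLOCK_SIZE  64
#define POOL_BLOCK_COUNT 1024

typedef struct pool {
    unsigned char arena[POOL_BLOCK_SIZE * POOL_BLOCK_COUNT];
    void *free_head; /* first free block, or NULL when exhausted */
} pool_t;

static void pool_init(pool_t *p) {
    p->free_head = p->arena;
    /* Thread every block's first word to point at the next block. */
    for (size_t i = 0; i + 1 < POOL_BLOCK_COUNT; i++) {
        void *next = p->arena + (i + 1) * POOL_BLOCK_SIZE;
        memcpy(p->arena + i * POOL_BLOCK_SIZE, &next, sizeof next);
    }
    void *null = NULL;
    memcpy(p->arena + (POOL_BLOCK_COUNT - 1) * POOL_BLOCK_SIZE,
           &null, sizeof null);
}

/* Pop a block off the free list; NULL when the pool is exhausted. */
static void *pool_alloc(pool_t *p) {
    void *block = p->free_head;
    if (block)
        memcpy(&p->free_head, block, sizeof p->free_head);
    return block;
}

/* Push a block back; freed blocks are reused LIFO. */
static void pool_free(pool_t *p, void *block) {
    memcpy(block, &p->free_head, sizeof p->free_head);
    p->free_head = block;
}
```

A real version would need per-size pools (kv entries vs. time-series chunks) and some thread-safety story, but the core point stands: after init, allocation is a pointer swap.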
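The piped-query idea - each filter stage scanning only the survivors of the previous stage rather than the whole kv - could be sketched as a chain of predicates over a result buffer. kv_entry_t, kv_filter, and both predicates are hypothetical names for illustration, not db2's types:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical kv entry: string key and string value. */
typedef struct { const char *key; const char *value; } kv_entry_t;
typedef bool (*kv_pred_t)(const kv_entry_t *);

/* Copy the entries of in[0..n) that satisfy pred into out;
 * returns the number of survivors. Piping 'filter a && b' is then
 * just kv_filter over the output of the previous kv_filter. */
static size_t kv_filter(const kv_entry_t *in, size_t n,
                        kv_pred_t pred, kv_entry_t *out) {
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (pred(&in[i]))
            out[m++] = in[i];
    return m;
}

/* Stage 1: key startswith 'user' */
static bool key_starts_with_user(const kv_entry_t *e) {
    return strncmp(e->key, "user", 4) == 0;
}

/* Stage 2: value !has '@' */
static bool value_lacks_at(const kv_entry_t *e) {
    return strchr(e->value, '@') == NULL;
}
```

Structured values (the "value.email" case) would need the predicate to parse or navigate the value object, but the pipeline shape does not change: stage two only ever sees stage one's output.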
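On time-series compression, one popular strategy is delta encoding: store the first sample verbatim, then only the difference from the previous sample. For slowly-changing series the deltas stay small and compress well under a follow-up varint or zig-zag pass. A minimal sketch, independent of db2's storage format:

```c
#include <stddef.h>
#include <stdint.h>

/* Encode in[0..n) as first-value-then-deltas into out[0..n). */
static void delta_encode(const int64_t *in, int64_t *out, size_t n) {
    int64_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        out[i] = in[i] - prev;
        prev = in[i];
    }
}

/* Exact inverse: running sum of the deltas recovers the samples. */
static void delta_decode(const int64_t *in, int64_t *out, size_t n) {
    int64_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        prev += in[i];
        out[i] = prev;
    }
}
```

A clean interface per the bullet above could be a pair of encode/decode function pointers per strategy, so adding a second scheme (e.g. delta-of-delta for timestamps) is a table entry rather than a code change.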