oh, interesting idea, but no, our approach has been to add aggregate
calculators to each transaction like
- with(move things around)
- with(recalculate relevant aggregates)
- squash everything
- transact it (this includes redoing everything if any step fails, etc.)
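For what it's worth, a minimal sketch of that pipeline against Datomic's API might look like the following (recalc-aggregates is a hypothetical helper returning aggregate transaction data, not our actual code):

```clojure
(require '[datomic.api :as d])

;; Sketch only: move-tx-data is the domain change ("move things around"),
;; recalc-aggregates is a hypothetical function that computes aggregate
;; datoms from a db value.
(defn squash-and-transact [conn move-tx-data]
  (let [db              (d/db conn)
        ;; apply the domain changes speculatively
        {db' :db-after} (d/with db move-tx-data)
        ;; recompute the relevant aggregates against the speculative value
        agg-tx-data     (recalc-aggregates db')]
    ;; "squash": submit domain datoms and aggregate datoms as one real transaction
    @(d/transact conn (concat move-tx-data agg-tx-data))))
```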
This is a somewhat different approach than the one Tim Ewald promotes; his
approach (an explicit "ok, latest transaction" marker) is probably better
for most applications - it's invaluable to be able to trace the path
leading to the current result (failed transactions are simply ignored in
the queries - really clever!)
/Linus
On 19 May 2017 at 10:40, "Val Waeselynck" <***@gmail.com> wrote:
Thanks Linus,
Just so I understand, the strategy here is to add additional datoms to the
db value using db.with() just before each read?
If so, what determines which of these transactions will be db.with'ed - is
it static analysis of the pull pattern?
Val
Post by Linus Ericsson
Hello Val,
we made a non-complete implementation of a pull-expression "filter" that
picks out all the datoms a pull expression touches by walking the pattern
from a given top node.
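To illustrate the idea (a toy sketch only, with datoms as plain [e a v] triples in a set rather than a real Datomic db):

```clojure
;; Walks a pull-like pattern from a top node, collecting every datom the
;; pattern touches. Nested map specs ({:attr subpattern}) recurse through
;; reference values.
(defn datoms-for-pattern [datoms eid pattern]
  (reduce
   (fn [acc term]
     (if (map? term)                       ; nested spec: {:attr subpattern}
       (reduce-kv
        (fn [acc attr subpattern]
          (let [hits (filter (fn [[e a _]] (and (= e eid) (= a attr))) datoms)]
            (into (into acc hits)
                  (mapcat #(datoms-for-pattern datoms (nth % 2) subpattern)
                          hits))))
        acc term)
       ;; plain attribute
       (into acc (filter (fn [[e a _]] (and (= e eid) (= a term))) datoms))))
   #{} pattern))

(def datoms #{[1 :user/name "Val"] [1 :user/friend 2] [2 :user/name "Linus"]})
(datoms-for-pattern datoms 1 [:user/name {:user/friend [:user/name]}])
;; => #{[1 :user/name "Val"] [1 :user/friend 2] [2 :user/name "Linus"]}
```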
We use that to feed a client-side DataScript database from Datomic. This
actually works as a sort of ACL for our web frontend, but it puts some
demands on the data model; for instance, we avoid relying on the :db/id
index because we are afraid of collisions and other issues, though I don't
know if we really should be so worried about it. Anyway.
The diffing is currently just a clojure.data/diff of the current and
previous sets of datoms, which are sent as transaction data over the
websocket wire to the client-side database. Works quite well, although it
is not as general as one would like.
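For readers following along, the diff-to-transaction-data step might be sketched like this (datoms again as plain triples; :db/retract and :db/add are the standard Datomic/DataScript transaction forms):

```clojure
(require '[clojure.data :as data])

;; data/diff on two sets returns [only-in-old only-in-new in-both];
;; retract what disappeared, add what appeared.
(defn datoms->tx-data [old-datoms new-datoms]
  (let [[removed added _common] (data/diff old-datoms new-datoms)]
    (concat (map (fn [[e a v]] [:db/retract e a v]) removed)
            (map (fn [[e a v]] [:db/add e a v]) added))))

(datoms->tx-data #{[1 :user/name "Val"] [1 :user/age 30]}
                 #{[1 :user/name "Val"] [1 :user/age 31]})
;; => ([:db/retract 1 :user/age 30] [:db/add 1 :user/age 31])
```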
I have thought quite a bit about your idea with derived attributes and
getters, but we instead created a logic where we can "squash" several
with-transactions, which makes it possible to keep aggregates in sync with
the transactions (but it depends on us not having CAS-like conflicts on the
frequently updated attributes). Explicit aggregates have the nice property
of very predictable query performance, but could of course be too heavy for
many use cases.
/Linus
Post by Val Waeselynck
I'm about to start coding a library which reimplements the Entity and
Pull APIs to support derived data (derived attributes & getters).
Before I dive in, is anyone working on this already?
Cheers,
Val
--
You received this message because you are subscribed to the Google Groups
"Datomic" group.
To unsubscribe from this group and stop receiving emails from it, send an
For more options, visit https://groups.google.com/d/optout.