
EdgeDB

Bringing state-of-the-art database technology to the masses.


February 20, 2020 at 6:22am
Hi! I have a quick question about cross-constraints: are they possible and how? :)
https://edgedb.com/docs/edgeql/ddl/constraints#create-constraint mentions "Currently EdgeDB only supports constraint expressions on scalar types and properties." so I'd guess not

February 20, 2020 at 1:40pm
Maybe a function inside a constraint? I don't know if that works (yet) or how to declare it in a schematically valid way.
There's "on expression" for a more flexible constraint. But I'm not sure if we can access other properties (so, object-type level) or not (so, property level).
Do you know anything about the constraints question above? And is it possible to pull next values from a sequence object? I couldn't find anything about that in the docs.

February 21, 2020 at 8:18pm
Constraints across multiple links in an object are not supported. Support for constraints across multiple singleton properties in an object type will be added soon (https://github.com/edgedb/edgedb/issues/1164).
And is it possible to pull next values from a sequence object? I couldn't find anything about that in the docs.
Sequences are incremented implicitly when you're inserting an object with a sequence property. Fetching next values directly is not implemented yet, but should probably be a straightforward addition.

February 25, 2020 at 10:04am
Thank you for your answers :-). I have (again) another question :-). I have to add a few million items into a database (but I'm testing with just 60k of them for now). I use the Python client. Should I use the FOR statement (https://edgedb.com/docs/edgeql/statements/for) and do everything at once? Or 60k separate inserts (with edgedb.create_async_pool())? Or somewhere in between (batches)? Or is there another solution that I have missed? Thanks in advance!
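Weighing the three options (one big FOR, 60k single inserts, or batches): a common middle ground is to chunk the data client-side and issue one insert query per chunk. A minimal, server-free sketch of the chunking part in Python; the batch size of 1000 and the `(timestamp, value)` row shape are illustrative assumptions, not EdgeDB guidance:

```python
def batches(rows, size=1000):
    """Yield successive chunks of `rows`, each of at most `size` items."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# 2500 example rows in the (timestamp, value) shape used below
rows = [("2014-10-31T00:02:41+00", 9145.5)] * 2500
chunks = list(batches(rows, size=1000))
# chunks have sizes 1000, 1000, 500
```

Each chunk would then be sent as one insert query, trading memory use against per-query overhead.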
module default {
    type Index {
        required property name -> str {
            constraint exclusive;
        }
    }
    type History {
        required link index -> Index;
        required property timestamp -> datetime;
        required property value -> float64;
    }
}
and...
INSERT Index {
    name := 'idx1'
};
FOR x IN {
    (timestamp := <datetime>'2014-10-31T00:02:41+00', value := 9145.5),
    (timestamp := <datetime>'2014-10-31T00:02:53+00', value := 9145.5),
    (timestamp := <datetime>'2014-10-31T00:02:59+00', value := 9146.6),
}
UNION (INSERT History {
    index := (SELECT Index FILTER Index.name = 'idx1'),
    timestamp := x.timestamp,
    value := x.value
});
There's also the issue that I'm doing nested queries, which seems pretty bad. I'm not sure how I can avoid that.
Okay, I think I found a way to remove the nested query...
WITH idx1 := (SELECT Index FILTER Index.name = 'idx1')
FOR x IN {
    (timestamp := <datetime>'2014-10-31T00:02:41+00', value := 9145.5),
    (timestamp := <datetime>'2014-10-31T00:02:53+00', value := 9145.5),
    (timestamp := <datetime>'2014-10-31T00:02:59+00', value := 9146.6),
}
UNION (INSERT History {
    index := idx1,
    timestamp := x.timestamp,
    value := x.value
});
And it's probably more efficient to use plain tuples instead of named tuples (since the named tuple type isn't declared beforehand and gets recreated every time)
(timestamp := <datetime>'2014-10-31T00:02:41+00', value := 9145.5),
--...
timestamp := x.timestamp,
value := x.value
becomes
(<datetime>'2014-10-31T00:02:41+00', 9145.5),
--...
timestamp := x.0,
value := x.1

February 25, 2020 at 6:50pm
The FOR statement is probably the most efficient way if you can spare the RAM to buffer the whole dataset (2x if you run the client and the server on the same host). Use array_unpack in the FOR to unpack an array of tuples.
And it's probably more efficient to use plain tuples instead of named tuples (since the named tuple type isn't declared beforehand and gets recreated every time)
There is no difference. EdgeQL queries, like SQL queries, are declarative, so all type derivation and optimization happens once, when the query is compiled/prepared.
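The FOR + array_unpack advice above can be sketched from the Python client side: buffer the rows, convert them into a single array-of-tuples argument, and submit one query. The `$rows` parameter name and the query text are illustrative assumptions that mirror the snippets above, and actually executing the query requires a running EdgeDB server; only the argument preparation runs standalone here:

```python
import datetime

# Sketch of a single bulk-insert query using FOR + array_unpack.
# The $rows parameter, tuple layout, and schema names mirror the
# snippets above; this string needs a live server to actually run.
BULK_INSERT = """
WITH raw := <array<tuple<datetime, float64>>>$rows
FOR x IN {array_unpack(raw)}
UNION (INSERT History {
    index := (SELECT Index FILTER Index.name = 'idx1'),
    timestamp := x.0,
    value := x.1
});
"""

def to_args(rows):
    """Convert (iso_timestamp, value) pairs into the list of tuples
    passed as the $rows query argument."""
    return [(datetime.datetime.fromisoformat(ts), float(v))
            for ts, v in rows]

args = to_args([("2014-10-31T00:02:41+00:00", 9145.5),
                ("2014-10-31T00:02:53+00:00", 9145.5)])
```

This way the whole dataset (or each batch of it) travels as one query argument instead of one query per row.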

March 25, 2020 at 9:06pm
Hi, I have a question about how implicit limiting works in the repl. The repl seems to send the implicit limit along with the query message in the headers to the server, but the server only appears to apply the limit to the top-level result set, and the inner sets are truncated in the repl. I was assuming that the server would do all the limiting. Is this correct and intentional, or is it a bug?

March 27, 2020 at 1:02am
Implicit limits are injected in all SELECT statements (including implicit ones in the nested sets), but only if there's no explicit limit. Are you seeing a different behavior?
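To make the described semantics concrete, here is a small Python model of that truncation, with plain lists and dicts standing in for real result sets (an illustration of the stated behavior, not the server's implementation): the same limit is applied to the top-level set and to every nested set.

```python
def apply_implicit_limit(value, limit):
    """Recursively truncate every list in `value` to at most `limit`
    items, modeling an implicit limit applied at every nesting level."""
    if isinstance(value, list):
        return [apply_implicit_limit(v, limit) for v in value[:limit]]
    if isinstance(value, dict):
        return {k: apply_implicit_limit(v, limit) for k, v in value.items()}
    return value

data = [{"name": "a", "tags": ["x", "y", "z"]},
        {"name": "b", "tags": ["p", "q", "r"]},
        {"name": "c", "tags": []}]
limited = apply_implicit_limit(data, 2)
# both the outer set and each nested `tags` set are cut to 2 items
```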

March 27, 2020 at 1:01pm
Yes. I was trying to implement implicit limits in the JS driver for a GUI client I'm making, and noticed it was only limiting the top-level results, whereas the cli repl limits the nested sets as well. But checking with Wireshark, it seems the server sends all the data for the nested sets, and it is truncated by the repl renderer. So I wasn't sure if I was misunderstanding how implicit limiting worked: maybe the server was meant to limit only the top-level set, and the nested-set limiting was just being done in the repl for display reasons. But since you're saying the server should be limiting all the sets, I think there is a bug somewhere.
The repl actually asks for limit + 1 entries and truncates the last one. This disambiguates the case where you have exactly limit entries from the case where you have more than limit entries. We have a test for this, but if you're seeing different behavior on a certain query, please open an issue!
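The limit + 1 trick described above can be sketched in Python: request one extra row, display at most `limit`, and use the extra row only to decide whether a "more results" marker is needed. `fetch` here is a hypothetical callable standing in for the driver:

```python
def display_page(fetch, limit):
    """`fetch(n)` returns up to n rows. Ask for limit + 1 so that
    "exactly limit rows" and "more than limit rows" are
    distinguishable; return (rows_to_show, has_more)."""
    rows = fetch(limit + 1)
    return rows[:limit], len(rows) > limit

rows = list(range(10))
shown_all, has_more_all = display_page(lambda n: rows[:n], 10)
# exactly 10 rows with limit 10 -> all shown, no truncation marker
shown_few, has_more_few = display_page(lambda n: rows[:n], 5)
# limit 5 -> 5 shown, truncation marker needed
```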
It returns the whole set if you remove the ORDER BY .n on line 2551
Ha. That's a bug. Luckily, the fix is rather simple: #1311. Thanks for noticing this!