Share your thoughts about how to improve Neptune.
Hi, this is Kamil (Data Scientist and Product Owner). Feel free to share your thoughts about how we can improve Neptune. Ideas, feature requests, missing documentation pieces - all feedback is welcome :)
Server connection lost and missing parameters
Recently I've been using Neptune quite a lot and I noticed some things that could definitely be improved: • the server connection is lost quite often, and the Python client then tries to reconnect to the Neptune server, exponentially prolonging the reconnection delay after each failed attempt and…
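For context, the reconnect behavior described above resembles a standard exponential-backoff loop. A minimal sketch of that pattern (generic Python, not Neptune's actual client code; the `connect` callable, delay values, and retry cap are illustrative assumptions):

```python
import random
import time

def reconnect_with_backoff(connect, max_retries=6, base_delay=1.0, cap=60.0):
    """Retry `connect` with exponentially growing, jittered delays.

    `connect` is any callable that raises ConnectionError on failure;
    the base delay, cap, and retry count are illustrative values.
    """
    for attempt in range(max_retries):
        try:
            return connect()
        except ConnectionError:
            # Delay doubles each attempt: base, 2*base, 4*base, ... capped at `cap`.
            delay = min(cap, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 0.1))  # small jitter
    raise ConnectionError(f"gave up after {max_retries} attempts")
```

The complaint in the post is essentially about how fast this delay grows and how long the client waits before giving up, so exposing those knobs (base delay, cap, retry count) to users would address it.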
Experiment Comparison features
Some features that would be nice to have: 1. When comparing multiple experiments, it would be really useful to be able to select which legend to use. At the moment the default and only legend is the Short ID, which to be honest is uninformative in many cases and makes it…
Some thoughts about possible new features
• Neptune tracks git commits but not the remote repository itself, which would be quite a useful feature (to clone code from a given repo at a given commit) • NQL does not support comparing one column to another (e.g. 'a' > 'b') and does not recognize columns…
Parameters columns missing?
I am pretty sure that something has gone wrong with Parameters. No parameters can now be selected to be shown as columns (in "Manage columns" I see "Parameters: 0"). Everything was fine several days ago. Of course, my experiments have Parameters (I can see them in Details).
Neptune Session and invalid API key
I would like to use Neptune sessions for my experiments. When I do that, I get a message saying that I should set the backend or, for the default one, use Session.with_default_backend(...). But when I update my code with that statement I get two errors: Error 1 and (a related one, as it…
"Latest time" column
A small convenience feature would be a "latest time" column showing the latest y/time value among all metrics (which usually means "epoch" or "iteration"). Of course, I can "simulate" it by adding an artificial "time" metric (if I think of it in advance). This would…
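The requested column could be derived from metric data that is already logged. A hedged sketch of what it would compute, assuming metrics are stored as lists of (step, value) points (this data layout is an assumption for illustration, not Neptune's internal format):

```python
def latest_time(metrics):
    """Return the largest step (epoch/iteration/time) logged across all metrics.

    `metrics` maps metric name -> list of (step, value) points;
    returns None if nothing has been logged yet.
    """
    steps = [step for points in metrics.values() for step, _ in points]
    return max(steps) if steps else None
```

This is the value the poster currently simulates with an artificial "time" metric; computing it server-side would remove the need to plan for it in advance.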
Management of debug runs
When making many successive debug runs on a new ML pipeline, I don't want to log to Neptune each time. Is the proper way to handle this case to set something during debug/dev runs? Or is there a better way to tell Neptune not to log an experiment during a run?
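One generic way to handle this case (a sketch of the pattern only, not Neptune's API; the wrapper class and the debug flag are assumptions) is a thin logger that becomes a no-op during debug/dev runs:

```python
class MaybeLogger:
    """Wraps an experiment-tracking client; drops all logging in debug mode."""

    def __init__(self, client, debug=False):
        self.client = client
        self.debug = debug
        self.dropped = 0  # how many metric calls were skipped in debug mode

    def log_metric(self, name, value):
        if self.debug:
            self.dropped += 1  # debug/dev run: don't send anything upstream
            return
        self.client.log_metric(name, value)  # real run: forward to the client
```

The Neptune client may also offer an offline backend or similar switch for exactly this purpose; check the current client documentation before rolling your own wrapper.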
Cannot compare experiments from different pages
1. I can only show 50 experiments per page (I consider that an unnecessary limitation). 2. I cannot select experiments from different pages for comparison (the previous selection disappears after a page change). 3. (Minor) Upon selecting the first experiment, the entire table moves…