
neptune-community

A place where neptune.ai users and developers come together to make things work


Share your thoughts about how to improve Neptune.

July 17, 2019 at 3:01pm
Hi, Kamil here (Data Scientist and Product Owner). Feel free to share your thoughts about how we can improve Neptune. Ideas, feature requests, missing documentation pieces - all feedback is welcome :)

August 1, 2019 at 1:40pm
I think a lot of people work in an environment where computational resources are shared and accessed through a job system like Slurm. To keep track of experiments in such a system, it is vital to be able to easily continue experiments (say, in case my job was suspended to give way to a higher-priority job). I really like the dashboard on neptune.ml currently, but this could be a deal breaker for me in the future. So better support for continuing experiments - logging stdout, stderr, and system info, and updating the status of resumed experiments - would be a big improvement for me (and I think for many other users as well).
I lost connection for some minutes in the middle of the experiment. Is there any solution to prevent that?
Hi, logging metrics, text, and images is pretty robust to connection interruptions, because it works asynchronously. That means your job failed on some other interaction with Neptune, like set_property or add_tag, or on something unrelated to Neptune. Fortunately, we are adding an improvement to our Python API right now: if logging some info fails (connection issues, for example), Neptune will retry the action for up to 30 minutes. Thanks to this, we will be pretty robust to network interruptions. The improvement goes live around Tuesday, so do not forget to update your neptune-client lib later next week :)
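To make the retry behavior described above concrete, here is a minimal sketch of the retry-with-backoff idea in plain Python. It is an illustration only, not neptune-client's actual implementation; `log_with_retry` and its parameters are made-up names.

```python
import time

def log_with_retry(log_fn, *args, max_wait_seconds=30 * 60, backoff=5.0):
    """Retry a flaky logging call until it succeeds or ~30 minutes pass.

    Illustrative only: not how neptune-client actually implements it.
    """
    deadline = time.monotonic() + max_wait_seconds
    while True:
        try:
            return log_fn(*args)
        except ConnectionError:
            if time.monotonic() + backoff >= deadline:
                raise  # give up after roughly 30 minutes
            time.sleep(backoff)

# Usage: wrap any call that can fail on a network hiccup, e.g.
# log_with_retry(experiment.log_metric, "loss", 0.42)
```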

August 2, 2019 at 4:06pm
Hi Kamil, I am missing a diff for source code or parameters (I have a long dict with many dicts nested inside) between experiments, to quickly see what actually changed. Is this already implemented?

August 4, 2019 at 7:48pm
> I think a lot of people work in an environment where computational resources are shared and accessed through a job system like Slurm. [...]
Hi, this topic is so interesting and so broad that I created another thread for it here:

Neptune in shared environment and computational resources

Let's continue there.
> Hi Kamil, I am missing a diff for source code or parameters between experiments, to quickly see what actually changed. [...]
Hi, I like this one, so I created a new thread for it here:

diff for source code or parameters

Let's continue there.
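As a client-side stopgap in the meantime, such a diff is easy to sketch in plain Python: recursively walk two parameter dicts and collect the key paths whose values differ. `dict_diff` is a hypothetical helper, not part of neptune-client:

```python
def dict_diff(a, b, prefix=""):
    """Return {dotted.key.path: (old, new)} for entries that differ.

    Illustrative helper, not part of neptune-client.
    """
    changes = {}
    for key in sorted(set(a) | set(b), key=str):
        path = f"{prefix}{key}"
        old, new = a.get(key), b.get(key)
        if isinstance(old, dict) and isinstance(new, dict):
            changes.update(dict_diff(old, new, prefix=path + "."))
        elif old != new:
            changes[path] = (old, new)
    return changes

params_run1 = {"optimizer": {"name": "adam", "lr": 0.001}, "epochs": 10}
params_run2 = {"optimizer": {"name": "adam", "lr": 0.01}, "epochs": 10}
print(dict_diff(params_run1, params_run2))
# -> {'optimizer.lr': (0.001, 0.01)}
```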
Quick announcement: please create a new post in the "Feature requests" channel when you are contributing a new idea - it helps a lot in keeping things organized. Thanks! :)

August 12, 2019 at 10:45am
> I lost connection for some minutes in the middle of the experiment. Is there any solution to prevent that?
This issue should occur much less often since version 0.3.5 of neptune-client. Please upgrade your library and let us know if it helped. :)

August 26, 2019 at 9:39am
It would be great if the selection of experiments remained after going to the next page in the workspace (for example, this would be handy for comparing experiments).
Hi, thanks for that. By "next page in the workspace" do you mean going from experiments to a comparison or to a single experiment, and then back to all experiments? Is that correct?
Hey Kamil, thanks for your quick response! What I mean is: when I select an experiment and go to the next page (because I have more than 50 experiments), the selected experiment is no longer considered selected. It would be great if it stayed selected for comparing experiments - for example, if I have 70 experiments and I want to compare experiments 1 and 70 (which are on different pages).

August 27, 2019 at 7:20am
Hey, now I can see your point, thanks! I think the best way to achieve this is by filtering experiments. Is there some filtering option missing? We wanted to make sure that users can filter experiments so that they fit on a single page (max 50 experiments). Cheers

August 28, 2019 at 8:37am
Yes, that is indeed a workaround, thanks! The tag and filtering options are a great feature! Still, I think it would be handy if you could compare experiments that are not on a single page. But with the tag-and-filter workaround it takes only a small effort to make it work. Thanks!
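For reference, the tag half of that workaround in code - a minimal sketch assuming the legacy (2019-era) neptune-client API; the project name and tag are made up:

```python
import neptune  # legacy neptune-client (0.x) API assumed

neptune.init(project_qualified_name="my_workspace/sandbox")  # hypothetical project

# Give every run you want to compare the same tag, then filter the
# dashboard by that tag so all of them fit on a single page.
neptune.create_experiment(name="run-70", params={"lr": 0.01})
neptune.append_tag("compare-lr")
neptune.log_metric("loss", 0.42)
neptune.stop()
```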
Hi - happy to hear that it helps. I'll take a closer look at how we can introduce comparing across experiment pages.

September 21, 2019 at 8:42pm
Am I missing something, or is there currently no way to filter your experiments based on parameter values (like dataset hash or learning rate) or system columns? I am missing that...

September 23, 2019 at 12:42pm
Hi, thanks for this suggestion.
Right now you can solve a similar problem by sorting the column of interest; that way you can visually inspect the results. Pick your column in the "manage columns" panel (right side), then sort by this value. Check the screenshot below.
[screenshot: experiments table sorted by the chosen column]
Sorting is done by clicking the arrows next to the column name.
Nevertheless, I noted your feature request. It is very accurate feedback - thanks for it!
Yeah, but this is a pain in general. You then need to manually select individual experiments to compare them. Not good enough.
I agree - we will improve.
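Until filtering by parameter values lands in the UI, one stopgap (a sketch assuming the legacy neptune-client API) is to pull the experiment table into pandas and filter it locally. The `parameter_lr` column name is an assumption about how an `lr` parameter shows up in the leaderboard:

```python
import neptune  # legacy neptune-client (0.x) API assumed

project = neptune.init(project_qualified_name="my_workspace/sandbox")  # hypothetical
df = project.get_leaderboard()  # experiments table as a pandas DataFrame

# Filter client-side by a parameter value; leaderboard cells may come
# back as strings, hence the string comparison.
runs = df[df["parameter_lr"] == "0.01"]
print(runs[["id", "parameter_lr"]])
```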

October 10, 2019 at 12:04am
Writing a QuickStart Guide on JupyterLab and would love to include NeptuneML. Will post questions here. We wouldn’t go super in-depth, but we would want to get any language and technical details right.
Would love suggestions, too.