TensorBoard with several writers -> Neptune logs just from one writer?
Hi, I have several TensorBoard writers in one process, each created with tf.summary.create_file_writer(), but for simplicity assume I have just two: train and validation writers. Now, I'm logging scalars with the same names to these two writers, e.g. tf.summary.scalar('loss',…
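The setup described above can be sketched as follows — two file writers logging a scalar under the same name at each step (the log directory names are assumptions, not from the original post):

```python
import tensorflow as tf

# Two writers, as in the question: one for train, one for validation.
# Directory names "logs/train" and "logs/validation" are illustrative.
train_writer = tf.summary.create_file_writer("logs/train")
val_writer = tf.summary.create_file_writer("logs/validation")

for step in range(3):
    # Same scalar name ('loss') written to both writers at the same step.
    with train_writer.as_default():
        tf.summary.scalar("loss", 0.5 / (step + 1), step=step)
    with val_writer.as_default():
        tf.summary.scalar("loss", 0.6 / (step + 1), step=step)

train_writer.flush()
val_writer.flush()
```

TensorBoard overlays the two 'loss' curves because the series share a tag but come from different log directories; the question is how an integration that tails these event files decides which writer(s) to forward.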
503 Service Unavailable: Service unavailable
Hi! I regularly get "Experiencing connection interruptions. Reestablishing communication with Neptune." in the Python client. I checked the error, and it seems the issue is on your side: "503 Service Unavailable: Service unavailable"
Internal Server error
Hello, we have had a few server errors after an experiment has been running for a few hours. It happens at quite random steps, and it is quite annoying, especially when it happens hours or days after an experiment starts. Any idea what could have caused this or how to…
Images not visualized in image logs after reset.
Hello, I just came across a potential bug when loading images in an experiment's image log. If the log already exists and I call the reset_log method from the API to clean up its content before sending new data, any image sent after the reset is not visualized, returning a 404…
Neptune always fails unexpectedly while running
Neptune always fails unexpectedly while my code is running. The code continues to run with the following warning, but Neptune fails to record anything. It always fails a few hours after the run starts: 'Failed to send channel value: Received batch errors sending…
Parameters are not sorted properly
Parameters in "manage columns" and "group by" should be ordered by name A->Z, but they are only sorted to some extent. I have 120 parameters there, and what I see is an A->Z A->Z order (as if the parameters were coming from two sources, each sorted separately and then concatenated).
Experiments take 5 times more space than they should
I tested this by creating a new experiment with 100000 data points in a single metric. Assuming an int64 step and a float value, this should take 100000 * (8 + 8) B = 1.6 MB, but according to the Size column it takes 7.6 MB. And even when we consider that to every datapoint you add a float…
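The back-of-the-envelope estimate above can be checked in a couple of lines (the 8-byte int64 step and 8-byte float value per data point are the poster's assumptions):

```python
points = 100_000
bytes_per_point = 8 + 8            # int64 step + float64 value, as assumed above
expected_bytes = points * bytes_per_point

print(expected_bytes / 1e6, "MB")  # expected raw size: 1.6 MB
print(7.6e6 / expected_bytes)      # reported 7.6 MB is ~4.75x the raw estimate
```

So even before any per-point metadata, the reported size implies roughly 60 extra bytes per data point, which is what the question is asking about.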