Snorkel now supports the latest PyTorch release, 1.4.0
We've upgraded our dependencies to support torch>=1.2.0, which includes the latest stable version, 1.4.0. These changes are available in our latest release, snorkel==0.9.5. We're excited to see what you build with Snorkel and the latest deep learning libraries!
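Since the announcement pins a minimum version rather than an exact one, a quick way to check compatibility is to compare dotted version strings numerically. This is a minimal sketch, not a Snorkel API; the helper name is illustrative.

```python
# Minimal sketch (not a Snorkel API): check that an installed torch version
# satisfies the new requirement torch>=1.2.0 before upgrading snorkel.
def torch_supported(version, minimum="1.2.0"):
    """Compare dotted version strings numerically, so e.g. "1.10.0" > "1.2.0"."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(torch_supported("1.4.0"))  # True
print(torch_supported("1.1.0"))  # False
```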
questions on snorkel-metal example
I am trying to use snorkel-metal for a hierarchical labeling problem (https://github.com/HazyResearch/metal/blob/master/tutorials/Multitask.ipynb). I have a few questions about the parameters that need to be passed: 1. There are three tasks and six labels overall. The label matrices…
How can we observe the last entry in Σ? (question about the June 2019 workshop slides)
https://www.dropbox.com/sh/ipxmm6twu4p2qo1/AACztdxm-GTWxOkA7PfX2ooaa/Day%201?dl=0&preview=04_Theory_Apps.pdf&subfolder_nav_tracking=1 In the slides linked here, on the slide "Solution Sketch: Using the covariance", it is stated that we can observe the last cell (bottom right)…
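Assuming the slide refers to the generative-model covariance from the matrix-completion formulation (Ratner et al., AAAI 2019), a sketch of its block structure is:

```latex
\Sigma \;=\; \operatorname{Cov}\!\begin{bmatrix} \psi(\boldsymbol{\lambda}) \\ \psi(Y) \end{bmatrix}
\;=\;
\begin{bmatrix} \Sigma_O & \Sigma_{OH} \\ \Sigma_{OH}^\top & \Sigma_H \end{bmatrix}
```

Here $\Sigma_O$ is estimated directly from the label matrix, $\Sigma_{OH}$ is the unobserved part, and the bottom-right block $\Sigma_H = \operatorname{Var}[\psi(Y)]$ depends only on the class balance $P(Y)$, which is assumed known or estimated separately. That is why the last (bottom-right) cell can be treated as observed even though $Y$ itself is not.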
noise aware loss function for categorical cross entropy
For the discriminative model, the Snorkel paper suggests using a noise-aware loss function. I was going through this tutorial, https://www.snorkel.org/use-cases/01-spam-tutorial, which suggests working directly with sklearn's classifiers. 1. I guess it is okay to use the default loss…
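For context on the question: the noise-aware loss in the Snorkel paper is the expected loss under the label model's probabilistic labels, whereas sklearn's default losses expect hard targets. A minimal sketch of the idea (plain Python, not the paper's exact implementation):

```python
import math

def noise_aware_ce(probs_model, probs_label, eps=1e-12):
    """Noise-aware cross-entropy (a sketch of the idea in the Snorkel paper):
    the expected log loss under the LabelModel's *probabilistic* labels,
    instead of hard 0/1 targets as in sklearn's default losses.

    probs_model: list of per-example predicted class distributions
    probs_label: list of per-example probabilistic labels from the LabelModel
    """
    total = 0.0
    for p_row, q_row in zip(probs_model, probs_label):
        total -= sum(q * math.log(p + eps) for p, q in zip(p_row, q_row))
    return total / len(probs_model)

# Hard labels are the special case where probs_label rows are one-hot.
probs_label = [[0.9, 0.1], [0.2, 0.8]]
probs_model = [[0.8, 0.2], [0.3, 0.7]]
print(noise_aware_ce(probs_model, probs_label))
```

Frameworks with soft-target losses (e.g. cross-entropy on probability targets in PyTorch) support this directly; with sklearn, a common workaround is to round the probabilistic labels to hard labels, at some loss of information.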
Snorkel source code that uses dependency graphs?
The label model in the current Snorkel source code does not seem to use the created c_tree when calculating the final labels and associated probabilities: https://github.com/snorkel-team/snorkel/blob/master/snorkel/labeling/model/label_model.py Would it be possible for you to release…
snorkel labeling pipeline steps in production
Snorkel does weakly supervised learning. Suppose I have a label model and a discriminative model trained on some data. I am wondering what a Snorkel-based labeling pipeline that runs daily would look like. Are there any downsides to calling fit on the fresh batch of data that comes…
LabelModel conflicts with labeling function outputs
I have a binary classification problem with ~10 labeling functions. I am seeing records where nine LFs abstain (-1) and one fires negative (0) (not the same LF every time), but once I fit and apply LabelModel, the predicted label is positive (1). That is, I'm getting a positive label…
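A useful sanity check for rows like this is a plain majority-vote baseline (the same idea as Snorkel's MajorityLabelVoter, sketched minimally here). If the fitted LabelModel flips the single non-abstain vote, the learned per-LF accuracy weights and the class-balance prior are the usual causes, so it is worth inspecting `label_model.get_weights()` and the `class_balance` passed to `fit`.

```python
# Minimal majority-vote baseline for sanity-checking LabelModel disagreements.
ABSTAIN = -1

def majority_vote(row, cardinality=2, tie_break=ABSTAIN):
    counts = [0] * cardinality
    for v in row:
        if v != ABSTAIN:
            counts[v] += 1
    top = max(counts)
    # Abstain on no votes or an exact tie.
    if top == 0 or counts.count(top) > 1:
        return tie_break
    return counts.index(top)

row = [ABSTAIN] * 9 + [0]  # nine abstains, one negative vote
print(majority_vote(row))  # 0 -- the single non-abstain vote wins
```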