Prompting warnings in widget response when the inference doesn't work #96
I think this was exactly the intent of @adrinjalali, because there is no guarantee that predictions are correct if versions don't match.
Do predictions change depending on the sklearn version? 😅 Does the implementation change?
Without knowing all the details, I think for the vast majority of cases, predictions won't change. However, there is a small chance that they do change, possibly leading to completely nonsensical output. Therefore, it's better to be safe and not return the prediction if correctness cannot be guaranteed.
I think the solution is already there. The only thing which is not implemented is that the widget is currently not showing the returned warnings, but the api-inference is returning them. So the fix is on the widget side. This is the corresponding issue on the widget side: huggingface/huggingface.js#318 Regarding changing predictions: sometimes the old model wouldn't even load on the new sklearn, and the other way around, and our tests also include such a case. The
@adrinjalali But if the predictions are returned, why would we still need it?
we return a non-200 return code, with the warnings attached, and the user can decide if they want to use it or not. Some warnings can be ignored.
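That flow can be sketched from the client's point of view. This is a hypothetical sketch: the `outputs`/`warnings` field names and the `handle_response` helper are illustrative assumptions, not the actual api-inference response schema.

```python
# Hypothetical client-side helper: a non-200 status means the server
# could not guarantee correctness, so predictions are withheld and the
# caller decides what to do with the attached warnings.
def handle_response(status_code, payload):
    """Return (predictions, warnings); predictions is None when the
    caller should not blindly trust the output."""
    warnings = payload.get("warnings", [])
    if status_code == 200:
        return payload.get("outputs"), warnings
    # Non-200: the payload may still carry warnings explaining why,
    # and the user can opt in explicitly after reviewing them.
    return None, warnings
```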
cc @mishig25 on this discussion on the widget side
@adrinjalali @mishig25 Can we make that response parseable by the widget and print the warnings below it? It would help a lot of people who want to debug their widgets (I get a lot of messages, so I'd say it's a common request).
Have we gotten lots of messages related to warnings? Usually messages with questions are more related to errors, which we do show in the widget most of the time.
I'm not sure what you mean by parseable here, @merveenoyan. They are JSON, so they can easily be parsed. @osanseviero, on the sklearn side we raise quite a few warnings, and it's quite useful for users to see them in the widgets.
I can't click "JSON output" here; is it possible we put the warnings somewhere they're not supposed to be? That's what I mean by parseable. @adrinjalali
Ah I see, if you call the API directly (using
@adrinjalali yes, I know that; I'm talking about the widget itself, for this reason. I feel like (not sure) we're putting the warning in the wrong place, so it doesn't show up there.
No we're not, you might want to re-read this one: #96 (comment) 😁
@adrinjalali I can fix it after the text widget PR is done (I am a bit stuck with 503 errors).
cc @mishig25 @beurkinger on internal discussion https://huggingface.slack.com/archives/C0314PXQC3W/p1664296775532499 Currently, most of the
And upon closer lookup of the Network tab, I see
This is not a great UX, as it's confusing for users. Even if we show the text of the warning in the widget, this will not be super useful for users (as opposed to model uploaders). Having such a strict requirement and breaking the widget for anyone without the same pinned version will lead to the widget not working for most users in the long run, which is undesirable. I don't think we can expect all people to use the same pinned version. Should we consider showing the predictions even if there is a mismatch, and exposing the warnings below the widget?
Related issue shared by @BenjaminBossan huggingface/huggingface.js#318 |
Just to note, this part of the error message,
These are some good points. I wonder if we should treat warnings caused by sklearn version mismatch differently, as they are false positives most of the time. Not sure how best to implement this, as we would still want to display the information that something might be wrong, but these warnings are certainly in a different category from, say, warnings about division by zero.
Right now, the response from https://huggingface.co/julien-c/wine-quality is:
The question is: Option 1 or Option 2?
I was wondering if we can change the logic here: api-inference-community/docker_images/sklearn/app/pipelines/tabular_classification.py, lines 70 to 80 in 778fe84
Right now, if there are any warnings, we just treat them as an error. Perhaps we can make an exception for warnings caused by a version mismatch, return a successful response, and add an extra field with the information that there was a version mismatch.
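A minimal sketch of that exception, assuming version-mismatch warnings can be recognised by their message text. The `"version"` substring check and the `predict_with_warnings` name are illustrative assumptions, not the actual api-inference-community code:

```python
import warnings

def predict_with_warnings(pipeline, rows):
    # Capture any warnings the pipeline raises during prediction.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        outputs = pipeline.predict(rows)
    messages = [str(w.message) for w in caught]
    if messages and not all("version" in m.lower() for m in messages):
        # Unknown warnings: keep the current strict behaviour and fail.
        raise ValueError({"error": messages})
    # Only version-mismatch warnings (or none): succeed, but attach
    # them in an extra field so the widget can still surface them.
    return {"outputs": list(outputs), "warnings": messages}
```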
I think giving a successful response, adding the warnings in an extra field, and then having the widget show the successful table, but with a warning below it, makes a lot of sense to me.
I second that. Treating a response with
That's not true. All
This would lead to users getting wrong results and relying on them, which would be very bad. The output is not a "successful" output in this case.
Another question I had is: right now, the response from https://huggingface.co/julien-c/wine-quality is:
Why are there no warnings in the response at the moment?
I personally don't have a strong opinion on either of the two options:
IIRC @Narsil was very much in favor of the second option. |
That's something which is worth fixing. |
@mishig25 Personally I would go for option 1. If we don't, most of the tabular classification widgets users can try on the website will be broken, which is kind of ridiculous. There's no point in punishing people who just want to see how the widget works / what kind of result they can expect. I don't know how the server responses are shaped, but it would be nice to get the type of the error, so we can give a more useful message (or return a better error message on the server side).
From my side, I am happy to implement the widget for either Option 1 or Option 2. However, there need to be updates for both options:
I will submit a PR once one of the options is decided and the necessary api-inference changes are made 👍
Continuing my previous message: simply dumping the response as JSON when we get an error is not very elegant or useful. I think it would be better to have a concise and to-the-point error/warning message, and give the user the opportunity to see the whole response using the "JSON output" button (which is currently deactivated when getting an error). |
Okay, there seems to be consensus around option 1. I will work on that. To be sure, we expect the response to be something like this:
? |
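Based on the discussion above (predictions plus an extra warnings field), a plausible shape for such a response might be the following; the field names are assumptions, not the confirmed schema:

```json
{
  "outputs": ["..."],
  "warnings": [
    "Trying to unpickle estimator from a different sklearn version"
  ]
}
```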
@BenjaminBossan yes 👍 |
This is very irritating (see the issue I linked above). Below you will see a prettier version of the errors + warnings. I say we iterate over each and raise them separately. On top of this, during
|
The warnings should now be included in the response; @BenjaminBossan fixed it in #114. We can now show them on the widget side.
@merveenoyan is this done? I think the warnings are returned, but not displayed on the widget side, and that still needs to be done? |
friendly ping @merveenoyan Reopening this issue in the meantime |
Let me know if I can help.
If I'm not mistaken, @mishig25 can add them now to the widget. |
Hello,
Any warning gets appended in the scikit-learn pipelines, even when inference succeeds. I suggest we check whether the prediction and the response look good and, only if not, return the warnings; otherwise they get prepended on top of the response and break the widget. (What I observed was a version mismatch, which I know doesn't happen in production, but I don't think a version mismatch, or any warning-level message rather than an error-level one, should concern the user if the predictions are returned correctly.)
(This is something I observed for the text classification pipeline, because I repurposed code from the tabular pipelines; let me know if this isn't the case.) Also, feel free to ignore this issue if it doesn't make sense. I think the code below should be refactored.
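The suggested check, surfacing captured warnings only alongside a failure instead of prepending them to a successful response, could be sketched like this. The `run_pipeline` helper and its return shape are illustrative assumptions, not the existing pipeline code:

```python
import warnings

def run_pipeline(pipeline, inputs):
    # Capture warnings without letting them leak into the response body.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        try:
            result = pipeline(inputs)
        except Exception as exc:
            # Inference failed: the warnings are likely relevant context,
            # so return them together with the error.
            return {"error": str(exc),
                    "warnings": [str(w.message) for w in caught]}
    # Inference succeeded: return the result, keeping warnings in a
    # side field rather than on top of the response.
    return {"result": result,
            "warnings": [str(w.message) for w in caught]}
```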
WDYT @adrinjalali @BenjaminBossan