Hello!
I've found your paper very helpful for my studies, and I'd like to run some of these benchmarks myself. Out of the box, the only script provided seems to expect plain-text principle files that aren't included in the repo (rather than the provided JSON), and it depends on the OpenAI library. It would be great to have resources for running these benchmarks against a variety of models (such as the Phi models), for example through a model-agnostic library like litellm or something similar.
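For illustration, here is a minimal sketch of the kind of thing I have in mind. The `principles.json` file name and its field layout are just placeholders (I don't know the repo's actual structure), and the `ollama/phi3` model string assumes a local Phi model served via Ollama, but any litellm-supported model string would work:

```python
import json
from litellm import completion

# Hypothetical: load the provided JSON principles (file name and fields assumed).
with open("principles.json") as f:
    principles = json.load(f)

for item in principles:
    prompt = item["prompt"]  # assumed field name

    # litellm exposes an OpenAI-style completion() call that routes to many providers,
    # so the same loop could target GPT models, Phi models, or anything else.
    response = completion(
        model="ollama/phi3",  # any litellm model string, e.g. a local Phi model via Ollama
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

Something along these lines would make it much easier to reproduce the benchmarks across different models.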