
[Question]: Is it possible to share the code for the evaluation? #169

Open
ZhichaoWang970201 opened this issue Jul 10, 2024 · 3 comments

@ZhichaoWang970201

Describe the issue

I am wondering if you can share the evaluation code behind the claim that "LongLLMLingua mitigates the 'lost in the middle' issue in LLMs, enhancing long-context information processing."

@ZhichaoWang970201 ZhichaoWang970201 added the question Further information is requested label Jul 10, 2024
@iofu728 (Contributor) commented Jul 15, 2024

Hi @ZhichaoWang970201, thanks for your support.

You can follow the instructions for longchat-13b-16k to run the NaturalQuestions benchmark.

@iofu728 iofu728 self-assigned this Jul 15, 2024
@ZhichaoWang970201 (Author)

[Screenshot 2024-07-19 at 8:28:18 PM]
Thank you for sharing.

In addition, the data used for generating answers is already very short. Is this what the author used for prompt compression?
If not, will the author add an instruction asking the LLM to copy exact text from the input prompt? The evaluation script requires an exact match.
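For context on what "exact matching" typically means in this setting: the lost-in-the-middle style QA evaluations generally apply SQuAD-style answer normalization (lowercasing, stripping punctuation and articles) before checking whether a gold answer appears in the model output, rather than requiring a verbatim string match. A minimal sketch of that kind of check (not the repository's actual evaluation code, whose exact details may differ):

```python
import re
import string


def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def best_subspan_em(prediction: str, gold_answers: list) -> float:
    """Return 1.0 if any normalized gold answer occurs as a subspan
    of the normalized prediction, else 0.0."""
    pred = normalize_answer(prediction)
    return float(any(normalize_answer(g) in pred for g in gold_answers))
```

Under this metric, a verbose answer like "The answer is Paris." still scores 1.0 against the gold answer "Paris", so the model does not need to copy the input text verbatim, only to include the normalized answer span.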

Thank you.

@ZhichaoWang970201 (Author)

The problem was resolved after reading more of the "lost in the middle" paper.
