# OntoTune

In this work, we propose an ontology-driven self-training framework called OntoTune, which aims to align LLMs with an ontology through in-context learning, enabling them to generate responses guided by the ontology.

## 🔔 News

- **2025-01** OntoTune is accepted by WWW 2025!

## 🚀 How to start

```shell
git clone https://github.com/zjukg/OntoTune.git
```

The fine-tuning code is built on the open-source repo LLaMA-Factory.

### Dependencies

```shell
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
```

### Data Preparation

  1. The supervised instruction-tuning data generated by LLaMA3 8B for the model itself is available at the link.
  2. Put the downloaded OntoTune_sft.json file in the LLaMA-Factory/data/ directory.
  3. The evaluation datasets for hypernym discovery and medical question answering are in LLaMA-Factory/data/evaluation_HD and LLaMA-Factory/data/evaluation_QA, respectively.

### Finetune LLaMA3

You need to add the model_name_or_path parameter to the yaml file.

```shell
cd LLaMA-Factory
llamafactory-cli train OntoTune_sft.yaml
```
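As a sketch of the step above: in a standard LLaMA-Factory training config, the entry is a top-level key. The checkpoint path below is only an example; point it at your own local or Hugging Face LLaMA3 8B weights.

```yaml
# OntoTune_sft.yaml (excerpt) -- add the base model before launching training.
# The path below is an example placeholder, not part of the shipped config.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
```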

## 🤝 Cite

Please consider citing this paper if you find our work useful.


```bibtex
@inproceedings{OntoTune,
  author       = {Zhiqiang Liu and
                  Chengtao Gan and
                  Junjie Wang and
                  Yichi Zhang and
                  Zhongpu Bo and
                  Mengshu Sun and
                  Huajun Chen and
                  Wen Zhang},
  title        = {OntoTune: Ontology-Driven Self-training for Aligning Large Language Models},
  booktitle    = {{WWW}},
  year         = {2025}
}
```