---
language:
- lzh
tags:
- classical chinese
- literary chinese
- ancient chinese
- token-classification
- pos
- dependency-parsing
base_model: KoichiYasuoka/Xunzi-Qwen2-1.5B-upos
datasets:
- universal_dependencies
license: apache-2.0
pipeline_tag: token-classification
widget:
- text: 子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎
---

# Xunzi-Qwen2-1.5B-ud-causal
## Model Description
This is a Qwen2 model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from Xunzi-Qwen2-1.5B-upos and UD_Classical_Chinese-Kyoto.
## How to Use
```py
from transformers import pipeline

# load the custom "universal-dependencies" pipeline bundled with this model
# (trust_remote_code=True is required to run the model's pipeline code)
nlp = pipeline("universal-dependencies", "KoichiYasuoka/Xunzi-Qwen2-1.5B-ud-causal", trust_remote_code=True)
print(nlp("不入虎穴不得虎子"))
```
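
Assuming the custom pipeline returns its analysis as CoNLL-U formatted text (the exact return type is determined by the model's remote pipeline code), the result can be post-processed with the third-party `conllu` package. A minimal sketch under that assumption:

```py
from transformers import pipeline
from conllu import parse  # pip install conllu

nlp = pipeline("universal-dependencies", "KoichiYasuoka/Xunzi-Qwen2-1.5B-ud-causal", trust_remote_code=True)
result = nlp("不入虎穴不得虎子")

# iterate over parsed sentences and print form, UPOS tag, head index, and relation
for sentence in parse(result):
    for token in sentence:
        print(token["form"], token["upos"], token["head"], token["deprel"])
```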