---
language:
- "ja"
tags:
- "japanese"
- "pos"
- "dependency-parsing"
base_model: abeja/gpt2-large-japanese
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
widget:
- text: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
---

# abeja-gpt2-large-japanese-ud-causal

## Model Description

This is a GPT-2 model for POS-tagging and dependency-parsing, derived from [gpt2-large-japanese](https://huggingface.co/abeja/gpt2-large-japanese) and refined for [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW).

## How to Use

```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/abeja-gpt2-large-japanese-ud-causal",trust_remote_code=True)
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
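The `universal-dependencies` task above is implemented by the model's custom pipeline code (hence `trust_remote_code=True`). Assuming the pipeline returns CoNLL-U-formatted text, as UD parsers commonly do (an assumption — check the model repository to confirm), the output can be post-processed with plain Python. The sample string below is illustrative, not actual model output:

```python
# Minimal sketch: parse CoNLL-U text (10 tab-separated columns per token line)
# into (form, upos, head, deprel) tuples.
def parse_conllu(text):
    rows = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and "# text = ..." comments
        cols = line.split("\t")
        if len(cols) != 10:
            continue  # skip malformed lines
        # Columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC
        form, upos, head, deprel = cols[1], cols[3], cols[6], cols[7]
        rows.append((form, upos, int(head), deprel))
    return rows

# Hypothetical CoNLL-U output for a short fragment of the widget sentence
sample = (
    "# text = 挿し絵が用いられている\n"
    "1\t挿し絵\t挿し絵\tNOUN\t_\t_\t3\tnsubj\t_\t_\n"
    "2\tが\tが\tADP\t_\t_\t1\tcase\t_\t_\n"
    "3\t用いられている\t用いる\tVERB\t_\t_\t0\troot\t_\t_\n"
)
print(parse_conllu(sample))
```

`head` is the 1-based index of each token's syntactic head (0 for the root), so the tuples are enough to rebuild the dependency tree.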