---
license: mit
base_model: meta-llama/Meta-Llama-3-8B
---
## [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://www.arxiv.org/abs/2408.03350)

File-tuned context model based on [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://www.arxiv.org/abs/2408.03350).

- Base language model: Llama 3 8B
- Data: [ntp-mathlib-instruct-context](https://huggingface.co/datasets/l3lab/ntp-mathlib-instruct-context)

The model is fine-tuned specifically for Lean 4 tactic prediction: given a proof state and optional preceding file context, it generates the next tactic.

#### Example input
```
/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by

[/CTX]
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]
```

#### Example output
```
rw [Nat.Coprime] at h
[/TAC]
```
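
For reference, `Nat.Coprime m n` is definitionally `Nat.gcd m n = 1`, so the predicted tactic rewrites the hypothesis into the goal's form. A sketch of how the full proof might then be closed (the final `exact h` is an illustration, not part of the model output):

```lean
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by
  rw [Nat.Coprime] at h  -- h : Nat.gcd m n = 1
  exact h
```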
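
#### Example usage

A minimal generation sketch using Hugging Face `transformers`. The `model_id` below is a placeholder (this card does not state the repository id; substitute this model's Hub id), and the decoding settings are illustrative assumptions rather than recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute this repository's Hub id.
model_id = "<this-model-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt format exactly as in "Example input" above: instruction comment,
# [CTX]...[/CTX] file context, [STATE]...[/STATE] proof state, open [TAC] tag.
prompt = """/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by

[/CTX]
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Keep only the newly generated tokens and stop at the closing [/TAC] tag.
completion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
print(completion.split("[/TAC]")[0].strip())  # e.g. rw [Nat.Coprime] at h
```

In practice, sampling several candidate tactics and checking them against Lean (as in standard proof-search loops) is common; greedy decoding above is only for illustration.
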
#### Citation

Please cite:
```
@misc{hu2024minictx,
  author = {Jiewen Hu and Thomas Zhu and Sean Welleck},
  title = {miniCTX: Neural Theorem Proving with (Long-)Contexts},
  year = {2024},
  eprint = {2408.03350},
  archivePrefix = {arXiv},
}
```