suphanatwong committed on
Commit afc4f1f
1 Parent(s): abd081a

Update README.md

Files changed (1)
  1. README.md +72 -2
README.md CHANGED
@@ -1,4 +1,3 @@
- content = ""
  ---
  license: apache-2.0
  language:
@@ -9,4 +8,75 @@ metrics:
  datasets:
  - AIAT/The_Scamper-train
  pipeline_tag: table-question-answering
- ---

---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This card describes a Thai table question answering model, fine-tuned from OpenThaiGPT-1.0.0 70B, that converts natural-language questions into SQL queries. It was generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** The Scamper
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Transformer
- **Language(s) (NLP):** Thai, English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [OpenThaiGPT-1.0.0 70B](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-70b-chat)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The Tabular Question Answering Large Language Model is based on OpenThaiGPT and fine-tuned to convert natural-language questions into SQL queries. It learns to map the nuances of the Thai language onto SQL structures, enabling efficient retrieval of information from databases.
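
For illustration only (a hypothetical table and question, not taken from the training data): given a table `sales(product, amount, sale_date)`, a Thai question meaning "Which product had the highest total sales in 2023?" should be mapped to a query along the lines of `SELECT product FROM sales WHERE YEAR(sale_date) = 2023 GROUP BY product ORDER BY SUM(amount) DESC LIMIT 1`.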

The model and tokenizer can be loaded with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned text-to-SQL model and its tokenizer from the Hub
model2_path = "AIAT/The_Scamper-opt70bqt"
tokenizer = AutoTokenizer.from_pretrained(model2_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model2_path, device_map="auto")
```

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

## How to Get Started with the Model

Use the code below to get started with the model.
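
The following is a minimal sketch, not an official example: it assumes a plain causal-LM prompt that supplies a table schema and a question and ends with `SQL:`; the exact prompt template used during fine-tuning is not documented in this card, so adjust it to your data. Running a 70B model also requires substantial GPU memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model2_path = "AIAT/The_Scamper-opt70bqt"
tokenizer = AutoTokenizer.from_pretrained(model2_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model2_path, device_map="auto")

# Hypothetical prompt: a table schema plus a question. The template below is an
# assumption made for illustration, not the documented fine-tuning format.
prompt = (
    "Table: sales(product TEXT, amount INT, sale_date DATE)\n"
    "Question: What is the total sales amount of each product in 2023?\n"
    "SQL:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Keep only the newly generated tokens, i.e. the predicted SQL query.
sql = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(sql)
```

Generated SQL should be reviewed before it is executed against a live database.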

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The card metadata lists AIAT/The_Scamper-train as the training dataset; further details are [More Information Needed].

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->