taher30 committed on
Commit
0b325a7
1 Parent(s): 1a007cb

Update README.md

Files changed (1): README.md +17 -0
README.md CHANGED
@@ -35,3 +35,20 @@ The original dataset is sourced from the Hugging Face Hub, specifically the [big
Using the flexibility and efficiency of DuckDB, I processed the entire dataset without the need for heavy hardware. DuckDB's ability to handle large datasets efficiently allowed me to concatenate the markdown, code, and output for each notebook path into a single string, simulating the structure of a Jupyter notebook.

The transformation was performed using the following DuckDB query:

```python
import duckdb

# Connect to a new DuckDB database
new_db = duckdb.connect('merged_notebooks.db')

# Query to concatenate markdown, code, and output
query = """
SELECT path,
       STRING_AGG(CONCAT('###Markdown\n', markdown, '\n###Code\n', code, '\n###Output\n', output), '\n') AS concatenated_notebook
FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
GROUP BY path
"""

# Execute the query and create a new table
new_db.execute(f"CREATE TABLE concatenated_notebooks AS {query}")
```