WIP: Prepare nbdev notebooks
binkjakub committed Jun 12, 2024
1 parent dea9a2f commit afb4e4f
Showing 10 changed files with 192 additions and 46 deletions.
8 changes: 8 additions & 0 deletions nbs/Data/01_Dataset_Description.ipynb
@@ -19,6 +19,14 @@
"sns.set_theme(\"notebook\")"
]
},
{
"cell_type": "markdown",
"id": "8f5ffccf",
"metadata": {},
"source": [
"# Raw & Instruct Datasets Analyses"
]
},
{
"cell_type": "code",
"execution_count": null,
7 changes: 7 additions & 0 deletions nbs/Data/02_Analyse_sft.ipynb
@@ -24,6 +24,13 @@
"warnings.filterwarnings('ignore', message=\"To copy construct from a tensor, it is recommended to use\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SFT results inspection"
]
},
{
"cell_type": "markdown",
"metadata": {},
7 changes: 7 additions & 0 deletions nbs/Data/03_Graph_Dataset_Description.ipynb
@@ -16,6 +16,13 @@
"sns.set_theme(\"notebook\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Graph dataset analysis"
]
},
{
"cell_type": "code",
"execution_count": null,
8 changes: 8 additions & 0 deletions nbs/Data/04_Graph_Analysis.ipynb
@@ -23,6 +23,14 @@
"sns.set_theme(\"notebook\")"
]
},
{
"cell_type": "markdown",
"id": "dcd46ebf",
"metadata": {},
"source": [
"# Local subgraphs analysis"
]
},
{
"cell_type": "code",
"execution_count": null,
nbs/Dataset Cards/01_Dataset_Description_Raw.ipynb
@@ -35,22 +35,6 @@
"raw_ds = pl.scan_parquet(source=\"../../data/datasets/pl/raw/*\")"
]
},
{
"cell_type": "markdown",
"id": "bac42f58ea3c3d96",
"metadata": {},
"source": [
"---\n",
"language: {{language}}\n",
"multilinguality: {{multilinguality}}\n",
"language_creators: {{language_creators}}\n",
"size_categories: {{size_categories}}\n",
"source_datasets: {{source_datasets}}\n",
"pretty_name: {{pretty_name}}\n",
"tags: {{tags}}\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "350cb2d131ba5aeb",
@@ -86,10 +70,10 @@
"\n",
"## Dataset Description\n",
"\n",
"- **Homepage: TBA**\n",
"- **Repository: [github](https://github.com/pwr-ai/JuDDGES)**\n",
"- **Paper: TBA**\n",
"- **Point of Contact: [email protected]; [email protected]; [email protected]**\n",
"* **Homepage: TBA**\n",
"* **Repository: [github](https://github.com/pwr-ai/JuDDGES)**\n",
"* **Paper: TBA**\n",
"* **Point of Contact: [email protected]; [email protected]; [email protected]**\n",
"\n",
"### Dataset Summary\n",
"\n",
nbs/Dataset Cards/02_Dataset_Description_Instruct.ipynb
@@ -38,23 +38,6 @@
"ds = load_dataset(\"JuDDGES/pl-court-instruct\") "
]
},
{
"cell_type": "markdown",
"id": "bac42f58ea3c3d96",
"metadata": {},
"source": [
"---\n",
"language: {{language}}\n",
"multilinguality: {{multilinguality}}\n",
"language_creators: {{language_creators}}\n",
"size_categories: {{size_categories}}\n",
"source_datasets: {{source_datasets}}\n",
"pretty_name: {{pretty_name}}\n",
"tags: {{tags}}\n",
"task_categories: {{task_categories}}\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "350cb2d131ba5aeb",
@@ -90,19 +73,19 @@
"\n",
"## Dataset Description\n",
"\n",
"- **Homepage: TBA**\n",
"- **Repository: [github](https://github.com/pwr-ai/JuDDGES)**\n",
"- **Paper: TBA**\n",
"- **Point of Contact: [email protected]; [email protected]; [email protected]**\n",
"* **Homepage: TBA**\n",
"* **Repository: [github](https://github.com/pwr-ai/JuDDGES)**\n",
"* **Paper: TBA**\n",
"* **Point of Contact: [email protected]; [email protected]; [email protected]**\n",
"\n",
"### Dataset Summary\n",
"\n",
"The dataset consists of Polish Court judgements available at https://orzeczenia.ms.gov.pl/, containing full content of the judgements along with metadata sourced from official API and extracted from the judgement contents. This dataset is designed for fine-tuning large language models (LLMs) for information extraction tasks and is formatted as instructions. For raw dataset see [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw). For graph dataset see [`JuDDGES/pl-court-graph`](https://huggingface.co/datasets/JuDDGES/pl-court-graph).\n",
"\n",
"### Supported Tasks and Leaderboards\n",
"\n",
"- `information-extraction`: The dataset can be used for information extraction tasks.\n",
"- `text-generation`: The dataset can be used for text generation tasks, as the dataset is formatted as instructions.\n",
"* `information-extraction`: The dataset can be used for information extraction tasks.\n",
"* `text-generation`: The dataset can be used for text generation tasks, as the dataset is formatted as instructions.\n",
"\n",
"### Languages\n",
"\n",
@@ -124,7 +107,9 @@
"id": "3f161970acf83cfa",
"metadata": {},
"outputs": [],
"source": "display(ds[\"train\"][0])"
"source": [
"display(ds[\"train\"][0])"
]
},
{
"cell_type": "markdown",
128 changes: 128 additions & 0 deletions nbs/Dataset Cards/03_Graph_Description.md
@@ -0,0 +1,128 @@
# Polish Court Judgments Graph

## Dataset description
We introduce a graph dataset of Polish Court Judgments. This dataset is primarily based on [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw). It consists of nodes representing either judgments or legal bases, and edges connecting judgments to the legal bases they refer to, so the graph is bipartite. We also removed small disconnected components, leaving a single giant component. We provide the dataset in both `JSON` and `PyG` formats, each serving a different purpose: the graphs are structurally identical in both formats, but their attributes differ.

The `JSON` format is intended for analysis and contains most of the attributes available in [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw). We excluded some less useful attributes and the text content, which can be easily retrieved from the raw dataset and added to the graph as needed.

The `PyG` format is designed for machine learning applications, such as link prediction on graphs, and is fully compatible with the [`PyTorch Geometric`](https://github.com/pyg-team/pytorch_geometric) framework.

In the following sections, we provide a more detailed explanation and use case examples for each format.

## Dataset statistics

| feature | value |
|----------------------------|----------------------|
| #nodes | 369033 |
| #edges | 1131458 |
| #nodes (type=`judgment`) | 366212 |
| #nodes (type=`legal_base`) | 2819 |
| avg(degree)                | 6.13                 |


![Degree distribution](../images/degree_distribution.png)



## `JSON` format

The `JSON` format contains graph node types differentiated by the `node_type` attribute. Each `node_type` has its own additional attributes (see [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw) for a detailed description of each attribute):

| node_type | attributes |
|--------------|---------------------------------------------------------------------------------------------------------------------|
| `judgment` | `_id`,`chairman`,`court_name`,`date`,`department_name`,`judges`,`node_type`,`publisher`,`recorder`,`signature`,`type` |
| `legal_base` | `isap_id`,`node_type`,`title` |

### Loading
The graph in `JSON` format is stored as node-link data and can be readily loaded with the `networkx` library:

```python
import json

import networkx as nx
from huggingface_hub import hf_hub_download

DATA_DIR = "<your_local_data_directory>"
JSON_FILE = "data/judgment_graph.json"
hf_hub_download(repo_id="JuDDGES/pl-court-graph", repo_type="dataset", filename=JSON_FILE, local_dir=DATA_DIR)

with open(f"{DATA_DIR}/{JSON_FILE}") as file:
    g_data = json.load(file)

g = nx.node_link_graph(g_data)
```

### Example usage
A minimal sketch, assuming the graph `g` loaded above (attribute names come from the table in this section): count nodes of each type, then list the legal bases a single judgment cites.
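```python
from collections import Counter

# Count nodes of each type (judgment vs legal_base)
print(Counter(node_type for _, node_type in g.nodes(data="node_type")))

# Pick an arbitrary judgment node and list the titles of the legal bases it cites
judgment = next(n for n, attrs in g.nodes(data=True) if attrs["node_type"] == "judgment")
for legal_base in g.neighbors(judgment):
    print(g.nodes[legal_base]["title"])
```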

## `PyG` format

The `PyTorch Geometric` format includes embeddings of the judgment content, obtained with [sdadas/mmlw-roberta-large](https://huggingface.co/sdadas/mmlw-roberta-large), for judgment nodes, and one-hot-vector identifiers for legal-base nodes (note that, for efficiency, one can substitute these with random-noise identifiers, as in [(Abboud et al., 2021)](https://arxiv.org/abs/2010.01179)).
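As a rough sketch of that substitution (only the legal-base node count comes from the statistics above; the feature dimension is an arbitrary assumption):

```python
import torch

num_legal_base = 2819  # number of legal_base nodes, from the statistics table

# One-hot identifiers, as shipped in the PyG dataset
one_hot_ids = torch.eye(num_legal_base)

# Random identifiers in the spirit of Abboud et al. (2021): much lower
# dimension, at the cost of non-deterministic node features
random_ids = torch.randn(num_legal_base, 64)
```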



### Loading
To load the graph as a PyTorch Geometric dataset, one can use the following snippet:
```python
import os

import torch
from torch_geometric.data import InMemoryDataset, download_url


class PlCourtGraphDataset(InMemoryDataset):
    URL = (
        "https://huggingface.co/datasets/JuDDGES/pl-court-graph/resolve/main/"
        "data/pyg_judgment_graph.pt?download=true"
    )

    def __init__(self, root_dir: str, transform=None, pre_transform=None):
        super().__init__(root_dir, transform, pre_transform)
        data_file, index_file = self.processed_paths
        self.load(data_file)
        self.judgment_idx_2_iid, self.legal_base_idx_2_isap_id = torch.load(index_file).values()

    @property
    def raw_file_names(self) -> str:
        return "pyg_judgment_graph.pt"

    @property
    def processed_file_names(self) -> list[str]:
        return ["processed_pyg_judgment_graph.pt", "index_map.pt"]

    def download(self) -> None:
        os.makedirs(self.root, exist_ok=True)
        # URL already points at the raw .pt file; download_url strips the
        # ?download=true query string from the saved filename
        download_url(self.URL, self.raw_dir)

    def process(self) -> None:
        # the raw file bundles the graph and index maps back to source ids
        dataset = torch.load(self.raw_paths[0])
        data = dataset["data"]

        if self.pre_transform is not None:
            data = self.pre_transform(data)

        data_file, index_file = self.processed_paths
        self.save([data], data_file)

        torch.save(
            {
                "judgment_idx_2_iid": dataset["judgment_idx_2_iid"],
                "legal_base_idx_2_isap_id": dataset["legal_base_idx_2_isap_id"],
            },
            index_file,
        )

    def __repr__(self) -> str:
        return f"{self.__class__.__name__}({len(self)})"


ds = PlCourtGraphDataset(root_dir="data/datasets/pyg")
print(ds)
```

### Example usage
A minimal sketch, assuming the `ds` instance created above; the exact attributes of the underlying `data` object are not documented here, so treat this as illustrative.
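```python
# Inspect the loaded graph
data = ds[0]
print(data)
print(f"nodes: {data.num_nodes}, edges: {data.num_edges}")

# Peek at the mapping from node index back to judgment id
# (assumed to be a dict, as suggested by the .values() unpacking above)
print(next(iter(ds.judgment_idx_2_iid.items())))
```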
Binary file added nbs/images/degree_distribution.png
4 changes: 2 additions & 2 deletions nbs/index.ipynb
@@ -31,11 +31,11 @@
"\n",
"### Installation\n",
"- to install necessary dependencies use available `Makefile`, you can use `python>=3.10`:\n",
" ```shell\n",
" ```\n",
" make install\n",
" ```\n",
"- if you want to run evaluation and fine-tuning with `unsloth`, use the following command with `python=3.10` inside conda environment:\n",
" ```shell\n",
" ```\n",
" make install_unsloth\n",
" ```\n",
"\n",
19 changes: 19 additions & 0 deletions nbs/sidebar.yml
@@ -0,0 +1,19 @@
website:
sidebar:
contents:
- index.ipynb
- section: Data
contents:
- Data/00_prepare_instruction_dataset_for_ir.ipynb
- Data/01_Dataset_Description.ipynb
- Data/02_Analyse_sft.ipynb
- Data/03_Graph_Dataset_Description.ipynb
- Data/04_Graph_Analysis.ipynb
- section: Dataset Cards
contents:
- Dataset Cards/01_Dataset_Description_Raw.ipynb
- Dataset Cards/02_Dataset_Description_Instruct.ipynb
- Dataset Cards/03_Graph_Description.md
- section: Presentations
contents:
- Presentations/00_workshop_demo.ipynb
