dataset Annotation #6

Open · tzktok opened this issue May 25, 2024 · 27 comments
@tzktok commented May 25, 2024

I want to fine-tune the UniTable model on my custom dataset. How should I handle the annotation process? Is there any tool available for your annotation method?
@matthewdhull @polochau @haekyu @helblazer811 @ShengYun-Peng

@ShengYun-Peng (Contributor)

Hi @tzktok, thanks for your interest! As stated in the paper, we used publicly available datasets while training UniTable. I will share the papers for these datasets below; their annotation processes may be helpful to you!

PubTabNet: https://github.com/ibm-aur-nlp/PubTabNet
SynthTabNet: https://arxiv.org/abs/2203.01017
FinTabNet: https://developer.ibm.com/exchanges/data/all/fintabnet/

@whalefa1I

I have used my own data to fine-tune the model, and the results have been very good. Thank you for your efforts. However, the inference speed does not meet my requirements. Are there any good methods to speed up inference? I have tried using TensorRT, but the improvement was not significant. Should I consider adding a KV cache to reduce the time spent on inference?

@ShengYun-Peng (Contributor)

Glad to know the finetuning went well! Yes, UniTable was implemented with a vanilla transformer architecture. A KV cache like the one in the llama3 architecture here will largely speed up inference. Interested in opening a PR?

@whalefa1I

I will try to add this part, and when all goes well I will submit the PR~

@ShengYun-Peng (Contributor)

Thanks! I would recommend starting by implementing the kv-cache logic in the pipeline notebook and comparing speed.
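
For anyone picking this up, here is a minimal sketch of the idea in PyTorch (module and variable names are illustrative, not taken from the repo): cache each layer's key/value tensors from previous decoding steps so that every new step only projects and attends for the latest token.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CachedSelfAttention(nn.Module):
    # Illustrative causal self-attention with a key/value cache; not the repo's implementation.
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x, cache=None):
        # x: (batch, new_tokens, d_model); during incremental decoding new_tokens == 1
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, tokens, head_dim)
        q, k, v = (z.view(b, t, self.n_heads, self.head_dim).transpose(1, 2) for z in (q, k, v))
        if cache is not None:
            # prepend everything cached so far, so the new token attends to the full prefix
            k = torch.cat([cache["k"], k], dim=2)
            v = torch.cat([cache["v"], v], dim=2)
        new_cache = {"k": k, "v": v}
        # a causal mask is only needed when more than one query token is processed at once (prefill)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=(t > 1))
        out = out.transpose(1, 2).reshape(b, t, -1)
        return self.proj(out), new_cache

During greedy decoding you would keep one such cache per decoder layer and feed only the most recently generated token back in, instead of re-running the full sequence every step.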

@tzktok (Author) commented May 27, 2024

> I have used my own data to fine-tune the model, and the results have been very good. [...]

How did you annotate your own dataset?

@pincusz commented May 29, 2024

I'm also interested in training using my own dataset but have no idea where to start for annotating it. Any advice? I originally tried using the full_pipeline notebook but it did not create an accurate table from the image.

@lerndeep

I also want to train with a custom dataset.
Could you please share the custom dataset preparation Python file?

@lerndeep

@whalefa1I
Could you please provide the training script for UniTable-large for the bbox, cell, and content training modules?

@Sanster commented May 31, 2024

@whalefa1I May I ask how much data did you use to train in your scenario?

@lerndeep commented Jun 3, 2024

@whalefa1I
Could you please share the custom dataset preparation script?

@whalefa1I

> @whalefa1I May I ask how much data did you use to train in your scenario?

30k maybe? Only the bbox model~

@whalefa1I

> @whalefa1I Could you please provide the training script for UniTable-large for the bbox, cell, and content training modules?

Maybe as long as you find the corresponding option in the CONFIG.mk file and configure it when running the Makefile with the exp name [EXP_$*], it should work, right? Do you want to convert it into a regular training script instead of using Hydra for configuration?

@whalefa1I

> @whalefa1I Could you please share the custom dataset preparation script?

Our data annotation format differs from the open-source TSR task annotation method, but both are composed of two coordinate points.

import json
from tqdm import tqdm

# data_from_platform_list: the loaded Labelme JSON dicts, one per annotated image
final_label_dataset = []
for data_from_platform in tqdm(data_from_platform_list):
    tmp_bbox_label = {}
    tmp_bbox_label['filename'] = data_from_platform["imagePath"]
    tmp_bbox_label['split'] = 'train'
    cells = []
    for sh in data_from_platform["shapes"]:
        label = sh["label"]
        points = sh["points"]
        # keep the first and third corner as top-left / bottom-right: [x1, y1, x2, y2]
        bbox = [int(points[0][0]), int(points[0][1]), int(points[2][0]), int(points[2][1])]
        cells.append({"tokens": label, "bbox": bbox})
    tmp_bbox_label['cells'] = cells
    final_label_dataset.append(tmp_bbox_label)

# write one JSON object per line (jsonl-style)
with open('./train_data4unitable.json', 'w') as file:
    for data in final_label_dataset:
        file.write(json.dumps(data) + '\n')
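
For reference, each line of the resulting train_data4unitable.json then looks roughly like this (field values are illustrative):

{"filename": "table_0001.png", "split": "train", "cells": [{"tokens": "Revenue", "bbox": [12, 8, 96, 30]}, {"tokens": "2023", "bbox": [100, 8, 150, 30]}]}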

@lerndeep commented Jun 4, 2024

@whalefa1I

  1. Using this, you train for cell detection and content recognition, right?
  2. Have you done pretraining, or only fine-tuning?

In my case a table has around 1000 cells, so I don't know whether fine-tuning with only an increased max length will work well or not.

@whalefa1I

> Thanks! I would recommend starting by implementing the kv-cache logic in the pipeline notebook and comparing speed.

It seems that, either because the decoder has only 4 layers or because there is an error in my implementation, the acceleration is not significant: only about a 7% speedup (varying with the number of bboxes). Due to the differences between my custom attention implementation and the native torch attention (the MAE between the two attention outputs is below 1e-8 in the first layer, but grows to 0.9 after the subsequent cross-attention), it may be necessary to retrain the model. Additionally, I have replaced components with the llama decoder. If you are interested, I can send it to you.
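
For isolating that kind of drift, a standalone sanity check comparing a hand-rolled attention against torch's native kernel can help tell whether the discrepancy comes from the attention math itself or from the surrounding weights (this snippet is independent of the repo):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
b, h, tq, tk, d = 2, 4, 7, 11, 32  # batch, heads, query len, key len, head dim
q = torch.randn(b, h, tq, d)
k = torch.randn(b, h, tk, d)
v = torch.randn(b, h, tk, d)

# hand-rolled scaled dot-product attention
scores = q @ k.transpose(-2, -1) / d ** 0.5
manual = torch.softmax(scores, dim=-1) @ v

# native fused implementation
native = F.scaled_dot_product_attention(q, k, v)

# the mean absolute error should stay near float32 rounding noise (~1e-7 or smaller)
print((manual - native).abs().mean().item())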

@lerndeep commented Jun 4, 2024

> Our data annotation format differs from the open-source TSR task annotation method, but both are composed of two coordinate points. [...]

Thank you for sharing.
Have you trained the table structure part or not?
If yes, how did you label the dataset in HTML format where colspan/rowspan are present?

@whalefa1I

> In my case a table has around 1000 cells, so I don't know whether fine-tuning with only an increased max length will work well or not.

This is an interesting issue. I am currently using the llama decoder to reproduce the model, and its special positional encoding might have some capability for length-extension. However, for your case, I think it might be difficult. The out-of-distribution (OOD) phenomenon is likely to be significant, and you may need more data to support 4k token output.

@whalefa1I

> Have you trained the table structure part or not? If yes, how did you label the dataset in HTML format where colspan/rowspan are present?

This is related to our annotation format. We generate HTML tags from bbox annotations using a set of heuristic rules, so the entire process only requires a bbox model.
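
A much-simplified sketch of that kind of heuristic, clustering cell bboxes into rows by their vertical centers and emitting plain <tr>/<td> tags (this ignores merged cells, i.e. no rowspan/colspan handling, and is not our exact rule set):

def bboxes_to_html(cells, row_tol=10):
    # cells: list of {"tokens": str, "bbox": [x1, y1, x2, y2]}, as in the training JSON above.
    # Group cells into rows whose y-centers are within row_tol pixels, then sort each row left to right.
    rows = []
    for cell in sorted(cells, key=lambda c: (c["bbox"][1] + c["bbox"][3]) / 2):
        yc = (cell["bbox"][1] + cell["bbox"][3]) / 2
        if rows and abs(yc - rows[-1]["yc"]) <= row_tol:
            rows[-1]["cells"].append(cell)
        else:
            rows.append({"yc": yc, "cells": [cell]})
    html = ["<table>"]
    for row in rows:
        tds = "".join("<td>" + c["tokens"] + "</td>" for c in sorted(row["cells"], key=lambda c: c["bbox"][0]))
        html.append("<tr>" + tds + "</tr>")
    html.append("</table>")
    return "".join(html)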

@lerndeep commented Jun 4, 2024


Could you please let me know the process or code of the heuristic rules used to generate HTML from the Labelme JSON format?

It would be really helpful for me.

@Sanster commented Jun 4, 2024

> @whalefa1I May I ask how much data did you use to train in your scenario?
>
> 30k maybe? Only the bbox model~

Thank you for your reply. I would also like to ask: in your scenario, what are the advantages of using UniTable, which obtains bbox coordinates through autoregressive decoding, compared to using an object detection model (such as YOLO)?

BTW, I added a decoder with a KV cache in PR #11, which achieves about a 30% improvement in inference speed with batch_size=1.


@whalefa1I commented Jun 4, 2024


  1. Intuitively, direct object detection might not yield good results due to the presence of borderless tables or merged cells. Therefore, I have not trained a direct object detection model, but I am currently exploring related projects. This project has inspired me to modify the data annotation format, thereby reducing model calls. I have also compared other open-source TSR models and believe that the pretraining benefits of UniTable transfer well to my own dataset.
  2. Thank you for your PR on the KV cache. May I ask whether you are able to reproduce the same outputs as the original weights? I suspect there might be an issue with my implementation, as I have obtained inconsistent outputs and results compared to yours.

@Sanster commented Jun 4, 2024


I checked the results of the images in the dataset/mini_pubtabnet/val directory through full_pipeline.ipynb, and based on the visualization results, the output is the same as the original model.

@xuzmocode4-325

Hey @whalefa1I, I'm wondering if you can assist.

I have a dataset that comprises PDFs with matching XML in SVG format derived from D3.js.

I have bboxes and tokens for all the text, but since the images have to be resized, how do I ensure that the existing annotations will still correspond with the downsampled images when fine-tuning?

Is the SVG tag structure useful? Would I need to add the SVG tags to the existing HTML vocab file?

Also, some tables overflow onto other pages. When converting with pdf2image, how can I keep the box locations in each image consistent with the source PDF?
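
To make the resizing question concrete, by "correspond with the downsampled images" I mean something like rescaling every bbox by the same width/height ratios used for the resize (assuming [x1, y1, x2, y2] pixel coordinates); is that the right approach here?

def resize_with_bboxes(image, bboxes, target_size):
    # image: a PIL.Image; target_size: (new_w, new_h); bboxes: list of [x1, y1, x2, y2]
    w, h = image.size
    new_w, new_h = target_size
    sx, sy = new_w / w, new_h / h
    resized = image.resize(target_size)
    # scale every box by the same factors applied to the image
    scaled = [[x1 * sx, y1 * sy, x2 * sx, y2 * sy] for x1, y1, x2, y2 in bboxes]
    return resized, scaled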

@whalefa1I commented Jul 24, 2024


  1. Could you please share some samples of your dataset so that I can see whether they can be converted into the data format I use for training?
  2. Since I have not fine-tuned the HTML model or the content model I don't know if this will help, but in the early months I tried adding a "border=1" tag for bordered/borderless tables to the HTML tags. That requires adding the tag to the vocab.json file, and it works, so if you want the HTML model to generate related tokens you can consider adding SVG tags to the vocab file (see the sketch after this list);
  3. Sorry, in my case I converted PDF files into images and obtained the table regions through a document layout analysis model, and cross-page tables were merged through specific business logic, so I did not consider table-merging logic for the general PDF scenario.
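
Roughly what I mean by adding tags to the vocab, assuming the vocab file is a flat token-to-id JSON mapping (adjust if the repo stores it differently), and keeping in mind that the decoder's embedding and output layers then need to be resized to the new vocab size:

import json

def extend_vocab(vocab_path, new_tokens):
    # Append new token strings to a token->id vocab JSON, assigning fresh consecutive ids.
    with open(vocab_path) as f:
        vocab = json.load(f)
    next_id = max(vocab.values()) + 1
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = next_id
            next_id += 1
    with open(vocab_path, "w") as f:
        json.dump(vocab, f, ensure_ascii=False, indent=2)

extend_vocab("vocab.json", ["<svg>", "</svg>", "<g>", "</g>"])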

@xuzmocode4-325 commented Jul 25, 2024

201124.pdf
Sample Log SVG

Hey @whalefa1I

> 1. Could you please share some samples of your dataset so that I can see whether they can be converted into the data format I use for training?

Sure. I've shared a sample PDF with matching XML doc (SVG tag).

> 2. [...] if you want the HTML model to generate related tokens you can consider adding SVG tags to the vocab file.

Thanks, will try this out.

@num3num commented Jul 29, 2024

> @whalefa1I May I ask how much data did you use to train in your scenario?
>
> 30k maybe? Only the bbox model~

Hi, have you trained the bbox model with your own dataset? Can you share the specific steps?
