---
license: apache-2.0
library_name: PaddleOCR
language:
- en
- zh
pipeline_tag: image-to-text
tags:
- OCR
- PaddlePaddle
- PaddleOCR
- layout_detection
---

# PicoDet_layout_1x_table

## Introduction

A high-efficiency layout area localization model trained on a self-built dataset using PicoDet-1x, capable of detecting table regions. The key metrics are as follows:

| Model | mAP(0.5) (%) |
| --- | --- |
| PicoDet_layout_1x_table | 97.5 |

## Quick Start

### Installation

1. PaddlePaddle

Please refer to the following commands to install PaddlePaddle using pip:

```bash
# for CUDA11.8
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/

# for CUDA12.6
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/

# for CPU
python -m pip install paddlepaddle==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
```

For details about PaddlePaddle installation, please refer to the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/en/install/quick).

2. PaddleOCR

Install the latest version of the PaddleOCR inference package from PyPI:

```bash
python -m pip install paddleocr
```

### Model Usage

You can quickly experience the functionality with a single command:

```bash
paddleocr layout_detection \
    --model_name PicoDet_layout_1x_table \
    -i https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/N5C68HPVAI-xQAWTxpbA6.jpeg
```

You can also integrate the model inference of the layout detection module into your project. Before running the following code, please download the sample image to your local machine.

```python
from paddleocr import LayoutDetection

model = LayoutDetection(model_name="PicoDet_layout_1x_table")
output = model.predict("N5C68HPVAI-xQAWTxpbA6.jpeg", batch_size=1, layout_nms=True)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```

After running, the obtained result is as follows:

```json
{'res': {'input_path': '/root/.paddlex/predict_input/N5C68HPVAI-xQAWTxpbA6.jpeg', 'page_index': None, 'boxes': [{'cls_id': 0, 'label': 'Table', 'score': 0.9617661237716675, 'coordinate': [435.82446, 106.01748, 665.04346, 316.21014]}, {'cls_id': 0, 'label': 'Table', 'score': 0.9583022594451904, 'coordinate': [72.52834, 106.46287, 322.751, 301.454]}]}}
```

The visualized image is as follows:

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/Ok0E4g-kTS6ttCh771wKd.jpeg)

For details about usage commands and descriptions of parameters, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/module_usage/layout_detection.html#iii-quick-integration).
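To post-process the detection results in your own code, you can load the saved JSON with the standard library. The snippet below is a minimal sketch, assuming the file written by `res.save_to_json()` mirrors the structure printed above (a `boxes` list whose entries carry `label`, `score`, and `coordinate` fields); the score threshold is only illustrative.

```python
import json

# Load the result saved by res.save_to_json(); depending on the PaddleOCR
# version, the top-level "res" wrapper may or may not be present, so handle both.
with open("./output/res.json", "r", encoding="utf-8") as f:
    data = json.load(f)
result = data.get("res", data)

# Keep only confidently detected table regions (the 0.9 threshold is illustrative).
tables = [
    box for box in result.get("boxes", [])
    if box["label"] == "Table" and box["score"] >= 0.9
]
for box in tables:
    x1, y1, x2, y2 = box["coordinate"]
    print(f"Table at ({x1:.1f}, {y1:.1f}) - ({x2:.1f}, {y2:.1f}), score={box['score']:.3f}")
```
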
### Pipeline Usage

A single model's capability is limited, but a pipeline composed of several models can provide greater capacity to resolve difficult problems in real-world scenarios.

#### PP-TableMagic (table_recognition_v2)

The General Table Recognition v2 pipeline (PP-TableMagic) is designed to tackle table recognition tasks, identifying tables in images and outputting them in HTML format. PP-TableMagic includes the following 8 modules:

* Table Structure Recognition Module
* Table Classification Module
* Table Cell Detection Module
* Text Detection Module
* Text Recognition Module
* Layout Region Detection Module (optional)
* Document Image Orientation Classification Module (optional)
* Text Image Unwarping Module (optional)

You can quickly experience the PP-TableMagic pipeline with a single command:

```bash
paddleocr table_recognition_v2 -i https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/tuY1zoUdZsL6-9yGG0MpU.jpeg \
    --layout_detection_model_name PicoDet_layout_1x_table \
    --use_doc_orientation_classify False \
    --use_doc_unwarping False \
    --save_path ./output \
    --device gpu:0
```

If `save_path` is specified, the visualization results will be saved under `save_path`.

The command-line method is for a quick experience. For project integration, only a few lines of code are needed:

```python
from paddleocr import TableRecognitionPipelineV2

pipeline = TableRecognitionPipelineV2(
    layout_detection_model_name="PicoDet_layout_1x_table",
    use_doc_orientation_classify=False,  # Use use_doc_orientation_classify to enable/disable the document orientation classification model
    use_doc_unwarping=False,  # Use use_doc_unwarping to enable/disable the document unwarping module
    device="gpu:0",  # Use device to specify the GPU for model inference
)

output = pipeline.predict("tuY1zoUdZsL6-9yGG0MpU.jpeg")

for res in output:
    res.print()  # Print the predicted structured output
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")
```

The default layout detection model in this pipeline is `PP-DocLayout-L`, so you need to explicitly specify `PicoDet_layout_1x_table` via the `layout_detection_model_name` argument. You can also use a local model file via the `layout_detection_model_dir` argument.

For details about usage commands and descriptions of parameters, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/main/en/version3.x/pipeline_usage/table_recognition_v2.html#2-quick-start).

## Links

[PaddleOCR Repo](https://github.com/paddlepaddle/paddleocr)

[PaddleOCR Documentation](https://paddlepaddle.github.io/PaddleOCR/latest/en/index.html)