We currently only use Detectron2's `DefaultPredictor` for inference:

```python
self.predictor = DefaultPredictor(cfg)
```
But the documentation says:
> This is meant for simple demo purposes, so it does the above steps automatically. This is not meant for benchmarks or running complicated inference logic. If you'd like to do anything more complicated, please refer to its source code as examples to build and use the model manually.
One can clearly see that GPU utilization is low, so a multi-threaded implementation with data pipelining would boost performance considerably.
(Call site: `ocrd_detectron2/ocrd_detectron2/segment.py`, line 126 at 0272d95.)
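As a rough illustration of the data-pipelining idea, here is a minimal sketch using only Python's `threading` and `queue`: a producer thread does the CPU-bound loading/preprocessing while the main thread runs inference, so the two stages overlap. The `load_image` and `predict` callables are placeholders, not the actual code in this repo; in practice `predict` would wrap a manually built model (per the Detectron2 docs quoted above) rather than `DefaultPredictor`.

```python
import queue
import threading


def run_pipeline(paths, load_image, predict, queue_size=4):
    """Overlap CPU-side loading with inference via a bounded queue.

    `load_image` and `predict` are hypothetical stand-ins for the real
    preprocessing and model-forward stages.
    """
    q = queue.Queue(maxsize=queue_size)  # bounded: limits memory use
    results = []

    def producer():
        for p in paths:
            q.put(load_image(p))  # CPU-bound stage runs while inference is busy
        q.put(None)               # sentinel: no more work

    t = threading.Thread(target=producer)
    t.start()
    while True:
        item = q.get()
        if item is None:
            break
        results.append(predict(item))  # inference stage (GPU-bound in practice)
    t.join()
    return results
```

A real implementation would additionally batch inputs before the forward pass, since per-image calls leave the GPU idle between requests.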