
Problem when quantizing models #485

Open

@CdAB63

Trying to use a quantized model returns:

$ python detect_video.py --video 0 --weights ./checkpoints/yolov4-tflite-416 --framework tflite

Weights: ./checkpoints/yolov4-tflite-416
Traceback (most recent call last):
  File "detect_video.py", line 125, in <module>
    app.run(main)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "detect_video.py", line 40, in main
    interpreter = tf.lite.Interpreter(model_path=FLAGS.weights)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/tensorflow/lite/python/interpreter.py", line 464, in __init__
    self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
ValueError: Mmap of '4' at offset '0' failed with error '19'.
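
For reference, errno 19 on Linux is ENODEV, which mmap typically returns when the descriptor does not refer to a mappable regular file (for example, when model_path points at a directory such as a SavedModel export rather than a .tflite flatbuffer). A minimal guard before constructing the interpreter, using the path passed via --weights above; the check is a sketch of this assumed failure mode, not repo code:

import os
import tensorflow as tf

# Path passed via --weights in the failing command above.
model_path = "./checkpoints/yolov4-tflite-416"

# tf.lite.Interpreter mmaps model_path, so it must be a regular .tflite
# flatbuffer file, not a SavedModel directory.
if not os.path.isfile(model_path):
    raise ValueError(f"{model_path!r} is not a regular file; "
                     "point --weights at the .tflite file instead.")

interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
print(interpreter.get_input_details())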

The weights were generated with:

$ python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4
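
save_model.py presumably writes a TensorFlow SavedModel directory at ./checkpoints/yolov4-416, which the conversion step below reads. The export can be sanity-checked with the standard SavedModel API; a sketch, not part of the repo:

import tensorflow as tf

# Load the exported SavedModel and list its serving signatures. This only
# verifies the export; it is not how detect_video.py consumes the model.
loaded = tf.saved_model.load("./checkpoints/yolov4-416")
print(list(loaded.signatures.keys()))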

and then converted with:

$ python convert_tflite.py --quantize_mode int8 --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite
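
Note that convert_tflite.py is told to write ./checkpoints/yolov4-416-int8.tflite, while the failing detect_video.py command points --weights at ./checkpoints/yolov4-tflite-416, which matches neither output path above. Assuming the int8 file is the intended model, the invocation would presumably be:

$ python detect_video.py --video 0 --weights ./checkpoints/yolov4-416-int8.tflite --framework tflite

For reference, post-training int8 quantization with the standard TF Lite converter API looks roughly like this; a generic sketch using the paths above, not the repo's convert_tflite.py, with placeholder random calibration input:

import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./checkpoints/yolov4-416")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Placeholder calibration batches; real calibration should feed actual
    # 416x416 images preprocessed the same way as at inference time.
    for _ in range(10):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open("./checkpoints/yolov4-416-int8.tflite", "wb") as f:
    f.write(tflite_model)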
