How to find the size of a deep learning model? Demystifying Model Size: Understanding Deep Learning Model Dimensions. Have you ever wondered how much storage space your fancy deep learning model actually consumes? (2 min read, 05-10-2024)
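The question in this entry largely comes down to simple arithmetic: size in bytes ≈ parameter count × bytes per parameter. A minimal pure-Python sketch of that estimate (the 7-million parameter count is illustrative; in PyTorch you could obtain the real count with `sum(p.numel() for p in model.parameters())`):

```python
# Rough model-size estimate: parameters x bytes per parameter.
# Buffers, optimizer state, and file-format overhead are ignored here.
DTYPE_BYTES = {"float32": 4, "float16": 2, "int8": 1}

def estimate_model_size_mb(num_params: int, dtype: str = "float32") -> float:
    """Approximate on-disk size of the weights in megabytes (MiB)."""
    return num_params * DTYPE_BYTES[dtype] / (1024 ** 2)

# Example: a hypothetical model with 7 million parameters stored in float16.
print(round(estimate_model_size_mb(7_000_000, "float16"), 2))  # ~13.35 MB
```

The same arithmetic explains why casting a float32 model to float16 roughly halves its checkpoint size.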
How to manually dequantize the output of a layer and requantize it for the next layer in PyTorch? Dequantizing and Requantizing Intermediate Layers in PyTorch: A Guide. Quantization is a powerful technique for compressing neural networks and improving… (3 min read, 04-10-2024)
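The dequantize/requantize round trip in this entry rests on the affine quantization formula x ≈ (q − zero_point) × scale. A pure-Python sketch of that math (all scale and zero-point values are made up for illustration; in PyTorch the corresponding operations are `tensor.dequantize()` and `torch.quantize_per_tensor`):

```python
# Affine (asymmetric) uint8 quantization math behind dequantize/requantize.

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Map a quantized integer back to its approximate float value."""
    return (q - zero_point) * scale

def quantize(x: float, scale: float, zero_point: int) -> int:
    """Map a float to the uint8 grid defined by scale/zero_point."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

# Layer 1 emitted q=130 under scale=0.05, zero_point=128.
x = dequantize(130, 0.05, 128)   # back to float: 0.1
# Requantize for layer 2, which expects scale=0.02, zero_point=100.
q2 = quantize(x, 0.02, 100)      # 105
print(x, q2)
```

Because the two layers use different scales and zero points, the integer value changes (130 → 105) even though the underlying float value is preserved; that mismatch is exactly why a manual requantization step is needed.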
Cannot Export HuggingFace Model to ONNX with Optimum-CLI. Troubleshooting Exporting Hugging Face Models to ONNX Using the Optimum CLI. When working with machine learning models, especially those from the Hugging Face library… (2 min read, 26-09-2024)
ONNX-Python: Can someone explain the CalibrationDataReader requested by the static quantization function? Understanding the Calibration Data Reader in ONNX Python's Static Quantization Function. When dealing with deep learning models, quantization is a critical technique… (3 min read, 26-09-2024)
ValueError: You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg when passing the `quantization_config` argument at the same time. Demystifying This ValueError in Hugging Face Transformers. This error message is… (2 min read, 02-09-2024)
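The error in this entry arises because `load_in_4bit`/`load_in_8bit` and `quantization_config` describe the same setting twice, so the loader refuses to accept both. A sketch of that validation logic (illustrative only, not the actual `transformers` source; the fix is to pass either the flag or the config object, e.g. `BitsAndBytesConfig(load_in_4bit=True)`, never both):

```python
# Illustrative re-creation of the conflicting-kwargs check, not the
# real transformers implementation.
def resolve_quantization(load_in_4bit=False, load_in_8bit=False,
                         quantization_config=None):
    if quantization_config is not None and (load_in_4bit or load_in_8bit):
        raise ValueError(
            "You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg "
            "when passing `quantization_config` argument at the same time"
        )
    if quantization_config is not None:
        return quantization_config
    # Fall back to building a config from the boolean flags.
    return {"load_in_4bit": load_in_4bit, "load_in_8bit": load_in_8bit}

print(resolve_quantization(load_in_4bit=True))
```

Passing only one of the two forms resolves the error; which one to prefer is a style choice, though the config object exposes more options.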
What is the difference, if any, between model.half() and model.to(dtype=torch.float16) in huggingface-transformers? Demystifying model.half() vs model.to(dtype=torch.float16) in Hugging Face Transformers. In the world of deep learning, reducing model size and speeding up training… (2 min read, 31-08-2024)
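For this entry, the short answer is that `model.half()` is shorthand for `model.to(dtype=torch.float16)`: both cast the floating-point parameters and buffers to fp16 and leave the result in the same state. A small sketch (assuming `torch` is installed; `nn.Linear` stands in for any Hugging Face model):

```python
import torch
import torch.nn as nn

# Two equivalent ways to cast floating-point parameters to fp16.
model_a = nn.Linear(4, 2)
model_b = nn.Linear(4, 2)

model_a.half()                        # shorthand
model_b.to(dtype=torch.float16)       # explicit dtype

print(model_a.weight.dtype, model_b.weight.dtype)
```

One practical difference is ergonomics: `.to()` also accepts a device, so `model.to("cuda", dtype=torch.float16)` moves and casts in a single call, which `.half()` cannot do.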
Compress a YOLOv8 object detection model (.pt file). Compressing YOLOv8 Models: A Guide to Reducing Your .pt File Size. YOLOv8, with its impressive performance and user-friendly interface, is a popular choice for object… (3 min read, 30-08-2024)
Convert Quantization to ONNX. Demystifying ONNX Conversion of Quantized Models: A Case Study. This article delves into the common issue of increased model size during ONNX conversion, particularly… (3 min read, 30-08-2024)
Unable to build interpreter for TFLite ViT-based image classifiers on Dart / Flutter: Didn't find op for builtin opcode 'CONV_2D' version '6'. Troubleshooting TFLite Interpreter Errors with ViT Models in Flutter. This article will guide you through resolving the error "Didn't find op for builtin opcode… (3 min read, 28-08-2024)
ValueError: ('Expected `model` argument to be a `Model` instance, got ', <keras.engine.sequential.Sequential object at 0x7f234263dfd0>). Understanding the ValueError "Expected `model` argument to be a `Model` instance" in Keras. This error message… (4 min read, 27-08-2024)