Deep-learning

  • Published on
    Last week I blogged about how Quantization can help you run your models on lower-powered hardware. In today's blog, I extend the discussion to ONNX (Open Neural Network Exchange), which provides a standard format for representing machine learning models. This enables interoperability between frameworks and simplifies deployment across diverse hardware, including browser-based inference with onnxruntime-web. I have also included a demo that runs a model in the browser; a minimal export-and-run sketch appears after this list.
  • Published on
    When storing data in memory, the data type used to represent it affects both memory usage and the performance of the overall system. Consider saving a number. At a high level, the number can either be an integer (a whole number) or a floating-point number (a number with a decimal part). Floating-point numbers can represent a larger range of values with higher precision. Weights and biases in a large language model, which are learned during training and used to make predictions, are stored as floating-point numbers to maintain high precision. The number of these parameters determines the model's size, its memory usage, and the computational resources needed to run it. In this post, we will discuss how quantization can reduce the memory usage of models and improve performance (assuming the loss of precision is acceptable); a small quantization sketch follows this list.
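
As a taste of the ONNX workflow mentioned above, here is a minimal sketch that exports a model to ONNX and runs it with onnxruntime. It assumes PyTorch, onnxruntime, and numpy are installed; the tiny model and the file name model.onnx are purely illustrative, not the model from the demo.

```python
# Illustrative only: export a tiny PyTorch model to ONNX, then run it with onnxruntime.
import torch
import torch.nn as nn
import onnxruntime as ort

# A small stand-in model; any torch.nn.Module works the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to the ONNX format, using a dummy input to trace shapes.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Load the exported file and run inference without PyTorch in the loop.
session = ort.InferenceSession("model.onnx")
result = session.run(None, {"input": dummy_input.numpy()})
print(result[0])
```

The same exported file can be served to onnxruntime-web in the browser, which is what the demo in the post relies on.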
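
And here is a small, illustrative sketch of the idea behind quantization: symmetric int8 quantization of a single float32 weight matrix using numpy. Real toolchains add calibration, zero-points, and per-channel scales, so treat this as a toy example rather than a production recipe.

```python
# Illustrative only: symmetric int8 quantization of one float32 weight matrix.
import numpy as np

weights = np.random.randn(1024, 1024).astype(np.float32)

# Scale so that the largest absolute value maps to the int8 limit of 127.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize when the values are needed in floating point again.
deq_weights = q_weights.astype(np.float32) * scale

print("float32 size (bytes):", weights.nbytes)    # roughly 4.2 MB
print("int8 size (bytes):   ", q_weights.nbytes)  # roughly 1.05 MB, 4x smaller
print("max abs error:       ", np.abs(weights - deq_weights).max())
```

The 4x memory reduction comes directly from storing each weight in 1 byte instead of 4, at the cost of the small rounding error printed on the last line.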