
src: cpu: aarch64: lowp_matmul: Make weights constant #2194

Draft: fadara01 wants to merge 1 commit into main
Conversation

fadara01 (Contributor)
Description

Setting the weights as constant allows us to avoid redundant pretranspose and reduction operations in Arm Compute Library (ACL) every time execute is called (they are now run once and cached). This delivers big speedups, especially for relatively small matmuls.

Note that this is a temporary fix that needs to be handled carefully by primitive caches in frameworks, since the ACL object now holds more state - i.e. we want to make sure that the cache maps a layer with a specific set of weights to the oneDNN primitive storing those weights (see the sketch below).

We're currently working on the proper fix for this, which involves making lowp_gemm stateless and fixed-format in ACL and oneDNN.
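
To illustrate the caveat above, here is a hypothetical framework-side sketch (not oneDNN, ideep, or any framework's actual API; all names are invented) of a primitive cache whose key includes the identity of the weight buffer, so that a primitive holding cached ACL weight state is never reused with different weights:

#include <cstdint>
#include <map>
#include <tuple>

// Placeholder for a cached oneDNN matmul primitive; a real integration
// would wrap dnnl::matmul and its primitive descriptor here.
struct cached_matmul_t {};

// Cache key: the layer shape plus an identifier for the weight buffer
// (here simply its address). Since the ACL object inside the primitive
// now caches pretransposed/reduced weights, reusing a primitive with a
// different weight buffer would compute with stale weights.
using key_t = std::tuple<int64_t, int64_t, int64_t, const void *>;

static std::map<key_t, cached_matmul_t> primitive_cache;

cached_matmul_t &get_or_create(
        int64_t M, int64_t N, int64_t K, const void *wei_ptr) {
    key_t key {M, N, K, wei_ptr};
    auto it = primitive_cache.find(key);
    if (it == primitive_cache.end())
        it = primitive_cache.emplace(key, cached_matmul_t {}).first;
    return it->second;
}

int main() {
    float weights[16] = {};
    get_or_create(4, 4, 4, weights); // first call creates and caches
    get_or_create(4, 4, 4, weights); // same weights: cache hit is safe
    return 0;
}

Keying on the buffer address is only safe if weight buffers are never reused for different values; hashing the weight contents is the more robust (if costlier) alternative.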

Fixes # (github issue)

Checklist

General

  • Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
  • Have you formatted the code using clang-format?

Performance improvements

  • Have you submitted performance data that demonstrates performance improvements?

New features

  • Have you published an RFC for the new feature?
  • Was the RFC approved?
  • Have you added relevant tests?

Bug fixes

  • Have you included information on how to reproduce the issue (either in a github issue or in this PR)?
  • Have you added relevant regression tests?

RFC PR

  • Does the RFC document follow the template?
  • Have you added a link to the rendered document?

fadara01 added a commit to fadara01/ideep that referenced this pull request Oct 30, 2024
… lowp gemm

This goes hand in hand with oneapi-src/oneDNN#2194, which sets the weights as constant for Arm Compute Library (ACL) lowp gemm objects.

This is a temporary fix which is needed because the lowp gemm ACL object in oneDNN now holds more state. Hence, we need to make sure that the cache maps a layer with a specific set of weights to the oneDNN primitive storing those weights.

We're currently working on the proper fix for this, which involves making lowp_gemm stateless and fixed-format in ACL and oneDNN.
@@ -121,7 +121,7 @@ status_t acl_lowp_matmul_t::pd_t::init(engine_t *engine) {
             = arm_compute::TensorInfo(arm_compute::TensorShape(N(), K()), 1,
                     arm_compute::DataType::QASYMM8_SIGNED,
                     arm_compute::QuantizationInfo(1.0, 0, true));
-    almc_.wei_tensor_info.set_are_values_constant(false);
+    almc_.wei_tensor_info.set_are_values_constant(true);
Contributor

Does ACL assume that the shapes are constant, or that the actual values in the tensor are constant?
In general, oneDNN does not have a concept of constant memory objects, so assuming the values in a memory object are constant is unsafe.
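
For reference, the ACL flag toggled in this diff is named for the tensor's values, not its shape. A minimal standalone sketch of setting it, assuming an Arm Compute Library build (the 64x64 shape and the quantization parameters are illustrative only):

#include "arm_compute/core/TensorInfo.h"

int main() {
    // Weight TensorInfo mirroring the one built in the diff above.
    arm_compute::TensorInfo wei_info(arm_compute::TensorShape(64, 64), 1,
            arm_compute::DataType::QASYMM8_SIGNED,
            arm_compute::QuantizationInfo(1.0, 0, true));
    // Marking the values constant permits ACL to pretranspose/reduce
    // the weights once and reuse the result across execute() calls.
    wei_info.set_are_values_constant(true);
    return wei_info.are_values_constant() ? 0 : 1;
}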
