This is an umbrella issue for tracking progress and discussing tasks relevant to improving the existing benchmarking infrastructure and providing new features that would improve the developer experience.
The objective here is to prepare a solution that improves the process of developing DSLX designs with awareness of physical design limitations. It would be good to have an easy way of defining flows that gather a number of design metrics useful during development. This could be done with a dashboard rule (#1137) that runs various benchmarks and gathers the results in the form of an HTML report.
Such a solution would certainly be useful, for example, in the development of the ZSTD codec (#1211).
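To make the idea concrete, here is a minimal sketch of what the metric-gathering step behind such a report could look like. This is illustrative only: the per-benchmark JSON files, the metric names (`name`, `clock_period_ps`, `area_um2`), and the output layout are assumptions, not the actual rule's interface.

```python
# Sketch: collect per-benchmark metrics from JSON files produced by
# benchmark runs and render them into a single HTML table.
import json
import pathlib

def render_dashboard(metric_files, out_path="dashboard.html"):
    rows = []
    for path in metric_files:
        # Hypothetical schema: each file holds one benchmark's metrics.
        metrics = json.loads(pathlib.Path(path).read_text())
        rows.append(
            "<tr><td>{name}</td><td>{clock_period_ps}</td><td>{area_um2}</td></tr>"
            .format(**metrics)
        )
    html = (
        "<table><tr><th>Design</th><th>Clock period [ps]</th>"
        "<th>Area [um^2]</th></tr>" + "".join(rows) + "</table>"
    )
    pathlib.Path(out_path).write_text(html)

if __name__ == "__main__":
    # Hypothetical input files written by earlier benchmark runs.
    render_dashboard(["adder_metrics.json", "zstd_dec_metrics.json"])
```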
One of the most important performance metrics is the minimal clock period the design allows. However, obtaining an accurate value for this metric requires sampling multiple clock period values for a given design through the Physical Design flow performed by OpenROAD and checking whether timing is met. For that, an integration with the Vizier parameter optimization framework (#1160) would be a great addition.
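The sampling loop described above can be sketched as a simple binary search, assuming feasibility is monotonic in the clock period. `meets_timing` is a hypothetical stand-in for one run of the OpenROAD-based flow that reports whether timing closed; in practice this is exactly the kind of evaluation the Vizier integration would drive with smarter search strategies.

```python
from typing import Callable

def min_clock_period(meets_timing: Callable[[int], bool],
                     lo_ps: int = 100, hi_ps: int = 10_000) -> int:
    """Binary-search the smallest clock period (in ps) that closes timing.

    Assumes `meets_timing` is monotonic: if timing closes at some period,
    it also closes at every longer period.
    """
    while lo_ps < hi_ps:
        mid = (lo_ps + hi_ps) // 2
        if meets_timing(mid):
            hi_ps = mid      # feasible: try a shorter period
        else:
            lo_ps = mid + 1  # infeasible: relax the period
    return lo_ps
```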
In their current form, both efforts share a common component: executing various benchmarks and fetching performance metrics from them. The dashboard uses this to prepare HTML reports, while the Vizier integration performs similar steps to evaluate suggested sets of parameter values.
I believe that this component should be extracted into a separate tool. As per the suggestion from #1160 (comment), it should be usable outside of Bazel, possibly something in the form of run_benchmark.py from #963. The dashboard rule should also be easy to extend with additional metrics, as suggested in #1137 (comment) and #1137 (comment).
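A standalone runner in the spirit of run_benchmark.py might look like the sketch below: run one benchmark command outside of Bazel and print its metrics as JSON, so the dashboard and the Vizier loop can consume the same output. The flag names and the metrics-file handoff are assumptions for illustration, not an existing interface.

```python
# Sketch: run a benchmark command and emit its metrics as JSON on stdout.
import argparse
import json
import subprocess

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Run a benchmark and print its metrics as JSON.")
    parser.add_argument("--cmd", required=True,
                        help="Benchmark command to execute.")
    parser.add_argument("--metrics_file", required=True,
                        help="JSON file the benchmark writes its metrics to.")
    args = parser.parse_args()

    # Run the benchmark; fail loudly if it does.
    subprocess.run(args.cmd, shell=True, check=True)

    # Forward the metrics so any consumer (dashboard, Vizier) can parse them.
    with open(args.metrics_file) as f:
        print(json.dumps(json.load(f), indent=2))

if __name__ == "__main__":
    main()
```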
One of the issues regarding the Physical Design flow is the difference between the implementations in bazel_rules_hdl and in OpenROAD-Flow-Scripts, which is described in hdl/bazel_rules_hdl#239.
The initial list of related issues and pull requests is presented below (it includes the inline links visible above):
- `run_benchmarks` is referenced in the documentation and the codebase but is not available #963
- Add `top` to the list of valid opt_ir flags #964
- Add `remove_buffers` pass to floorplan stage hdl/bazel_rules_hdl#251

CC @proppy