
Improve benchmarking infrastructure #1226

Open
lpawelcz opened this issue Dec 12, 2023 · 0 comments
This is an umbrella issue for tracking progress and discussing tasks related to improving the existing benchmarking infrastructure and adding new features that would improve the experience.

The objective is to prepare a solution that improves the process of developing DSLX designs with awareness of physical design limitations. It would be useful to have an easy way of defining flows that gather a number of design metrics relevant to design development. This could be done with a dashboard rule (#1137) that runs various benchmarks and gathers the results into an HTML report.
Such a solution would certainly be useful, for example, in the development of the ZSTD codec (#1211).
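To make the dashboard idea concrete, here is a minimal sketch of the metric-gathering step: collecting per-benchmark metrics into a dict and rendering them as a simple HTML table. The metric names, layout, and function names are illustrative assumptions, not the actual rule's output format.

```python
import html

def render_report(results):
    """Render {benchmark_name: {metric_name: value}} as an HTML table.

    Missing metrics are shown as '-' so benchmarks with different
    metric sets can share one table.
    """
    # Union of all metric names across benchmarks, in a stable order.
    metrics = sorted({m for r in results.values() for m in r})
    header = "".join(f"<th>{html.escape(m)}</th>" for m in metrics)
    rows = []
    for name, vals in sorted(results.items()):
        cells = "".join(
            f"<td>{html.escape(str(vals.get(m, '-')))}</td>" for m in metrics
        )
        rows.append(f"<tr><td>{html.escape(name)}</td>{cells}</tr>")
    return (
        "<table><tr><th>benchmark</th>" + header + "</tr>"
        + "".join(rows) + "</table>"
    )
```

A real dashboard rule would feed this from benchmark outputs and write the string to a report file; the sketch only shows the aggregation shape.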

One of the most important performance metrics is the minimal clock period at which the design meets timing. Obtaining this metric accurately requires sampling multiple clock period values for a given design, running each through the Physical Design flow in OpenROAD, and checking whether timing is met. For that, integration with the parameter optimization framework Vizier (#1160) would be a great addition.
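Because timing closure is (to a good approximation) monotonic in the clock period, the sampling described above can be organized as a bisection. The sketch below is a hypothetical illustration: `meets_timing` stands in for a full OpenROAD run that reports whether the design closes timing at a given period, and is not a real XLS or OpenROAD API.

```python
def find_min_clock_period_ps(meets_timing, lo_ps=100, hi_ps=5000, tol_ps=10):
    """Bisect for the smallest clock period (ps) at which timing is met.

    Assumes feasibility is monotonic: if timing closes at period p,
    it also closes at any period > p. `meets_timing` is a placeholder
    for a per-period Physical Design flow run.
    """
    if not meets_timing(hi_ps):
        raise ValueError("design misses timing even at the slowest period")
    while hi_ps - lo_ps > tol_ps:
        mid = (lo_ps + hi_ps) // 2
        if meets_timing(mid):
            hi_ps = mid   # timing met: try a tighter period
        else:
            lo_ps = mid   # timing failed: relax the period
    return hi_ps
```

Each `meets_timing` probe is a full flow run, so the bisection's log-scale probe count is what makes the search practical; a Vizier-driven search would generalize this to multi-parameter sweeps.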

In their current form, both efforts share a common component: executing various benchmarks and fetching performance metrics from them. The dashboard uses this to prepare HTML reports, while the Vizier integration performs similar steps to evaluate suggested sets of parameter values.
I believe this component should be extracted into a separate tool. As suggested in #1160 (comment), it should be usable outside of Bazel, possibly something in the form of the run_benchmark.py from #963.
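A standalone runner in that spirit could be as small as the sketch below: invoke one benchmark command and parse its stdout as a JSON metrics dict, so both the dashboard and the Vizier loop can consume the same output. The subcommand it would invoke and the metric keys are placeholders, not real XLS tool names.

```python
import json
import subprocess
import sys

def run_benchmark(cmd):
    """Run one benchmark command and parse its stdout as JSON metrics.

    `cmd` is the full argv of the benchmark to execute; the benchmark is
    assumed (hypothetically) to print a single JSON object of metrics.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

if __name__ == "__main__":
    # Usage sketch: run_benchmark.py <benchmark-cmd> [args...]
    json.dump(run_benchmark(sys.argv[1:]), sys.stdout, indent=2)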

The dashboard rule should be easy to extend with additional metrics, as suggested in #1137 (comment) and #1137 (comment).

One of the issues regarding the Physical Design flow is the difference between the implementation in bazel_rules_hdl and the one in OpenROAD-Flow-Scripts, which is described in hdl/bazel_rules_hdl#239.

The initial list of related issues and pull requests is presented below (it includes the inline links mentioned above):

CC @proppy
