
feature: rng primitive refactoring #3040

Conversation

@Alexandr-Solovev Alexandr-Solovev commented Jan 10, 2025

Description

Feature: RNG primitive refactoring

Summary:

This PR unifies the API of the oneDAL RNG primitive functions. It includes various fixes and modifications to the RNG primitive.

Key Changes:

  1. New generators have been added:

    • The mrg32k3a and philox4x32x10 engines have been added to DAAL/oneDAL.
  2. Host and device engines have been refactored and added:

    • RNG can now be used on all devices.

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for the affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended benchmarking suite and provided corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

@Alexandr-Solovev
Contributor Author

/intelci: run

@Alexandr-Solovev Alexandr-Solovev changed the title inc 1 feature: rng primitive refactoring Jan 22, 2025
@Alexandr-Solovev Alexandr-Solovev added the dpc++ Issue/PR related to DPC++ functionality label Jan 22, 2025
@Alexandr-Solovev Alexandr-Solovev marked this pull request as ready for review January 22, 2025 20:53
@Alexandr-Solovev
Contributor Author

/intelci: run

@Alexandr-Solovev
Contributor Author

Corrected CI job for the last commit:
https://intel-ci.intel.com/efe23a65-55b1-f183-ad29-a4bf010d0e2d

@ethanglaser left a comment

Mostly reviewed, looking good! Please add a PR description. Docs have not been added, so that checkbox should not be checked; none of the .rst files for the other engines exceeds 70 lines, so the docs should be easy to add in this PR.

/// @param[in, out] dst Pointer to the array to be shuffled.
/// @param[in] engine_ Reference to the device engine.
template <typename Type>
void partial_fisher_yates_shuffle(ndview<Type, 1>& result_array,
Contributor:

Just wondering, why was this combined into the host_engine file? Previously it was in its own file.

Contributor (Author):

Because, in general, it makes sense to keep all RNG functions in one place and align the API. I will do the same for rnd_seq in the next PRs.

Contributor:

Yes, although we are using it in kmeans_init as well.

/// @param[in] method The rng engine type. Defaults to `mt19937`.
/// @param[in] deps Dependencies for the SYCL event.
template <typename Type>
sycl::event partial_fisher_yates_shuffle(sycl::queue& queue_,
Contributor:

Would it be possible to wrap the host function somehow to avoid code duplication? It looks like everything aside from the engine and event logic is exactly the same.

Contributor (Author):

For the sycl::event functions it will be reimplemented only on GPU (as is done for uniform). For the host versions I will try to call the host_engine functions, but I am not sure about compatibility (likely in the next PR).

Contributor (Author):

I double-checked, and the blocker is that we can't get a host engine from a device engine (they are different classes) to provide it to the host_engine functions.

Contributor:

I know that this was implemented before this PR, but is there any test for partial_fisher_yates_shuffle showing that the CPU and GPU versions yield the same/correct values? I know this is for uniform without replacement @ahuber21 (#2292), so it should be relatively easy to test (it doesn't seem like it or uniform_without_replacement is tested).

const DataType* val_arr_2_host_ptr = arr_2_host.get_data();

for (std::int64_t el = 0; el < arr_2_host.get_count(); el++) {
// Due to MKL inside generates floats on GPU and doubles on CPU, it makes sense to add minor eps.
Contributor:

Maybe eps should be a function argument? Also, what is the range of the result values here? 0.01 seems a bit large.

Contributor (Author):

More likely it's due to a mismatch inside MKL, and it's not possible to fix it on the oneDAL side.

Contributor:

The first test (host vs. device) makes sense. What is the goal of the next two? Maybe they could be combined?

Overall, the host-vs-device test looks good; I am not sure whether more test scope is required.

@Alexandr-Solovev (Contributor, Author) commented Feb 4, 2025:

The goal of the next two is:
1) Compare n GPU-generated values + n CPU-generated values vs. 2n GPU-generated values.
2) Compare n GPU-generated values + n CPU-generated values vs. 2n CPU-generated values.
They check the compatibility between the GPU and CPU engines and the continuity of the generation process.

Contributor:

Please add some version of this explanation to the codebase as a comment (it will be helpful in the future).

@icfaust left a comment

One thing that I don't like is that the skips necessary to keep the two RNGs in step are not done by the class itself, but by the functions that call the class. This means that if someone were to touch the engine and use it outside of functions like uniform or uniform_without_replacement, they are likely going to mess the state up. At first this caused me a lot of confusion (seeing CPU calls in GPU functions), but I figured it out. Ideally, when the RNG is asked for a count of values, it should automatically skip the other engine ahead upon completion. This may impact runtime when successive small-count queries are made, but I haven't yet looked closely at the dal side to see whether that is the case in any of the functions. @Vika-F may weigh in on this; I assume this was discussed in the design?


/// - `mt19937`: Standard Mersenne Twister engine with a period of \(2^{19937} - 1\).
/// - `mrg32k3a`: Combined multiple recursive generator with a period of \(2^{191}\).
/// - `philox4x32x10`: Counter-based RNG engine optimized for parallel computations.
enum class engine_type { mt2203, mcg59, mt19937, mrg32k3a, philox4x32x10 };
Contributor:

I notice there is no central definition in daal of the various engines like this. While this makes sense, it does create a larger maintenance burden: RNGs added in the future will have to be added here to be usable from dal, not only in daal. Just wanted to note this; no need to change anything.

Contributor (Author):

Yes, but I guess the daal-side addition should be copy-paste with renaming.


throw domain_error(dal::detail::error_messages::unsupported_data_type());
}
void* state = engine_.get_host_engine_state();
engine_.skip_ahead_gpu(count);
Contributor:

I am just wondering why leapfrog etc. are not possible, and why this is coded only for skip_ahead. If leapfrog isn't possible on the SYCL side of the codebase, it would be good to add an explanation somewhere. If leapfrog works anywhere on the dal side, it would be good to leave hooks for it available and to generalize instead of only skip_ahead_gpu.

Contributor (Author):

Good point. Leapfrog is available from the oneMKL API, but I guess it's okay to add it in the next PR (for now it's not necessary).

Contributor:

Please add a comment if future revisions are planned.

: count_(count),
base_seed_(seed) {
engines_.reserve(count_);
if (method == engine_type::mt2203) {
Contributor:

I know why this was done, but there should be a comment here so future readers understand why mt2203 is a special case (i.e., the engine-family business).

Contributor (Author):

Good point, I will add it.

Contributor:

Please add it before merging.

Type* dst,
const event_vector& deps) {
switch (engine_.get_device_engine_base_ptr()->get_engine_type()) {
case engine_type::mt2203: {
Contributor:

If it were me, I'd use a macro to simplify this case statement, since it's just setting a template value, but I'm not sure whether that's kosher on the dal side of the codebase.

@Vika-F commented Feb 4, 2025:

The reason for the switch was to get rid of the Engine template parameter in distribution functions like uniform, because the template makes the code there uglier.
Unfortunately, having the switch here is the only option, as the compile-time information is lost when we move from the particular engine types to the general device_engine type.

Performance measurements showed that this switch does not have a negative impact on performance; it is nothing compared to the time of random number generation.

@Vika-F
Contributor

Vika-F commented Feb 4, 2025

@icfaust, @Alexandr-Solovev

"So one thing that I don't like is that the skips which are necessary to keep the two RNGs in step are not done by the class itself, but the functions which call the class."

Thank you for this observation. It would really be beneficial not to do a skip_ahead_?pu on each uniform (or other similar) function call to synchronize the states of the CPU and GPU engines.
We will definitely need to replace those skip_ahead_?pu calls with counter updates.
But I propose to leave this change for one of the subsequent PRs, as this refactoring PR is already pretty heavy.

@Alexandr-Solovev
Contributor Author

/intelci: run

@Vika-F left a comment

LGTM

@Alexandr-Solovev
Contributor Author

/intelci: run

@ethanglaser left a comment
doc review

@Alexandr-Solovev
Contributor Author

/intelci: run

@ethanglaser left a comment

CI and changes look good! Please add comments as discussed (and for the areas that will be improved in the future), and address any remaining feedback as needed.

@Alexandr-Solovev Alexandr-Solovev merged commit 1969dec into uxlfoundation:main Feb 5, 2025
18 checks passed
Labels
dpc++ Issue/PR related to DPC++ functionality