feature: rng primitive refactoring #3040
Conversation
cpp/daal/src/algorithms/engines/mrg32k3a/mrg32k3a_batch_container.h (outdated thread, resolved)
cpp/oneapi/dal/backend/primitives/rng/dpc_engine_collection.hpp (outdated thread, resolved)
Mostly reviewed, looking good! Please add a PR description. Docs are not added, so this checkbox should not be checked. But none of the .rst files for the other engines are more than 70 lines, so this should be easy to add in this PR.
cpp/oneapi/dal/algo/decision_forest/backend/gpu/train_splitter_impl.hpp (outdated thread, resolved)
/// @param[in, out] dst Pointer to the array to be shuffled.
/// @param[in] engine_ Reference to the device engine.
template <typename Type>
void partial_fisher_yates_shuffle(ndview<Type, 1>& result_array,
Just wondering, why was this combined into the host_engine file? Previously it was in its own file.
Because, in general, it makes sense to keep all RNG functions in one place and align the API. I will do the same for rnd_seq in the next PRs.
Yes, although we are using it in kmeans_init as well
/// @param[in] method The rng engine type. Defaults to `mt19937`.
/// @param[in] deps Dependencies for the SYCL event.
template <typename Type>
sycl::event partial_fisher_yates_shuffle(sycl::queue& queue_,
Would it be possible to somehow wrap the host function to avoid code duplication? It looks like everything aside from the engine and event logic is exactly the same.
For the sycl::event functions it will be reimplemented only on GPU (as was done for uniform). For the host versions I will try to call the host_engine functions, but I'm not sure about compatibility (likely in the next PR).
I double-checked; the blocker is that we can't obtain a host engine from a device engine (they are different classes) and pass it to the host_engine functions.
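One conceivable way around the different-classes blocker, sketched here with hypothetical names (fisher_yates_core and uniform_index are illustrative, not the oneDAL API): template the shuffle core on the engine wrapper, so the host and device paths differ only in how random indices are produced.

```cpp
#include <cstdint>
#include <utility>

// Sketch only: `EngineLike` stands for either the host or device engine
// wrapper, and `uniform_index(engine, lo, hi)` is a hypothetical helper that
// each wrapper would implement on top of its own generator.
template <typename Type, typename EngineLike>
void fisher_yates_core(Type* dst, std::int64_t count, std::int64_t top, EngineLike& engine) {
    // Partial Fisher-Yates: assuming dst holds 0..top-1, shuffle so that the
    // first `count` entries are a uniform sample of [0, top) without replacement.
    for (std::int64_t i = 0; i < count; ++i) {
        const std::int64_t j = uniform_index(engine, i, top); // j in [i, top)
        std::swap(dst[i], dst[j]);
    }
}
```

The event handling would stay in the device overload; only the loop body would be shared.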
I know that this was implemented before this PR, but is there any test for partial_fisher_yates_shuffle showing that the CPU and GPU versions yield the same/correct values? I know this is for uniform without replacement @ahuber21 (#2292), so it should be relatively easy to test (it doesn't seem like either it or uniform_without_replacement is tested).
const DataType* val_arr_2_host_ptr = arr_2_host.get_data();

for (std::int64_t el = 0; el < arr_2_host.get_count(); el++) {
    // Because MKL internally generates floats on GPU and doubles on CPU, it makes sense to allow a minor eps.
Maybe eps should be a function argument? Also, what is the range of the result values here? 0.01 seems a bit large.
More likely it's due to a mismatch inside MKL, and it's not possible to fix it on the oneDAL side.
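As a minimal sketch of the eps-as-argument suggestion (the helper name and signature are hypothetical, not the actual test code):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical comparison helper: the tolerance is passed by the caller
// instead of being hard-coded, so tests can tighten it for engine/type
// combinations where the CPU and GPU MKL paths agree more closely.
template <typename DataType>
bool arrays_match(const DataType* a, const DataType* b, std::int64_t count, double eps) {
    for (std::int64_t el = 0; el < count; ++el) {
        if (std::fabs(static_cast<double>(a[el]) - static_cast<double>(b[el])) > eps) {
            return false;
        }
    }
    return true;
}
```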
The first test (host vs. device) makes sense; what is the goal of the next two? Maybe they could be combined?
But overall the host vs. device test looks good; I'm not sure more test scope is required.
The goal of the next two is:
1) Compare n GPU-generated values + n CPU-generated values vs. 2n GPU-generated values.
2) Compare n GPU-generated values + n CPU-generated values vs. 2n CPU-generated values.
In other words, a check that the GPU and CPU engines are compatible and that the generation process is continuous across them.
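To make the continuity idea concrete, here is a self-contained analogue using std::mt19937 as a stand-in for both engines (the real test compares the MKL host and device engines; this only illustrates the "n + n vs. 2n" pattern):

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1000;
    const unsigned seed = 777;

    // Reference stream: 2n values from a single engine.
    std::mt19937 ref(seed);
    std::vector<std::uint32_t> expected(2 * n);
    for (auto& v : expected) v = ref();

    // Mixed stream: engine A produces the first n values; engine B, seeded
    // identically and skipped ahead by n, continues with the second n.
    std::mt19937 a(seed), b(seed);
    b.discard(n); // mirrors skip_ahead in the PR
    std::vector<std::uint32_t> actual(2 * n);
    for (std::size_t i = 0; i < n; ++i) actual[i] = a();
    for (std::size_t i = 0; i < n; ++i) actual[n + i] = b();

    assert(actual == expected);
    return 0;
}
```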
Please add some version of this explanation into the codebase as a comment (will be helpful for the future).
So one thing that I don't like is that the skips which are necessary to keep the two RNGs in step are not done by the class itself but by the functions that call the class. This means that if someone were to touch the engine and try to use it outside of functions like uniform or uniform_without_replacement, they are likely to mess the state up. At first this caused me a lot of confusion, seeing CPU calls in GPU functions, but I figured it out. Ideally, when the RNG is asked for a count of RNG values, it should automatically skip the other engine ahead upon completion. This may impact runtime when successive small-count queries are made, but I haven't looked closely at the dal side yet to see whether that's the case in any of the functions. @Vika-F may weigh in on this; I assume this was discussed in the design?
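A sketch of the auto-skip idea, again with std::mt19937 standing in for the MKL host and device engines (paired_engine is hypothetical; the PR's classes differ):

```cpp
#include <cstdint>
#include <random>

// Illustration: each generate call advances the sibling engine before
// returning, so callers such as uniform() or uniform_without_replacement()
// would no longer need to issue explicit skip_ahead calls.
class paired_engine {
public:
    explicit paired_engine(unsigned seed) : host_(seed), device_(seed) {}

    std::uint32_t next_on_device() {
        const std::uint32_t v = static_cast<std::uint32_t>(device_());
        host_.discard(1); // keep the host engine in step automatically
        return v;
    }

    std::uint32_t next_on_host() {
        const std::uint32_t v = static_cast<std::uint32_t>(host_());
        device_.discard(1); // keep the device engine in step automatically
        return v;
    }

private:
    std::mt19937 host_;
    std::mt19937 device_;
};
```

Whether the extra skips are acceptable for successive small-count queries is exactly the runtime question raised above.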
/// - `mt19937`: Standard Mersenne Twister engine with a period of \(2^{19937} - 1\).
/// - `mrg32k3a`: Combined multiple recursive generator with a period of \(2^{191}\).
/// - `philox4x32x10`: Counter-based RNG engine optimized for parallel computations.
enum class engine_type { mt2203, mcg59, mt19937, mrg32k3a, philox4x32x10 };
I notice there is no central definition in daal of the various engines like this. While this makes sense, it does create a larger maintenance burden (in other words, if additional RNGs are added in the future, they will have to be added here to be usable in dal, not only in daal). Just wanted to note this; no need to change anything.
Yes, but I guess the daal-side addition should be copy-paste with renaming.
    throw domain_error(dal::detail::error_messages::unsupported_data_type());
}
void* state = engine_.get_host_engine_state();
engine_.skip_ahead_gpu(count);
I am just wondering why leapfrog etc. are not possible, and why this is coded only to skip_ahead. If leapfrog isn't possible in the SYCL side of the codebase, it would be good to add an explanation somewhere. If leapfrog works anywhere on the dal side, it would be good to leave hooks for it available, and generalize instead of only skip_ahead_gpu.
Good point. Leapfrog is available from the oneMKL API, but I guess it's okay to add it in a next PR (for now it's not necessary).
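For reference, oneMKL's host RNG API exposes both advancement styles (oneapi::mkl::rng::skip_ahead and oneapi::mkl::rng::leapfrog). A generalized hook might look like this sketch (advance_engine and the enum are hypothetical, not the PR's code):

```cpp
#include <cstdint>
#include <oneapi/mkl/rng.hpp>

enum class advance_method { skip_ahead, leapfrog };

// Hypothetical generalization: one entry point for both advancement styles
// instead of hard-coding skip_ahead. Leapfrog is only supported by some
// engines, so a real implementation would need to validate the request.
template <typename EngineType>
void advance_engine(EngineType& engine, advance_method method,
                    std::uint64_t num_or_idx, std::uint64_t stride = 0) {
    if (method == advance_method::skip_ahead) {
        oneapi::mkl::rng::skip_ahead(engine, num_or_idx);
    }
    else {
        // Splits the stream into `stride` interleaved substreams and binds
        // this engine to substream `num_or_idx`.
        oneapi::mkl::rng::leapfrog(engine, num_or_idx, stride);
    }
}
```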
add a comment if future revisions are planned
: count_(count),
  base_seed_(seed) {
    engines_.reserve(count_);
    if (method == engine_type::mt2203) {
I know why this was done, but there should be a comment here so future readers understand why MT2203 is a special case (i.e., the family-engine business).
good point, I will add it
please add before merge
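For context, a sketch of the special case with the requested comment (make_engine and elements_per_engine_ are assumed helpers, not the PR's actual construction details):

```cpp
// MT2203 is a *family* of 6024 independent Mersenne Twister engines:
// parallel streams come from distinct family indices with the same seed,
// not from skipping ahead within a single stream.
if (method == engine_type::mt2203) {
    for (std::int64_t i = 0; i < count_; ++i) {
        engines_.push_back(make_engine(method, base_seed_, /*family index =*/ i));
    }
}
else {
    // Single-stream engines: give each instance a disjoint substream by
    // skipping ahead, so the i-th engine starts where the previous one ends.
    for (std::int64_t i = 0; i < count_; ++i) {
        auto engine = make_engine(method, base_seed_);
        engine.skip_ahead(i * elements_per_engine_);
        engines_.push_back(std::move(engine));
    }
}
```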
Type* dst,
const event_vector& deps) {
    switch (engine_.get_device_engine_base_ptr()->get_engine_type()) {
        case engine_type::mt2203: {
If it were me, I'd use a macro to simplify this case statement, since it's just setting a template value, but I'm not sure whether that's kosher on the dal side of the codebase.
The reason for doing the switch was to get rid of the Engine template parameter in the distribution functions like uniform, because the template there makes the code uglier. Having the switch here is unfortunately the only option, as the compile-time information is lost when we move from particular engine types to the general device_engine type.
Performance measurements showed that this switch does not have a negative impact on performance; it is nothing compared to the time of random number generation.
Thank you for this observation. It would really be beneficial not to do
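For illustration, the macro simplification mentioned above might look like this (generate_uniform_impl is a hypothetical kernel templated on the engine tag; the PR keeps the explicit switch instead):

```cpp
#include <stdexcept>

// Sketch, not the PR's code: the macro stamps out one case per engine tag,
// each forwarding to a kernel templated on the engine type.
#define RNG_DISPATCH_CASE(tag) \
    case engine_type::tag:     \
        return generate_uniform_impl<engine_type::tag>(queue, count, dst, deps);

template <typename Type>
sycl::event generate_uniform(sycl::queue& queue,
                             engine_type type,
                             std::int64_t count,
                             Type* dst,
                             const event_vector& deps) {
    switch (type) {
        RNG_DISPATCH_CASE(mt2203)
        RNG_DISPATCH_CASE(mcg59)
        RNG_DISPATCH_CASE(mt19937)
        RNG_DISPATCH_CASE(mrg32k3a)
        RNG_DISPATCH_CASE(philox4x32x10)
    }
    throw std::invalid_argument("unknown engine type");
}
#undef RNG_DISPATCH_CASE
```

As noted above, the switch itself is cheap next to the cost of generation, so this is purely a readability trade-off.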
LGTM
doc review
cpp/oneapi/dal/backend/primitives/rng/device_engine_collection.hpp (outdated thread, resolved)
CI and changes look good! Please add comments as discussed and for areas that will be improved in the future and address any remaining feedback as needed.
Description

Feature: RNG primitive refactoring

Summary:
This PR unifies the API of the oneDAL RNG primitive functions and includes various fixes and modifications to the RNG primitive.

Key changes:
- New generators have been added (mrg32k3a, philox4x32x10).
- Host and device engines have been refactored.

PR completeness and readability
Testing
Performance