gcm.arrow_strength providing different ranking #1130
I am using the arrow_strength function to identify the top nodes explaining variation in a target node (Growth). There are ~40 nodes. After sorting the strengths, I get different rankings between runs: a node ranked 10th in one iteration can move to 30th place in another (or vice versa), using the same causal graph and data. Is this behavior expected? Does it depend on the causal graph structure?

Version information:
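For illustration, a minimal sketch of the kind of ranking described above. The graph, data, and variable names are placeholders rather than details from the original report; the real graph would have ~40 nodes.

```python
import networkx as nx
import numpy as np
import pandas as pd
from dowhy import gcm

# Toy stand-in for the real dataset: two parents of "Growth" instead of ~40 nodes.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=1000), rng.normal(size=1000)
data = pd.DataFrame({"X1": X1, "X2": X2, "Growth": 2 * X1 + 0.5 * X2 + rng.normal(size=1000)})

# Build and fit the structural causal model on the assumed graph.
causal_model = gcm.StructuralCausalModel(nx.DiGraph([("X1", "Growth"), ("X2", "Growth")]))
gcm.auto.assign_causal_mechanisms(causal_model, data)
gcm.fit(causal_model, data)

# arrow_strength returns {(parent, target): strength} for all edges into the target.
strengths = gcm.arrow_strength(causal_model, target_node="Growth")

# Rank the parents of the target by estimated strength, strongest first.
ranking = sorted(strengths.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```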
The arrow strength estimation involves sampling, which leads to variations between runs. You can reduce this by adjusting some of its parameters. Generally, if the rankings change that much between runs, it suggests the connections are either roughly equally strong or too weak overall (or the model simply isn't capturing them accurately enough). What is the range of the values? You can also take a look at estimating confidence intervals; they might provide better insights.
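Continuing the sketch above (reusing the placeholder causal_model and data), this is the bootstrap pattern from DoWhy's confidence interval utilities applied to arrow strength:

```python
from dowhy import gcm

# Refit the model on bootstrapped training data and recompute arrow strengths
# repeatedly, returning median strengths plus confidence intervals instead of
# a single noisy point estimate.
median_strengths, intervals = gcm.confidence_intervals(
    gcm.fit_and_compute(gcm.arrow_strength,
                        causal_model,
                        bootstrap_training_data=data,
                        target_node="Growth"),
    num_bootstrap_resamples=20)
```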
Thanks for sharing the link; this is helpful.
This issue is stale because it has been open for 14 days with no activity.
Sorry for the late reply!
You can take a look at https://github.com/py-why/causal-learn, a package for inferring the causal graph from data.
You could also try adjusting the quality parameter in the auto assignment function.
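As a sketch of that last suggestion: the auto assignment function accepts a quality argument. The specific value shown below is an assumption, since the comment does not show which setting was recommended.

```python
from dowhy import gcm

# Re-assign mechanisms with a higher auto-assignment quality and refit.
# AssignmentQuality.BETTER evaluates more candidate models than the default
# (GOOD), which can help when connections are weak, at the cost of runtime.
gcm.auto.assign_causal_mechanisms(causal_model, data,
                                  quality=gcm.auto.AssignmentQuality.BETTER)
gcm.fit(causal_model, data)
```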
This issue is stale because it has been open for 14 days with no activity.
This issue was closed because it has been inactive for 7 days since being marked as stale. |