As noted in #74 (comment), it would be useful to include some tests that compare the performance of `weights=True` versus `weights=False`, in order to detect any regression in the performance of `DecodeNeurons` relative to full weights, and to serve as a basic benchmark for experimenting with variants and future improvements.
This could be made part of a more general research task: determining in which situations the choice makes more or less of a difference.
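As a starting point, such a regression check could be sketched with plain NumPy. Everything here is a stand-in: `ideal`, `out_weights`, and `out_decode` would come from actual `weights=True` / `weights=False` simulations rather than the synthetic signals used below, and the 3x tolerance factor is an arbitrary placeholder to be tuned against real runs:

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two signals."""
    return np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

rng = np.random.RandomState(0)
t = np.linspace(0, 1, 1000)
ideal = np.sin(2 * np.pi * t)                   # target signal
out_weights = ideal + 0.01 * rng.randn(len(t))  # stand-in for the weights=True output
out_decode = ideal + 0.02 * rng.randn(len(t))   # stand-in for the weights=False output

e_weights = rmse(out_weights, ideal)
e_decode = rmse(out_decode, ideal)

# Flag a regression if DecodeNeurons do much worse than full weights.
assert e_decode < 3 * e_weights, "weights=False regressed relative to weights=True"
```

The same RMSE-against-ideal comparison could then be repeated across a few representative networks to map out where the difference matters.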
Using the trick in #230 (comment), here's a comparison of `weights=True` versus `weights=False` on a 6D Legendre Memory Unit (LMU):
[plot: LMU simulation with `weights=True`]
[plot: LMU simulation with `weights=False`]
With all-to-all weights we get a fairly stable history of the input, but with `DecodeNeurons` things blow up systematically (I've found some variations that look better, but they are qualitatively similar). This is running on the actual hardware, on the `master` branch. I was seeing the same thing in my thesis (which also included the improvements in #132).
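For context on the 6D LMU used here, the ideal Legendre Delay Network (LDN) state-space matrices can be built as below. This is only a sketch: the function name and `theta` (the length of the delay window, in seconds) are my own, and the formulas follow the standard LDN derivation rather than anything specific to this repo:

```python
import numpy as np

def ldn_matrices(d, theta=1.0):
    """Return (A, B) for a d-dimensional LDN with window length theta.

    A[i, j] = (2i+1)/theta * (-1 if i < j else (-1)**(i-j+1))
    B[i]    = (2i+1)/theta * (-1)**i
    """
    A = np.zeros((d, d))
    B = np.zeros(d)
    for i in range(d):
        B[i] = (2 * i + 1) * (-1) ** i / theta
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1)) / theta
    return A, B

# The 6D system discussed above.
A, B = ldn_matrices(6)
```

The linear system dm/dt = Am + Bu is what the recurrent connection has to implement, so any error introduced by `DecodeNeurons` on that feedback loop gets integrated over the whole window, which is consistent with the slow systematic blow-up in the plots.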