Hi @wenwei202,
After finishing the three steps for ResNet-20 (baseline, SSL, and fine-tuning), I am having trouble evaluating the performance of the fine-tuned models.
I changed the conv mode to LOWERED_CSRMM but got the error "Not Implemented Yet". I have also tried the Python script cifar10_classifier.py, but it did not work either.
How can I evaluate the inference performance of the sparsified models?
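For reference, this is roughly how I enable the sparse scheme in the deploy prototxt; the conv_mode field is how I understand it from the fork's caffe.proto, and the layer names and parameters below are just placeholders, so please correct me if it should be set differently:

layer {
  name: "conv2"              # placeholder; I edit every Convolution layer the same way
  type: "Convolution"
  bottom: "pool1"            # placeholder bottom/top names
  top: "conv2"
  convolution_param {
    num_output: 16
    kernel_size: 3
    pad: 1
    stride: 1
    conv_mode: LOWERED_CSRMM   # the sparse-matrix scheme; LOWERED_CCNMM is the other mode I tried
  }
}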
Hi @wenwei202,
Since I cannot measure the performance of the sparsified convolutions myself, it would be very nice of you to share the sparse-scheme timing, something similar to:
I0907 14:37:47.134873 26836 base_conv_layer.cpp:651] conv2 group 0: 320 us (Dense Scheme Timing)
There seem to be no performance results for the sparse scheme in your code documentation.
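For what it's worth, I have been collecting per-layer numbers like the one above with the caffe time tool, roughly like this (the model and weights file names are just placeholders for my deploy prototxt and fine-tuned caffemodel):

./build/tools/caffe time \
    --model=resnet20_deploy.prototxt \
    --weights=resnet20_ssl_finetuned.caffemodel \
    --iterations=50

So a pointer to whatever flags or build options are needed to get the corresponding sparse-scheme timing would be much appreciated.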
Thanks a lot!
Hi @wenwei202,
FYI, here is the error info for both conv modes:
CSRMM:
I0727 12:35:50.174033 1970 base_conv_layer.cpp:17] layer conv1 has sparsity of 0.0578704
I0727 12:35:50.174360 1970 base_conv_layer.cpp:29] ConvolutionParameter_ConvMode_LOWERED_CSRMM
F0727 12:35:50.174373 1970 math_functions.cpp:411] Not Implemented Yet
*** Check failure stack trace: ***
@ 0x7fb2ad0255cd google::LogMessage::Fail()
@ 0x7fb2ad027433 google::LogMessage::SendToLog()
@ 0x7fb2ad02515b google::LogMessage::Flush()
@ 0x7fb2ad027e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fb2ad6ca9d0 caffe::caffe_cpu_sparse_dense2csr<>()
@ 0x7fb2ad7fe3c9 caffe::BaseConvolutionLayer<>::WeightAlign()
@ 0x7fb2ad69308b caffe::Net<>::CopyTrainedLayersFrom()
@ 0x7fb2ad69d5f5 caffe::Net<>::CopyTrainedLayersFromBinaryProto()
@ 0x7fb2ad69d68e caffe::Net<>::CopyTrainedLayersFrom()
@ 0x40c222 time()
@ 0x407520 main
@ 0x7fb2abf95830 __libc_start_main
@ 0x407d49 _start
@ (nil) (unknown)
CCNMM:
I0727 11:48:19.621824 24561 base_conv_layer.cpp:17] layer res_grp1_1_conv1 has sparsity of 1
I0727 11:48:19.622687 24561 base_conv_layer.cpp:61] ConvolutionParameter_ConvMode_LOWERED_CCNMM
I0727 11:48:19.622701 24561 base_conv_layer.cpp:80] concatenating weight matrix
I0727 11:48:19.622706 24561 base_conv_layer.cpp:88] res_grp1_1_conv1 left_cols=0 left_rows=0
I0727 11:48:19.622711 24561 base_conv_layer.cpp:91] squeezing weight matrix
I0727 11:48:19.622715 24561 base_conv_layer.cpp:102] res_grp1_1_conv1 squeezing to 0x0
F0727 11:48:19.622720 24561 blob.cpp:131] Check failed: data_
*** Check failure stack trace: ***
@ 0x7f18331595cd google::LogMessage::Fail()
@ 0x7f183315b433 google::LogMessage::SendToLog()
@ 0x7f183315915b google::LogMessage::Flush()
@ 0x7f183315be1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f18337ea41b caffe::Blob<>::mutable_cpu_data()
@ 0x7f18339330e5 caffe::BaseConvolutionLayer<>::WeightAlign()
@ 0x7f18337c709b caffe::Net<>::CopyTrainedLayersFrom()
@ 0x7f18337d1605 caffe::Net<>::CopyTrainedLayersFromBinaryProto()
@ 0x7f18337d169e caffe::Net<>::CopyTrainedLayersFrom()
@ 0x40c222 time()
@ 0x407520 main
@ 0x7f18320c9830 __libc_start_main
@ 0x407d49 _start
@ (nil) (unknown)
Best Regards,
Leo
Your system configuration
Operating system: Ubuntu 16.04
Compiler:
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 5
BLAS: MKL
Python or MATLAB version (for pycaffe and matcaffe respectively):