Example: MVCNN with ModelNet40 (12 views)
Overall
*Implemented with 2 typical backbone CNNs; both SVCNN & MVCNN were trained once each.
| MVCNN | Model | All | cover 1 | cover 2 | cover 3 | cover 4 | cover 5 | cover 6 | cover 7 | cover 8 | cover 9 | cover 10 | cover 11 | cover 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 12 views | ResNet18 | 94.25 | 93.4 (-0.85) | 93.8 (-0.45) | 93.64 (-0.61) | 93.52 (-0.73) | 93.52 (-0.73) | 93.76 (-0.49) | 93.52 (-0.73) | 93.64 (-0.61) | 93.84 (-0.41) | 93.4 (-0.85) | 93.6 (-0.65) | 93.76 (-0.49) |
| 12 views | vgg11 | 91.29 | 91.41 (+0.12) | 91.17 (-0.12) | 91.05 (-0.24) | 91.09 (-0.2) | 91.45 (+0.16) | 91.45 (+0.16) | 91.0 (-0.29) | 91.21 (-0.08) | 91.05 (-0.24) | 90.96 (-0.33) | 91.13 (-0.16) | 91.29 (+0) |
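The per-view reading of the table can be reproduced with a small sketch. The numbers below are copied from the ResNet18 row above; the variable names are illustrative, not from the original code.

```python
# Accuracy without any cover, and with each view covered in turn
# (values from the ResNet18 / 12-views row of the table)
baseline = 94.25
covered = {1: 93.40, 2: 93.80, 3: 93.64, 4: 93.52, 5: 93.52, 6: 93.76,
           7: 93.52, 8: 93.64, 9: 93.84, 10: 93.40, 11: 93.60, 12: 93.76}

# Delta per view: how much accuracy drops when that view is covered
deltas = {v: round(acc - baseline, 2) for v, acc in covered.items()}

# The most important view is the one whose cover causes the largest drop;
# the least important view is the one whose cover changes accuracy the least
most_important = min(deltas, key=deltas.get)
least_important = max(deltas, key=deltas.get)
```

Here `most_important` resolves to View-1 (tied with View-10 at -0.85) and `least_important` to View-9 (-0.41), matching the observations below.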
- Roughly, View-1 & View-10 contribute most to accuracy: covering either one causes the largest drop (-0.85). View-9 is the opposite, with the smallest drop (-0.41) when covered.
- Also, with vgg11 as the backbone on the same dataset, the results change slightly: covering View-10 still causes the largest drop (-0.33), while covering View-5 or View-6 actually raises accuracy above the baseline (+0.16).
- Remark: this is just an example to define the basic procedure for examining view importance: apply a pure-color COVER to a specific view, generate a special dataset from it, and compare the resulting accuracies.
- Pay attention: this approach does not consider the view WEIGHTS, so it is quite simple and crude. The other methods all take weights into account.
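The covering step described above can be sketched as follows. This is a minimal assumption-laden sketch: the function name `cover_view`, the `(B, V, C, H, W)` batch layout, and the fill value are all illustrative, not taken from the original code, which may shape its multi-view batches differently.

```python
import numpy as np

def cover_view(batch, view_idx, color=0.5):
    """Replace one of the V views in a multi-view batch with a pure-color image.

    batch:    array of shape (B, V, C, H, W) -- B samples, V rendered views each
    view_idx: 0-based index of the view to cover
    color:    fill value for the solid-color cover (assumed mid-gray here)
    """
    covered = batch.copy()          # leave the original dataset untouched
    covered[:, view_idx] = color    # overwrite the chosen view for every sample
    return covered

# toy usage: 2 samples, 12 views, 3x32x32 images (small sizes for illustration)
views = np.random.rand(2, 12, 3, 32, 32).astype(np.float32)
masked = cover_view(views, view_idx=9)  # cover "View-10" (0-based index 9)
```

Evaluating the model once per covered view, and comparing each accuracy against the uncovered baseline, yields the deltas shown in the table.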