Abstract
This study introduces a new model, the Feature Parallelism Model (FPM), and compares it to deep learning models along the dimension of depth, i.e., the number of layers that make up a machine learning model. In the case of FPM, the layers are arranged along the horizontal axis. FPM is inspired by the human brain and follows some of the organizing principles that underlie the human visual system. We review the standard practice in deep learning, which is to opt for the deepest model that the available computational resources allow, up to hundreds of layers, in pursuit of better accuracy. We implemented FPM with 5, 6, 7, and 8 layers and measured accuracy and training time for each configuration. We show that FPM needs far less depth: only six layers optimize both its accuracy and its training time. Moreover, in a previous study we proposed the model and showed that FPM performs as well as deep learning models in terms of accuracy while using fewer computational resources, as evidenced by a 21% reduction in training time.