Serial analytic resource (journal article)


© 1995–2012 IEEE. In this article we introduce a differentiable rendering module that allows neural networks to efficiently process 3D data. The module is composed of continuous, piecewise-differentiable functions that define a sensor array of cells embedded in 3D space. It is learnable and can be easily integrated into neural networks, allowing data rendering to be optimized for specific learning tasks with gradient-based methods in an end-to-end fashion. Essentially, the module's sensor cells are allowed to transform independently, locally focusing on and sensing different parts of the 3D data. Through this optimization process, the cells learn to focus on important parts of the data, bypassing occlusions, clutter, and noise. Since the sensor cells originally lie on a grid, this amounts to a highly non-linear rendering of the scene into a 2D image. Our module performs especially well in the presence of clutter and occlusions, and also handles non-linear deformations, improving classification accuracy through proper rendering of the data. In our experiments we apply the module to various learning tasks and demonstrate that it enables efficient classification, localization, and segmentation on 2D/3D cluttered and non-cluttered data.
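The abstract describes the module only at a high level: sensor cells start on a regular 2D grid embedded in 3D space, may transform independently during training, and sample the 3D data so that the output is a 2D image for the downstream network. The sketch below is one possible way to wire up such a learnable sensor grid; it is not the authors' implementation, and the class name, the voxel-volume input format, and the use of trilinear sampling via torch.nn.functional.grid_sample are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSensorRenderer(nn.Module):
    """Hypothetical sketch: a 2D grid of sensor cells with learnable 3D offsets."""

    def __init__(self, out_h=32, out_w=32):
        super().__init__()
        # Initial cell positions: a flat out_h x out_w grid at depth z = 0,
        # expressed in the normalized [-1, 1] coordinates used by grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, out_h),
            torch.linspace(-1.0, 1.0, out_w),
            indexing="ij",
        )
        base = torch.stack([xs, ys, torch.zeros_like(xs)], dim=-1)  # (H, W, 3)
        self.register_buffer("base_grid", base)
        # Per-cell 3D offsets are the learnable parameters: each cell can move
        # independently to focus on informative, unoccluded parts of the data.
        self.offsets = nn.Parameter(torch.zeros_like(base))

    def forward(self, volume):
        # volume: (N, C, D, H, W) voxelized 3D data.
        n = volume.shape[0]
        cells = (self.base_grid + self.offsets).clamp(-1.0, 1.0)   # (H, W, 3)
        # grid_sample on 5D input expects a (N, D_out, H_out, W_out, 3) grid;
        # D_out = 1 collapses the sampled volume into a single 2D image.
        grid = cells.unsqueeze(0).unsqueeze(1).expand(n, 1, -1, -1, -1)
        img = F.grid_sample(volume, grid, mode="bilinear", align_corners=True)
        return img.squeeze(2)                                       # (N, C, H, W)


# Toy usage: render a random voxel volume into a 2D image that a standard
# 2D CNN could consume; gradients flow back into the cell offsets.
renderer = LearnableSensorRenderer(out_h=32, out_w=32)
voxels = torch.rand(2, 1, 16, 64, 64)
image = renderer(voxels)        # shape (2, 1, 32, 32)
```

In a full pipeline the rendered image would feed a standard 2D network and the cell offsets would be trained jointly with the network weights, matching the end-to-end, gradient-based optimization described in the abstract.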


Digitized article