Partition Speeds Up Learning Implicit Neural Representations Based on Exponential-Increase Hypothesis

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2024, pp. 5451-5460
Issue Date:
2024-01-15
Filename: 1722431.pdf (Published version, Adobe PDF, 10.75 MB)
Implicit neural representations (INRs) aim to learn a continuous function, i.e., a neural network, that represents an image, where the input and output of the function are pixel coordinates and RGB/gray values, respectively. However, images tend to consist of many objects whose colors are not perfectly consistent, so an image is in fact a discontinuous piecewise function that cannot be well estimated by a continuous one. In this paper, we empirically show that if a neural network is forced to fit a discontinuous piecewise function down to a fixed small error, the time cost increases exponentially with the number of boundaries in the spatial domain of the target signal. We name this phenomenon the exponential-increase hypothesis. Under this hypothesis, learning INRs for images with many objects converges very slowly. To address this issue, we first prove that partitioning a complex signal into several sub-regions and fitting it with piecewise INRs can significantly speed up convergence. Based on this fact, we introduce a simple partition mechanism to boost the performance of two INR methods for image reconstruction: one for learning INRs and the other for learning to learn INRs. In both cases, we partition an image into different sub-regions and dedicate a smaller network to each part. In addition, we propose two partition rules, based on regular grids and semantic segmentation maps, respectively. Extensive experiments validate the effectiveness of the proposed partitioning methods for learning an INR for a single image (the ordinary learning framework) and for the learning-to-learn framework. Code is released here.
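To make the regular-grid partition rule concrete, below is a minimal sketch of partitioned INR fitting, assuming a PyTorch setup. The names SmallINR and fit_partitioned_inr are illustrative and not taken from the paper; the authors' actual architecture and training schedule (e.g., activation functions, learning-to-learn initialization) may differ, and their released code should be treated as the reference implementation.

```python
# Hypothetical sketch: fit one small MLP per cell of a regular grid partition.
# Not the paper's implementation; assumes standard PyTorch APIs only.
import torch
import torch.nn as nn


class SmallINR(nn.Module):
    """A small MLP mapping 2-D pixel coordinates to RGB values."""

    def __init__(self, hidden=64, depth=3):
        super().__init__()
        layers, in_dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(in_dim, 3), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


def fit_partitioned_inr(image, grid=4, steps=500, lr=1e-3):
    """Fit one SmallINR per cell of a regular grid partition of `image`.

    image: float tensor of shape (H, W, 3) with values in [0, 1].
    Returns the list of per-cell models in row-major order.
    """
    H, W, _ = image.shape
    ys = torch.linspace(0, 1, H)
    xs = torch.linspace(0, 1, W)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([xx, yy], dim=-1)  # (H, W, 2) normalized coordinates

    models = []
    for gy in range(grid):
        for gx in range(grid):
            # Pixel bounds of this grid cell.
            y0, y1 = gy * H // grid, (gy + 1) * H // grid
            x0, x1 = gx * W // grid, (gx + 1) * W // grid
            c = coords[y0:y1, x0:x1].reshape(-1, 2)
            t = image[y0:y1, x0:x1].reshape(-1, 3)

            # Each sub-region gets its own small network, so each network
            # only has to fit the (fewer) discontinuities inside its cell.
            model = SmallINR()
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                loss = nn.functional.mse_loss(model(c), t)
                loss.backward()
                opt.step()
            models.append(model)
    return models
```

The semantic-segmentation partition rule would follow the same pattern, except that the per-model pixel sets come from a segmentation map rather than from rectangular grid cells.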