Face image inpainting based on texture and structure interaction
ZHOU Zunfu, ZHANG Qian, SHI Jiliang, YUE Shiqin
Journal of Shandong University (Engineering Science). 2025, 55(4): 18-28. doi:10.6040/j.issn.1672-3961.0.2024.047
Abstract
To address the loss of contextual semantic information when learning-based face image inpainting methods extract deep features, a generator with an efficient normalized attention mechanism was proposed, which extracted deep features from face images more effectively and better aggregated low-level and high-level features across multiple scales. To enhance the consistency of the generated images, a bi-level gated feature fusion module with a residual main-path transformation was introduced; this module further fused decoded texture and structure information and incorporated an enhanced contextual feature aggregation module, in which an improved prompt generation block enabled prompt parameters to interact across features at multiple scales, guiding the dynamic adjustment of the inpainting network to generate realistic and plausible face images. Experimental results on the CelebA-HQ dataset showed that the proposed method achieved 37.74 dB peak signal-to-noise ratio (PSNR), 0.9830 structural similarity (SSIM), 0.24% mean absolute error (MAE), and 1.489 Fréchet inception distance (FID). On the LFW dataset, the PSNR, SSIM, MAE, and FID of the proposed method reached 39.19 dB, 0.9877, 0.21%, and 3.555, respectively. Compared with five other mainstream methods, the proposed method achieved highly competitive results. Qualitative and quantitative experiments demonstrated that the method could effectively restore corrupted facial structure and texture information.
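For reference, the PSNR and MAE figures quoted in the abstract follow the standard definitions over the image's dynamic range. The sketch below is an illustrative NumPy implementation of those two metrics, not the paper's evaluation code; the toy images and the convention of reporting MAE as a percentage of the 8-bit range are assumptions for demonstration.

```python
import numpy as np

def psnr(clean, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    diff = clean.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def mae_percent(clean, restored, max_val=255.0):
    """Mean absolute error as a percentage of the dynamic range; lower is better."""
    err = np.mean(np.abs(clean.astype(np.float64) - restored.astype(np.float64)))
    return 100.0 * err / max_val

# Toy example: an 8-bit "image" and a slightly perturbed copy standing in
# for a ground-truth face and an inpainted result.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64, 3))
restored = np.clip(clean + rng.integers(-2, 3, size=clean.shape), 0, 255)

print(f"PSNR: {psnr(clean, restored):.2f} dB")
print(f"MAE:  {mae_percent(clean, restored):.2f}%")
```

SSIM and FID are more involved (windowed luminance/contrast/structure statistics, and Inception-feature distribution distances, respectively) and are typically taken from libraries such as scikit-image and a pretrained Inception network rather than reimplemented.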