PBC:Image scaling/How it works

Comparison of scaling methods

Original photo: 600 × 863 (reference) | Upscaled 2x: 1,200 × 1,726 | Upscaled 4x: 2,400 × 3,452 | Upscaled 6x: 3,600 × 5,178
Algorithm and description: Deep convolutional neural networks using perceptual loss. Enhanced SRGAN (ESRGAN) was developed on the basis of the super-resolution generative adversarial network (SRGAN) method and is an incremental refinement of the same generative adversarial network architecture. Both methods rely on a perceptual loss function to evaluate training iterations; a sketch of such a loss follows the table.

Original photo: 304 × 443 (reference) | Upscaled 2x: 608 × 886 | Upscaled 4x: 1,216 × 1,772 | Upscaled 6x: 1,824 × 2,658
Algorithm and description: Deep convolutional neural networks. Using machine learning, convincing details are generated as best guesses by learning common patterns from a training data set. The upscaled result is sometimes described as a hallucination, because the information introduced may not correspond to the content of the source. Enhanced deep residual network (EDSR) methods have been developed by optimizing the conventional residual neural network architecture; a sketch of such a residual block follows the table. Programs that use this method include waifu2x, Imglarger and Neural Enhance.
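
The perceptual loss mentioned in the first row compares deep feature maps rather than raw pixels. Below is a minimal sketch of such a loss, assuming PyTorch and a recent torchvision are available; the chosen VGG layer and the L1 criterion are illustrative assumptions, not the exact settings used by SRGAN or ESRGAN.

```python
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    def __init__(self, feature_layer=35):
        super().__init__()
        # Load a fixed, pretrained VGG-19 to act as the loss network;
        # it is frozen and never trained.
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
        self.features = nn.Sequential(
            *list(vgg.features.children())[:feature_layer]
        ).eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, sr, hr):
        # Distance between feature maps of the super-resolved image (sr)
        # and the ground-truth high-resolution image (hr).
        return self.criterion(self.features(sr), self.features(hr))

# Illustrative usage inside a training step (generator, lr_batch and
# hr_batch are hypothetical names):
# loss = PerceptualLoss()(generator(lr_batch), hr_batch)
```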
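
For the second row, EDSR's central change to the conventional residual block is removing batch normalization and scaling the residual branch before the skip connection. The following is a minimal sketch of such a block, again assuming PyTorch; the channel count and scaling factor are illustrative defaults rather than the published configuration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """EDSR-style residual block: conv -> ReLU -> conv, with no batch norm."""

    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        # Scale the residual branch and add it back onto the input.
        return x + self.body(x) * self.res_scale

# Stacking many such blocks and finishing with an upsampling stage
# (e.g. sub-pixel convolution / PixelShuffle) gives the overall network layout.
```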