17 Sep 2019

【Original Paper】StyleGAN-Wearing

Posted by GuoWY in Admin General
Generating High-Resolution Fashion Model Images Wearing Custom Outfits

Visualizing an outfit is an essential part of shopping for clothes. Due to the combinatorial aspect of combining fashion articles, the available images are limited to a predetermined set of outfits. In this paper, we broaden these visualizations by generating high-resolution images of fashion models wearing a custom outfit under an input body pose. We show that our approach can not only transfer the style and the pose of one generated outfit to another, but also create realistic images of human bodies and garments.

22 Sep 2019

【Original Paper】StyleGAN-Embedder

Posted by GuoWY in Admin General
Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?

We propose an efficient algorithm to embed a given image into the latent space of StyleGAN. This embedding enables semantic image editing operations that can be applied to existing photographs. Taking the StyleGAN trained on the FFHQ dataset as an example, we show results for image morphing, style transfer, and expression transfer. Studying the results of the embedding algorithm provides valuable insights into the structure of the StyleGAN latent space. We propose a set of experiments to test what class of images can be embedded, how they are embedded, what latent space is suitable for embedding, and if the embedding is semantically meaningful.
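The embedding is found by gradient descent on the latent code itself: starting from an initial code, the algorithm minimizes a reconstruction loss between the generator's output and the target image. A toy NumPy sketch of that optimization loop, with a fixed random linear map standing in for StyleGAN's synthesis network (the real method backpropagates through the trained network and combines a VGG perceptual loss with a pixel loss — everything below is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generator: a fixed linear map from an
# 8-dim latent space to a 32-dim "image" space.
G = rng.normal(size=(32, 8))
generate = lambda w: G @ w

# An "image" we want to embed (here, one that lies in the generator's range).
target = generate(rng.normal(size=8))

# Gradient descent on the latent code w to minimize ||generate(w) - target||^2,
# mirroring the optimization-based embedding idea.
w = np.zeros(8)
lr = 0.005
for _ in range(500):
    residual = generate(w) - target
    grad = 2 * G.T @ residual   # analytic gradient of the squared loss
    w -= lr * grad

loss = np.sum((generate(w) - target) ** 2)
```

After the loop, `generate(w)` reconstructs the target closely; in the paper this recovered latent code is what makes morphing and style transfer on real photographs possible.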

30 Sep 2019

【Original Paper】Liquid Warping GAN

Posted by GuoWY in Admin General
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis

We tackle human motion imitation, appearance transfer, and novel view synthesis within a unified framework, which means that the model, once trained, can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints (pose) to estimate the human body structure. However, they only express position information and cannot characterize the personalized shape of the individual person or model limb rotations. In this paper, we propose to use a 3D body mesh recovery module to disentangle pose and shape, which can not only model the joint location and rotation but also characterize the personalized body shape. To preserve the source information, such as texture, style, color, and face identity, we propose a Liquid Warping GAN with a Liquid Warping Block (LWB) that propagates the source information in both image and feature spaces, and synthesizes an image with respect to the reference.
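The core of the LWB is warping source feature maps toward the reference so that texture and identity are preserved through synthesis. A minimal NumPy illustration of dense feature warping with a flow field (nearest-neighbor sampling; the function name and 2-D setup are hypothetical stand-ins, not the paper's implementation, which computes the transformation flow from the recovered 3D body meshes and warps inside the GAN):

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a feature map by a dense flow field.

    feat: (H, W, C) source features.
    flow: (H, W, 2) per-output-pixel (dy, dx) offsets into the source.
    Uses nearest-neighbor sampling for brevity (bilinear in practice).
    """
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return feat[src_y, src_x]

# Example: shift all features one pixel to the right, i.e. each output
# pixel samples from one pixel to its left in the source.
feat = np.arange(16.0).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1.0
warped = warp_features(feat, flow)
```

Warping features rather than only pixels is what lets the block carry source information at multiple layers of the generator instead of relying on the output image alone.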

03 Nov 2019

【Original Paper】Few-shot Vid-to-Vid

Posted by GuoWY in Admin General
Few-shot Video-to-Video Synthesis

Video-to-video synthesis (vid2vid) aims at converting an input semantic video, such as videos of human poses or segmentation masks, to an output photorealistic video. While the state-of-the-art of vid2vid has advanced significantly, existing approaches share two major limitations. First, they are data-hungry: numerous images of a target human subject or a scene are required for training. Second, a learned model has limited generalization capability: a pose-to-human vid2vid model can only synthesize poses of the single person in the training set and does not generalize to other humans not in the training set. To address these limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time.
