Haoran MO


Intelligent and Multimedia Science Laboratory

School of Computer Science and Engineering

Sun Yat-sen University (SYSU)

Guangzhou, China


mohaor (at) mail2.sysu.edu.cn


GitHub Google Scholar Resume


I am currently a third-year Ph.D. student at the Intelligent and Multimedia Science Laboratory of Sun Yat-sen University (SYSU), co-supervised by Prof. Ruomei WANG and Prof. Chengying GAO. I have been fortunate to collaborate with Prof. Changqing ZOU at Zhejiang University and Prof. Edgar Simo-Serra at Waseda University, where I spent a wonderful research internship at Simo-Serra Lab. (Tokyo, Japan) in 2019.

Research interests: Computer Graphics and Computer Vision, particularly in sketch understanding and generation, line drawing-based content generation (AIGC), and 2D/3D animation.



Selected Publications  (☞ All Publications)

'#' indicates equal contribution. '*' indicates corresponding author.

General Virtual Sketching Framework for Vector Line Art


Haoran Mo, Edgar Simo-Serra, Chengying Gao*, Changqing Zou and Ruomei Wang


ACM Transactions on Graphics (SIGGRAPH 2021, Journal track) (CCF-A)


Project Page Paper Supplementary Code Abstract Bibtex


Vector line art plays an important role in graphic design; however, it is tedious to create manually. We introduce a general framework to produce line drawings from a wide variety of images, by learning a mapping from raster image space to vector image space. Our approach is based on a recurrent neural network that draws the lines one by one. A differentiable rasterization module allows for training with only supervised raster data. We use a dynamic window around a virtual pen while drawing lines, implemented with proposed aligned cropping and differentiable pasting modules. Furthermore, we develop a stroke regularization loss that encourages the model to use fewer and longer strokes to simplify the resulting vector image. Ablation studies and comparisons with existing methods corroborate the effectiveness of our approach, which generates visually better results in less computation time while generalizing better to a diversity of images and applications.
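The stroke regularization idea can be illustrated with a toy sketch (this is not the paper's actual loss, only the intuition behind it): on a binary pen-state sequence, penalizing pen lifts makes the same amount of drawing cheaper when it is done in fewer, longer strokes.

```python
import numpy as np

def stroke_regularization(pen_states, weight=0.1):
    """Toy regularizer on a binary pen-state sequence (1 = pen down/drawing,
    0 = pen lifted): each down->up transition ends one stroke, so penalizing
    those transitions favors drawings made of fewer, longer strokes."""
    p = np.asarray(pen_states, dtype=int)
    finished_strokes = np.sum((p[:-1] == 1) & (p[1:] == 0))
    return weight * float(finished_strokes)

# Same number of drawing steps (three), different stroke structure:
many_short = [1, 0, 1, 0, 1, 0]  # three one-step strokes
one_long   = [1, 1, 1, 0, 0, 0]  # one three-step stroke
assert stroke_regularization(one_long) < stroke_regularization(many_short)
```

In the actual framework this kind of penalty is differentiable and applied to predicted pen-state probabilities during training; the hard 0/1 version above only shows why the penalty prefers long strokes.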

@article{mo2021general,
  title   = {General Virtual Sketching Framework for Vector Line Art},
  author  = {Mo, Haoran and Simo-Serra, Edgar and Gao, Chengying and Zou, Changqing and Wang, Ruomei},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2021},
  volume  = {40},
  number  = {4},
  pages   = {51:1--51:14}
}
Language-based Colorization of Scene Sketches


Changqing Zou#, Haoran Mo#, Chengying Gao*, Ruofei Du and Hongbo Fu


ACM Transactions on Graphics (SIGGRAPH Asia 2019, Journal track) (CCF-A)


Project Page Paper Supplementary Code Slide Abstract Bibtex


Being natural, touchless, and fun-embracing, language-based inputs have been demonstrated effective for various tasks from image generation to literacy education for children. This paper for the first time presents a language-based system for interactive colorization of scene sketches, based on semantic comprehension. The proposed system is built upon deep neural networks trained on a large-scale repository of scene sketches and cartoon-style color images with text descriptions. Given a scene sketch, our system allows users, via language-based instructions, to interactively localize and colorize specific foreground object instances to meet various colorization requirements in a progressive way. We demonstrate the effectiveness of our approach via comprehensive experimental results including alternative studies, comparison with the state-of-the-art methods, and generalization user studies. Given the unique characteristics of language-based inputs, we envision a combination of our interface with a traditional scribble-based interface for a practical multimodal colorization system, benefiting various applications.
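The interaction loop can be caricatured in a few lines (purely hypothetical names and a hard-coded color table; the real system uses trained networks for both language-based instance localization and colorization):

```python
import numpy as np

# Hypothetical color vocabulary; the real system learns color semantics.
COLORS = {"red": (255, 0, 0), "blue": (0, 0, 255), "yellow": (255, 255, 0)}

def colorize_by_instruction(image, instance_masks, instruction):
    """Localize the object instance named in the instruction, then fill its
    mask with the requested color. Regions not mentioned are left untouched,
    which is what makes the colorization progressive."""
    for obj, mask in instance_masks.items():
        if obj in instruction:
            for color, rgb in COLORS.items():
                if color in instruction:
                    result = image.copy()
                    result[mask] = rgb
                    return result
    return image  # instruction matched no known instance or color

# A tiny 2x2 "scene" whose top-left pixel is a "bus" instance:
img = np.zeros((2, 2, 3), dtype=np.uint8)
masks = {"bus": np.array([[True, False], [False, False]])}
out = colorize_by_instruction(img, masks, "color the bus blue")
```

The toy keyword matching stands in for the semantic comprehension module; the point is only the flow: parse the instruction, select an instance mask, colorize inside it.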

@article{zou2019language,
  title   = {Language-based Colorization of Scene Sketches},
  author  = {Zou, Changqing and Mo, Haoran and Gao, Chengying and Du, Ruofei and Fu, Hongbo},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2019},
  volume  = {38},
  number  = {6},
  pages   = {233:1--233:16}
}
Unpaired Motion Style Transfer with Motion-oriented Projection Flow Network


Yue Huang, Haoran Mo, Xiao Liang and Chengying Gao*


IEEE International Conference on Multimedia & Expo (ICME 2022, Oral) (CCF-B)


Paper Abstract Bibtex


Existing motion style transfer methods trained with unpaired samples tend to generate motions with inconsistent content or an inconsistent number of frames compared with the source motion. Moreover, due to the limited training samples, these methods perform worse on unseen styles. In this paper, we propose a novel unpaired motion style transfer framework that generates complete stylized motions with consistent content. We introduce a motion-oriented projection flow network (M-PFN) designed for temporal motion data, which encodes the content and style motions into latent codes and decodes the stylized features produced by adaptive instance normalization (AdaIN) into stylized motions. The M-PFN contains dedicated operations and modules, e.g., a Transformer, to process the temporal information of motions, which helps to improve the continuity of the generated motions. Comparisons with state-of-the-art methods show that our method effectively transfers the style of the motions while retaining the complete content, and generalizes better to unseen style features.
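AdaIN itself is a standard operation: content features are whitened per channel, then re-scaled and re-shifted with the style's per-channel statistics. A minimal NumPy sketch over (channels, frames) motion features:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization over (channels, frames) features:
    remove the content's per-channel mean and standard deviation, then
    impose the style's per-channel mean and standard deviation."""
    c_mean = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True) + eps
    s_mean = style.mean(axis=1, keepdims=True)
    s_std = style.std(axis=1, keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```

The output carries the content's temporal structure but matches the style's channel statistics; in M-PFN this stylized feature is then decoded back into a motion sequence.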

@inproceedings{huang2022unpaired,
  title={Unpaired Motion Style Transfer with Motion-oriented Projection Flow Network},
  author={Huang, Yue and Mo, Haoran and Liang, Xiao and Gao, Chengying},
  booktitle={2022 IEEE International Conference on Multimedia and Expo (ICME)},
  year={2022}
}
Line Art Colorization Based on Explicit Region Segmentation


Ruizhi Cao, Haoran Mo and Chengying Gao*


Computer Graphics Forum (Pacific Graphics 2021) (CCF-B)


Paper Supplementary Code Abstract Bibtex


Automatic line art colorization plays an important role in the anime and comic industries. While existing methods for line art colorization are able to generate plausible colorized results, they tend to suffer from the color bleeding issue. We introduce an explicit segmentation fusion mechanism to aid colorization frameworks in avoiding color bleeding artifacts. This mechanism provides region segmentation information for the colorization process explicitly, so that the colorization model can learn to avoid assigning the same color across regions with different semantics or inconsistent colors inside an individual region. The proposed mechanism is designed in a plug-and-play manner, so it can be applied to a diversity of line art colorization frameworks with various kinds of user guidance. We evaluate this mechanism in tag-based and reference-based line art colorization tasks by incorporating it into state-of-the-art models. Comparisons with these existing models corroborate the effectiveness of our method, which largely alleviates the color bleeding artifacts.
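The property being enforced can be illustrated post hoc (the paper fuses segmentation information inside the network during colorization; this toy only shows the goal): if every pixel takes the color of its segmented region, color cannot bleed across a region boundary.

```python
import numpy as np

def fuse_segmentation(colorized, segments):
    """Toy illustration of segmentation-consistent color: replace each
    pixel's color with the mean color of its segmentation region, so the
    result is uniform within every region and cannot bleed across
    boundaries. `colorized` is (H, W, 3), `segments` is (H, W) labels."""
    out = np.zeros_like(colorized, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        out[mask] = colorized[mask].mean(axis=0)
    return out
```

A learned colorizer with segmentation fusion achieves this softly during generation rather than by hard averaging afterwards.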

@article{cao2021line,
  title={Line Art Colorization Based on Explicit Region Segmentation},
  author={Cao, Ruizhi and Mo, Haoran and Gao, Chengying},
  journal={Computer Graphics Forum},
  year={2021},
  publisher={Wiley Online Library}
}

Visiting Experience


Simo-Serra Lab., Waseda University (Tokyo, Japan)     May-July 2019


Research Intern, advised by Prof. Edgar Simo-Serra.



"General Virtual Sketching Framework for Vector Line Art" at SIGGRAPH 2021 (virtual).


Aug. 2021


"General Virtual Sketching Framework for Vector Line Art" at CAD/Graphics 2021 in Xi'an, China.


May 2021


"Language-based Colorization of Scene Sketches" at SIGGRAPH Asia 2019 in Brisbane, Australia.


Nov. 2019




Welcome to view my gallery. I am a native Cantonese speaker and an enthusiast of Cantonese pop music and Hong Kong films and TV series :)
