Haoran MO

莫浩然

Creative Intelligence and Synergy Lab

Computational Media and Arts (CMA), Information Hub

The Hong Kong University of Science and Technology (Guangzhou)

Guangzhou, China

Email:

haoranmo (at) hkust-gz.edu.cn / mohaor (at) mail2.sysu.edu.cn

 

GitHub / Google Scholar / Resume

 

I am currently a postdoctoral researcher at the Creative Intelligence and Synergy Lab of The Hong Kong University of Science and Technology (Guangzhou), advised by Prof. Zeyu WANG. I received my Ph.D. in 2024 from the Intelligent and Multimedia Science Laboratory of Sun Yat-sen University (SYSU), co-supervised by Prof. Ruomei WANG and Prof. Chengying GAO.
I was very fortunate to collaborate with Prof. Changqing ZOU at Zhejiang University, Dr. Ruofei DU at Google Research, and Prof. Edgar Simo-Serra at Waseda University. In 2019, I had a wonderful research internship at the Simo-Serra Lab (Waseda University, Tokyo, Japan), working with Prof. Edgar Simo-Serra.

Research interests: Computer Graphics and Computer Vision, particularly sketch understanding and generation, line-drawing-based content generation (AIGC), and 2D animation.



News


Experience


The Hong Kong University of Science and Technology (Guangzhou)

Aug. 2024 - Now

 

Postdoctoral Researcher


Simo-Serra Lab., Waseda University (Tokyo, Japan)

May-July 2019

 

Research Intern, working with Prof. Edgar Simo-Serra.


Huawei

July-Sept. 2017

 

Intern, Software Engineer.


Education


Selected Publications  (☞ All Publications)

'#' indicates equal contribution. '*' indicates corresponding author.

Joint Stroke Tracing and Correspondence for 2D Animation

 

Haoran Mo, Chengying Gao* and Ruomei Wang

 

ACM Transactions on Graphics (Presented at SIGGRAPH 2024) (CCF-A)

 

Project Page Paper Supplementary Code Abstract Bibtex

 

To alleviate human labor in redrawing keyframes with ordered vector strokes for automatic inbetweening, we for the first time propose a joint stroke tracing and correspondence approach. Given consecutive raster keyframes along with a single vector image of the starting frame as guidance, the approach generates vector drawings for the remaining keyframes while ensuring one-to-one stroke correspondence. Our framework, trained on clean line drawings, generalizes to rough sketches, and the generated results can be imported into inbetweening systems to produce inbetween sequences. Hence, the method is compatible with the standard 2D animation workflow. An adaptive spatial transformation module (ASTM) is introduced to handle non-rigid motions and stroke distortion. We collect a dataset for training, with 10k+ pairs of raster frames and their vector drawings with stroke correspondence. Comprehensive validations on real clean and rough animated frames demonstrate the effectiveness of our method and its superiority to existing methods.

@article{mo2024joint,
  title   = {Joint Stroke Tracing and Correspondence for 2D Animation},
  author  = {Mo, Haoran and Gao, Chengying and Wang, Ruomei},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2024}
}
General Virtual Sketching Framework for Vector Line Art

 

Haoran Mo, Edgar Simo-Serra, Chengying Gao*, Changqing Zou and Ruomei Wang

 

ACM Transactions on Graphics (SIGGRAPH 2021, Journal track) (CCF-A)

 

Project Page Paper Supplementary Code Abstract Bibtex

 

Vector line art plays an important role in graphic design, however, it is tedious to manually create. We introduce a general framework to produce line drawings from a wide variety of images, by learning a mapping from raster image space to vector image space. Our approach is based on a recurrent neural network that draws the lines one by one. A differentiable rasterization module allows for training with only supervised raster data. We use a dynamic window around a virtual pen while drawing lines, implemented with a proposed aligned cropping and differentiable pasting modules. Furthermore, we develop a stroke regularization loss that encourages the model to use fewer and longer strokes to simplify the resulting vector image. Ablation studies and comparisons with existing methods corroborate the efficiency of our approach which is able to generate visually better results in less computation time, while generalizing better to a diversity of images and applications.

@article{mo2021virtualsketching,
  title   = {General Virtual Sketching Framework for Vector Line Art},
  author  = {Mo, Haoran and Simo-Serra, Edgar and Gao, Chengying and Zou, Changqing and Wang, Ruomei},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2021},
  volume  = {40},
  number  = {4},
  pages   = {51:1--51:14}
}
Language-based Colorization of Scene Sketches

 

Changqing Zou#, Haoran Mo#, Chengying Gao*, Ruofei Du and Hongbo Fu

 

ACM Transactions on Graphics (SIGGRAPH Asia 2019, Journal track) (CCF-A)

 

Project Page Paper Supplementary Code Slide Abstract Bibtex

 

Being natural, touchless, and fun-embracing, language-based inputs have been demonstrated effective for various tasks from image generation to literacy education for children. This paper for the first time presents a language-based system for interactive colorization of scene sketches, based on semantic comprehension. The proposed system is built upon deep neural networks trained on a large-scale repository of scene sketches and cartoon-style color images with text descriptions. Given a scene sketch, our system allows users, via language-based instructions, to interactively localize and colorize specific foreground object instances to meet various colorization requirements in a progressive way. We demonstrate the effectiveness of our approach via comprehensive experimental results including alternative studies, comparison with the state-of-the-art methods, and generalization user studies. Given the unique characteristics of language-based inputs, we envision a combination of our interface with a traditional scribble-based interface for a practical multimodal colorization system, benefiting various applications.

@article{zouSA2019sketchcolorization,
  title   = {Language-based Colorization of Scene Sketches},
  author  = {Zou, Changqing and Mo, Haoran and Gao, Chengying and Du, Ruofei and Fu, Hongbo},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2019},
  volume  = {38},
  number  = {6},
  pages   = {233:1--233:16}
}
Text-based Vector Sketch Editing with Image Editing Diffusion Prior

 

Haoran Mo, Xusheng Lin, Chengying Gao* and Ruomei Wang

 

IEEE International Conference on Multimedia & Expo (ICME 2024) (CCF-B)

 

Paper Supplementary Code Abstract Bibtex

 

We present a framework for text-based vector sketch editing to improve the efficiency of graphic design. The key idea behind the approach is to transfer the prior information from raster-level diffusion models, especially those from image editing methods, into the vector sketch-oriented task. The framework presents three editing modes and allows iterative editing. To meet the editing requirement of modifying the intended parts only while avoiding changing the other strokes, we introduce a stroke-level local editing scheme that automatically produces an editing mask reflecting locally editable regions and modifies strokes within the regions only. Comparisons with existing methods demonstrate the superiority of our approach.

@inproceedings{mo2024text,
  title={Text-based Vector Sketch Editing with Image Editing Diffusion Prior},
  author={Mo, Haoran and Lin, Xusheng and Gao, Chengying and Wang, Ruomei},
  booktitle={2024 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={1--6},
  year={2024},
  organization={IEEE}
}
Line Art Colorization Based on Explicit Region Segmentation

 

Ruizhi Cao, Haoran Mo and Chengying Gao*

 

Computer Graphics Forum (Pacific Graphics 2021) (CCF-B)

 

Paper Supplementary Code Abstract Bibtex

 

Automatic line art colorization plays an important role in anime and comic industry. While existing methods for line art colorization are able to generate plausible colorized results, they tend to suffer from the color bleeding issue. We introduce an explicit segmentation fusion mechanism to aid colorization frameworks in avoiding color bleeding artifacts. This mechanism is able to provide region segmentation information for the colorization process explicitly so that the colorization model can learn to avoid assigning the same color across regions with different semantics or inconsistent colors inside an individual region. The proposed mechanism is designed in a plug-and-play manner, so it can be applied to a diversity of line art colorization frameworks with various kinds of user guidances. We evaluate this mechanism in tag-based and reference-based line art colorization tasks by incorporating it into the state-of-the-art models. Comparisons with these existing models corroborate the effectiveness of our method which largely alleviates the color bleeding artifacts.

@inproceedings{cao2021line,
  title={Line Art Colorization Based on Explicit Region Segmentation},
  author={Cao, Ruizhi and Mo, Haoran and Gao, Chengying},
  booktitle={Computer Graphics Forum},
  volume={40},
  number={7},
  year={2021},
  organization={Wiley Online Library}
}
SketchyScene: Richly-Annotated Scene Sketches

 

Changqing Zou#, Qian Yu#, Ruofei Du, Haoran Mo, Yi-Zhe Song, Tao Xiang, Chengying Gao, Baoquan Chen* and Hao Zhang

 

European Conference on Computer Vision (ECCV 2018) (CCF-B)

 

Project Page Paper Poster Code Abstract Bibtex

 

We contribute the first large-scale dataset of scene sketches, SketchyScene, with the goal of advancing research on sketch understanding at both the object and scene level. The dataset is created through a novel and carefully designed crowdsourcing pipeline, enabling users to efficiently generate large quantities of realistic and diverse scene sketches. SketchyScene contains more than 29,000 scene-level sketches, 7,000+ pairs of scene templates and photos, and 11,000+ object sketches. All objects in the scene sketches have ground-truth semantic and instance masks. The dataset is also highly scalable and extensible, easily allowing augmenting and/or changing scene composition. We demonstrate the potential impact of SketchyScene by training new computational models for semantic segmentation of scene sketches and showing how the new dataset enables several applications including image retrieval, sketch colorization, editing, and captioning.

@inproceedings{zou2018sketchyscene,
  title={SketchyScene: Richly-Annotated Scene Sketches},
  author={Zou, Changqing and Yu, Qian and Du, Ruofei and Mo, Haoran and Song, Yi-Zhe and Xiang, Tao and Gao, Chengying and Chen, Baoquan and Zhang, Hao},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={421--436},
  year={2018}
}

Open-source Contributions


Awards


Academic Service


Media



Welcome to view my gallery. I am a native Cantonese speaker and an enthusiast of Cantonese pop music and Hong Kong films and TV series :)






