Story-Adapter: A Training-free Iterative Framework for Long Story Visualization







Abstract

Story visualization, the task of generating coherent images based on a narrative, has seen significant advancements with the emergence of text-to-image models, particularly diffusion models. However, maintaining semantic consistency, generating high-quality fine-grained interactions, and ensuring computational feasibility remain challenging, especially in long story visualization (i.e., up to 100 frames). In this work, we propose a training-free and computationally efficient framework, termed Story-Adapter, to enhance generative capability for long stories. Specifically, we propose an iterative paradigm to refine each generated image, leveraging both the text prompt and all generated images from the previous iteration. Central to our framework is a training-free global reference cross-attention module, which aggregates all generated images from the previous iteration to preserve semantic consistency across the entire story, while minimizing computational costs via global embeddings. This iterative process progressively optimizes image generation by repeatedly incorporating text constraints, resulting in more precise and fine-grained interactions. Extensive experiments validate the superiority of Story-Adapter in improving both semantic consistency and generative capability for fine-grained interactions, particularly in long story scenarios.

Story-Adapter Architecture

Story-Adapter framework. Illustration of the proposed iterative paradigm, which consists of initialization, iterations in Story-Adapter, and the implementation of Global Reference Cross-Attention (GRCA). Story-Adapter first visualizes each image based only on the text prompt of the story and uses all results as reference images for the next round. In the iterative paradigm, Story-Adapter inserts GRCA into SD. For the i-th iteration of each image visualization, GRCA aggregates the information flow of all reference images during the denoising process through cross-attention. All results from this iteration are then used as reference images to guide the dynamic update of the story visualization in the next iteration.
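The iterative paradigm above can be summarized in a minimal sketch. This is not the released implementation: the function names (`global_reference_cross_attention`, `story_adapter`, `generate`) and the embedding shapes are hypothetical placeholders standing in for the SD denoising pipeline; only the control flow (text-only initialization, then rounds that condition every frame on all images from the previous round) and the cross-attention arithmetic follow the description.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_reference_cross_attention(query, ref_embeddings):
    """Aggregate information from ALL reference images via cross-attention.

    Hypothetical shapes: query is (n_query_tokens, dim); ref_embeddings is
    (n_frames, n_tokens_per_frame, dim), i.e. one compact global embedding
    per previously generated frame.
    """
    # Concatenate every frame's tokens into one key/value pool so each
    # query token can attend over the whole story at once.
    kv = ref_embeddings.reshape(-1, ref_embeddings.shape[-1])
    scale = np.sqrt(query.shape[-1])
    attn = softmax(query @ kv.T / scale, axis=-1)   # (n_query, n_frames * n_tokens)
    return attn @ kv                                 # (n_query, dim)

def story_adapter(prompts, generate, num_iters=3):
    """Iterative paradigm: round 0 uses text only; each later round
    regenerates every frame conditioned on all frames from the previous round.
    `generate(prompt, refs)` stands in for one SD denoising pass with GRCA."""
    images = [generate(p, refs=None) for p in prompts]      # initialization
    for _ in range(num_iters):
        refs = list(images)          # freeze the previous round as global references
        images = [generate(p, refs=refs) for p in prompts]  # dynamic update
    return images
```

Because every round reads references from the previous round rather than from frames generated earlier in the same pass, the update is parallel across frames and avoids the error accumulation of autoregressive schemes.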

Regular-length Story Visualization

A story of "Pigeon" visualized by our Story-Adapter
A story of "Dinosaur and Traveler" visualized by our Story-Adapter
A story of "Boy" visualized by our Story-Adapter
A story of "Pepper" visualized by our Story-Adapter
A story of "Girl" visualized by our Story-Adapter
A story of "Animal Rescuer" visualized by our Story-Adapter
A story of "City Monkey" visualized by our Story-Adapter
A story of "Old Man and Monkey" visualized by our Story-Adapter
A story of "The Boy's Journey" visualized by our Story-Adapter
A story of "A Day for a Girl" visualized by our Story-Adapter
A story of "Rain" visualized by our Story-Adapter
A story of "Fruit" visualized by our Story-Adapter

Long Story Visualization

A story of "Little Red Riding Hood" visualized by our Story-Adapter
A story of "Emperor and the Nightingale" visualized by our Story-Adapter
A story of "Robinson Crusoe" visualized by our Story-Adapter
A story of "Snowman" visualized by our Story-Adapter
A story of "Loyal Dog" visualized by our Story-Adapter
A story of "The Tortoise and the Hare" visualized by our Story-Adapter
A story of "Winnie the Pooh" visualized by our Story-Adapter
A story of "Pirate" visualized by our Story-Adapter
A story of "Lonely Me" visualized by our Story-Adapter
A story of "The Prince and the Princess" visualized by our Story-Adapter

Qualitative Comparison of Different Methods

Qualitative comparison of story visualization shows that AR-LDM and StoryGen generate coherent image sequences but degrade as story length grows, due to accumulated autoregressive errors. StoryDiffusion and Story-Adapter both perform well, though StoryDiffusion struggles with subject consistency and flawed ID images because of its high computational demands. Story-Adapter better meets the requirements for effective story visualization.

BibTeX

If you find our work helpful for your research, please consider citing it 📃


@misc{mao2024story_adapter,
  title={{Story-Adapter: A Training-free Iterative Framework for Long Story Visualization}},
  author={Mao, Jiawei and Huang, Xiaoke and Xie, Yunfei and Chang, Yuanqi and Hui, Mude and Xu, Bingjie and Zhou, Yuyin},
  year={2024},
  eprint={2410.06244},
  archivePrefix={arXiv},
}