<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://jasperchennn.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://jasperchennn.github.io/" rel="alternate" type="text/html" /><updated>2026-03-25T15:13:22+00:00</updated><id>https://jasperchennn.github.io/feed.xml</id><title type="html">Jasper (Jintao Chen)</title><subtitle>Jasper (Jintao Chen)&apos;s academic portfolio</subtitle><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><entry xml:lang="zh"><title type="html">[CVPR2026]AstraNav-Memory: Contexts Compression for Long Memory</title><link href="https://jasperchennn.github.io/astranav-memory-contexts-compression-long-memory/" rel="alternate" type="text/html" title="[CVPR2026]AstraNav-Memory: Contexts Compression for Long Memory" /><published>2025-12-25T00:00:00+00:00</published><updated>2025-12-25T00:00:00+00:00</updated><id>https://jasperchennn.github.io/astranav-memory-zh</id><content type="html" xml:base="https://jasperchennn.github.io/astranav-memory-contexts-compression-long-memory/"><![CDATA[<h2 id="摘要">摘要</h2>
<p>终身具身导航要求智能体能跨任务累积、保存并利用空间语义经验，从而在新环境中高效探索、在熟悉环境中快速抵达目标。现有以物体为中心的记忆框架虽具备可解释性，但依赖检测与重建流水线，鲁棒性和可扩展性受限。为此，本文提出<strong>AstraNav-Memory</strong>以图像为中心的记忆框架，通过高效的视觉上下文压缩模块与基于Qwen2.5-VL的导航策略端到端耦合，实现长时隐式记忆。该框架基于冻结DINOv3特征的ViT骨干网络，结合轻量级PixelUnshuffle+Conv块构建视觉tokenizer，支持可配置的压缩率——如16倍压缩设置下，每张图像仅编码为约30个token，将有效上下文容量从数十张图像扩展至数百张。在GOAT-Bench和HM3D-OVON基准上的实验结果表明，该方法取得了SOTA的导航性能，提升了陌生环境的探索效率，同时缩短了熟悉环境中的导航路径。消融实验进一步证明，适度的压缩率能在效率与精度间实现最优平衡。该研究证实，经压缩的以图像为中心的记忆框架可作为终身具身智能体的实用且可扩展的交互接口，使其能基于长时视觉历史进行推理，实现类人的高效导航。</p>

<h2 id="引言--背景">引言 / 背景</h2>
<p><strong>终身具身导航</strong>是机器人学与具身智能领域的核心研究方向，其核心需求是让智能体在长期、连续的导航任务中，持续积累空间语义记忆并灵活复用，从而在未知环境中减少无效探索，在已知环境中优化导航路径，实现类人的高效自主导航。</p>

<p>现有具身导航记忆框架主要以<strong>物体为中心</strong>构建，通过检测环境中的物体并重建物体间的空间关系形成记忆，虽能直观解释导航决策过程，但存在显著缺陷：一方面，物体检测与重建流水线对环境噪声、遮挡等情况敏感，鲁棒性不足；另一方面，该方式对物体特征的存储开销大，难以扩展到长时、大规模的视觉上下文，限制了智能体的长时记忆能力。</p>

<p>与此同时，以图像为中心的记忆方式虽能保留更完整的环境视觉信息，却因原始图像的token数量过多，导致上下文容量受限，无法实现长时视觉历史的存储与推理。因此，如何在保留有效视觉信息的前提下，对图像上下文进行高效压缩，提升具身智能体的长时记忆容量，成为终身具身导航的关键问题。</p>

<p>本文面向机器人学、具身智能、计算机视觉领域的研究人员与工程师，提出AstraNav-Memory框架，通过轻量级可配置的视觉上下文压缩模块，解决以图像为中心的记忆框架容量受限问题，为终身具身导航提供了高效、鲁棒、可扩展的长时记忆解决方案。</p>

<h2 id="方法--内容主体">方法 / 内容主体</h2>

<h3 id="1-问题定义">1. 问题定义</h3>
<p>本研究聚焦<strong>终身具身导航的长时记忆构建难题</strong>，核心目标是解决现有以物体为中心的记忆框架鲁棒性差、可扩展性低，以及以图像为中心的记忆框架视觉上下文容量受限的问题，构建一个<strong>高效、轻量、可配置</strong>的长时记忆框架，实现视觉上下文的有效压缩与长时存储，让具身智能体能基于数百张图像的长时视觉历史进行推理，提升在陌生环境的探索效率和熟悉环境的导航精度。</p>

<h3 id="2-解决思路--理论推导">2. 解决思路 / 理论推导</h3>
<p>提出<strong>AstraNav-Memory以图像为中心的长时记忆框架</strong>，核心思路是通过<strong>端到端耦合的视觉上下文压缩模块+导航策略</strong>，在保留环境关键空间语义信息的前提下，对图像视觉上下文进行高效压缩，扩展智能体的记忆容量，整体设计包括两大核心部分：</p>
<ol>
  <li><strong>可配置的轻量级视觉上下文压缩模块</strong>：以冻结DINOv3特征的ViT为骨干网络，结合轻量级<strong>PixelUnshuffle+Conv卷积块</strong>构建视觉tokenizer，支持灵活配置压缩率，通过对图像的视觉特征进行精细化压缩，在减少token数量的同时保留导航所需的核心空间语义信息，实现从“单张图像数百token”到“约30token”的高效压缩；</li>
  <li><strong>与导航策略的端到端耦合</strong>：将视觉上下文压缩模块与基于<strong>Qwen2.5-VL</strong>的多模态导航策略端到端联合训练，让压缩后的视觉token能直接被导航策略理解与推理，无需额外的特征转换模块，保证记忆利用的高效性，同时实现长时压缩视觉记忆与导航决策的协同优化。</li>
</ol>
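<p>上面第1点中“压缩率—token数”的关系可以用一个极简计算示意（patch 大小与输入分辨率均为本文举例的假设值，论文仅报告 16 倍压缩下每图约 30 个 token）：</p>

```python
def compressed_tokens(h, w, patch=16, ratio=16):
    """估算压缩后的单图token数（示意性计算：patch大小与输入分辨率均为假设值）。"""
    base = (h // patch) * (w // patch)  # ViT patch化后的原始token数
    return base // ratio                # PixelUnshuffle+Conv 按压缩率缩减token数

# 例如 352x352 输入：22*22 = 484 个patch token，16倍压缩后约为 30 个
```

<p>按这种算法，同样的上下文预算下可容纳的图像张数近似乘以压缩率，这正是“数十张扩展至数百张”的来源。</p>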

<h3 id="3-实验设置--实现细节">3. 实验设置 / 实现细节</h3>
<ul>
  <li><strong>实验基准</strong>：在终身具身导航主流基准<strong>GOAT-Bench</strong>和<strong>HM3D-OVON</strong>上开展实验，分别验证模型在陌生环境的探索能力和熟悉环境的路径优化能力；</li>
  <li><strong>评测指标</strong>：从<strong>导航成功率</strong>、<strong>探索效率</strong>（无效探索步数占比）、<strong>路径长度</strong>、<strong>记忆利用效率</strong>（压缩率与性能的平衡）四个核心维度评估模型性能；</li>
  <li><strong>对比实验</strong>：与当前SOTA的以物体为中心的记忆框架、无压缩的以图像为中心的记忆框架进行对比，验证压缩模块的有效性；</li>
  <li><strong>消融实验</strong>：测试不同压缩率（如8×、16×、32×）对导航性能的影响，探索效率与精度的最优平衡；同时验证PixelUnshuffle+Conv块、冻结DINOv3特征等核心设计的必要性。</li>
</ul>

<h3 id="4-结果与分析">4. 结果与分析</h3>
<ol>
  <li><strong>SOTA性能验证</strong>：AstraNav-Memory在GOAT-Bench和HM3D-OVON基准上均取得<strong>最优的导航性能</strong>，相较于对比方法，陌生环境的探索效率显著提升，熟悉环境的导航路径平均长度大幅缩短；</li>
  <li><strong>压缩模块的有效性</strong>：经16倍压缩后，单张图像仅编码为约30个token，智能体的有效上下文容量从数十张图像扩展至数百张，实现了长时视觉历史的存储与推理；</li>
  <li><strong>压缩率的最优平衡</strong>：消融实验表明，<strong>适度的压缩率（16×）</strong> 能在记忆效率与导航精度间实现最优平衡，压缩率过低则记忆容量受限，过高则会丢失关键视觉信息导致导航性能下降；</li>
  <li><strong>鲁棒性与可扩展性</strong>：相较于以物体为中心的记忆框架，AstraNav-Memory无需依赖物体检测与重建，对环境噪声、遮挡的鲁棒性更强，同时可灵活调整压缩率，适配不同的记忆容量需求，具备良好的可扩展性。</li>
</ol>

<h2 id="总结与展望">总结与展望</h2>

<h3 id="主要收获">主要收获</h3>
<ol>
  <li>提出<strong>AstraNav-Memory以图像为中心的长时记忆框架</strong>，首次将可配置的视觉上下文压缩与具身导航策略端到端耦合，解决了传统记忆框架鲁棒性差、容量受限的核心问题，为终身具身导航提供了全新的记忆构建范式；</li>
  <li>设计了基于冻结DINOv3特征+PixelUnshuffle+Conv块的轻量级视觉tokenizer，支持灵活配置压缩率，在16倍压缩下实现单张图像30token的高效编码，将智能体的视觉上下文容量扩展至数百张图像；</li>
  <li>在GOAT-Bench和HM3D-OVON基准上取得SOTA导航性能，证实了经压缩的以图像为中心的记忆框架在陌生环境探索和熟悉环境导航中的优势；</li>
  <li>明确了适度压缩率是实现记忆效率与导航精度最优平衡的关键，为后续具身导航记忆框架的设计提供了重要的实验依据。</li>
</ol>

<h3 id="个人小结">个人小结</h3>
<p>长时记忆的核心并非无限制存储视觉信息，而是在“信息保留”与“存储效率”间找到最优解，以图像为中心的压缩记忆方式摆脱了物体检测的依赖，更贴合具身智能体的实际导航场景，而可配置的压缩率也让框架能适配不同的硬件与任务需求。</p>

<h2 id="参考资料">参考资料</h2>
<ul>
  <li><a href="https://arxiv.org/abs/2512.21627">AstraNav-Memory: Contexts Compression for Long Memory (arXiv)</a></li>
  <li><a href="https://doi.org/10.48550/arXiv.2512.21627">AstraNav-Memory DOI:10.48550/arXiv.2512.21627</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="VLN" /><category term="长时记忆" /><category term="上下文压缩" /><category term="视觉tokenizer" /><category term="CVPR2026" /><summary type="html"><![CDATA[摘要 终身具身导航要求智能体能跨任务累积、保存并利用空间语义经验，从而在新环境中高效探索、在熟悉环境中快速抵达目标。现有以物体为中心的记忆框架虽具备可解释性，但依赖检测与重建流水线，鲁棒性和可扩展性受限。为此，本文提出AstraNav-Memory以图像为中心的记忆框架，通过高效的视觉上下文压缩模块与基于Qwen2.5-VL的导航策略端到端耦合，实现长时隐式记忆。该框架基于冻结DINOv3特征的ViT骨干网络，结合轻量级PixelUnshuffle+Conv块构建视觉tokenizer，支持可配置的压缩率——如16倍压缩设置下，每张图像仅编码为约30个token，将有效上下文容量从数十张图像扩展至数百张。在GOAT-Bench和HM3D-OVON基准上的实验结果表明，该方法取得了SOTA的导航性能，提升了陌生环境的探索效率，同时缩短了熟悉环境中的导航路径。消融实验进一步证明，适度的压缩率能在效率与精度间实现最优平衡。该研究证实，经压缩的以图像为中心的记忆框架可作为终身具身智能体的实用且可扩展的交互接口，使其能基于长时视觉历史进行推理，实现类人的高效导航。]]></summary></entry><entry xml:lang="en"><title type="html">AstraNav-World: World Model for Foresight Control and Consistency</title><link href="https://jasperchennn.github.io/astranav-world-foresight-control-en/" rel="alternate" type="text/html" title="AstraNav-World: World Model for Foresight Control and Consistency" /><published>2025-12-25T00:00:00+00:00</published><updated>2025-12-25T00:00:00+00:00</updated><id>https://jasperchennn.github.io/astranav-world-en</id><content type="html" xml:base="https://jasperchennn.github.io/astranav-world-foresight-control-en/"><![CDATA[<blockquote>
  <p>Note: this is the <strong>English version</strong> paired with the Chinese post
<code class="language-plaintext highlighter-rouge">AstraNav-World: World Model for Foresight Control and Consistency</code>.</p>
</blockquote>

<h2 id="abstract">Abstract</h2>

<p>We propose AstraNav-World, an end-to-end world model for embodied navigation
in open and dynamic environments. The model unifies multi-step visual prediction
and action sequence reasoning into a single probabilistic framework by combining
diffusion-based video generation with a vision-language policy. A bidirectional
constraint mechanism enforces both the executability of predicted futures and the
physical consistency of actions, which largely mitigates error accumulation in
the traditional “predict-then-plan” pipeline. Experiments on diverse navigation
benchmarks show improved trajectory accuracy and task success rate, and the model
exhibits strong zero-shot generalization in real-world tests.</p>

<h2 id="introduction">Introduction</h2>

<p>Here you can briefly motivate:</p>
<ul>
  <li>Why foresight control is crucial for embodied agents in open worlds;</li>
  <li>Limitations of decoupled prediction and planning pipelines;</li>
  <li>The target audience (CV / robotics / embodied AI researchers and engineers).</li>
</ul>

<h2 id="method">Method</h2>

<h3 id="1-problem-definition">1. Problem Definition</h3>

<p>Clearly state the embodied navigation setting and evaluation protocol, and
define what “foresight control” and “consistency” mean in this work.</p>

<h3 id="2-approach">2. Approach</h3>

<p>Describe the AstraNav-World architecture:</p>
<ul>
  <li>multi-module design combining diffusion video generator and VL policy;</li>
  <li>training objectives for action-conditioned visual prediction and
policy learning;</li>
  <li>bidirectional constraints that tie predicted futures to executable actions.</li>
</ul>

<h3 id="3-experiments">3. Experiments</h3>

<p>Summarize:</p>
<ul>
  <li>benchmarks, metrics and baselines;</li>
  <li>ablations on each key module and training objective;</li>
  <li>zero-shot transfer from simulation to real-world environments.</li>
</ul>

<h3 id="4-results-and-analysis">4. Results and Analysis</h3>

<p>Discuss:</p>
<ul>
  <li>trajectory and success-rate improvements;</li>
  <li>what happens when coupling between vision and action is removed;</li>
  <li>qualitative examples that illustrate better foresight and consistency.</li>
</ul>

<h2 id="conclusion-and-future-work">Conclusion and Future Work</h2>

<p>Highlight the main takeaways and outline:</p>
<ul>
  <li>deployment to real robots;</li>
  <li>extension to interaction / manipulation tasks;</li>
  <li>richer multi-modal inputs and better interpretability.</li>
</ul>

<h2 id="references">References</h2>

<ul>
  <li><a href="https://arxiv.org/abs/2512.21714">AstraNav-World: World Model for Foresight Control and Consistency (arXiv)</a></li>
  <li><a href="https://doi.org/10.48550/arXiv.2512.21714">DOI:10.48550/arXiv.2512.21714</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="embodied navigation" /><category term="world model" /><category term="foresight control" /><category term="diffusion model" /><category term="vision-language policy" /><summary type="html"><![CDATA[Note: this is the English version paired with the Chinese post AstraNav-World: World Model for Foresight Control and Consistency.]]></summary></entry><entry xml:lang="zh"><title type="html">AstraNav-World: World Model for Foresight Control and Consistency</title><link href="https://jasperchennn.github.io/astranav-world-foresight-control/" rel="alternate" type="text/html" title="AstraNav-World: World Model for Foresight Control and Consistency" /><published>2025-12-25T00:00:00+00:00</published><updated>2025-12-25T00:00:00+00:00</updated><id>https://jasperchennn.github.io/astranav-world-zh</id><content type="html" xml:base="https://jasperchennn.github.io/astranav-world-foresight-control/"><![CDATA[<h3 id="个人小结">个人小结</h3>
<p>也是工作量极大的一篇工作。我其实比较惊喜的是，在训练的时候仅用LoRA就可以很快地学习来自VLA的planning特征，并且能够实现视觉预测和动作预测的高效统一。在结构上，SkyReels-v4也使用了与我们的MMFCA类似的方法，个人感觉我们的模型在结构上还是比较超前的，里面也有非常多的小细节，后面可以细细描述一下。</p>

<p>另一个惊喜是泛化性能是真的挺好，绝对超过VLA！可以看论文里的proj页面。</p>

<h2 id="摘要">摘要</h2>
<p>针对开放动态环境下具身导航对世界演化和动作展开精准前瞻的需求，本文提出端到端世界模型 AstraNav-World，将未来视觉状态与动作序列推理融入统一概率框架。该模型融合扩散视频生成器与视觉语言策略，通过双向约束实现视觉预测的可执行性和决策的物理一致性，有效缓解解耦式流程的累积误差。实验表明，模型在各类具身导航基准中提升了轨迹精度和任务成功率，且在真实世界测试中展现出优异的零样本适配能力。</p>

<h2 id="方法--内容主体">方法 / 内容主体</h2>

<h3 id="1-问题定义">1. 问题定义</h3>
<p>聚焦<strong>开放动态环境下的具身导航问题</strong>，目标是让具身智能体精准预测未来多步视觉状态，并生成与预测视觉匹配、物理可行的动作轨迹，实现前瞻控制与环境演化的一致性，解决传统解耦方法累积误差、泛化能力弱等问题。</p>

<h3 id="2-解决思路--理论推导">2. 解决思路 / 理论推导</h3>
<p>提出 AstraNav-World 端到端世界模型，核心思路：</p>
<ol>
  <li><strong>多模块融合架构</strong>：整合扩散基视频生成器与视觉语言策略，支持预测场景和规划动作同步滚动更新；</li>
  <li><strong>双互补训练目标</strong>：
    <ul>
      <li>动作条件下的多步视觉预测</li>
      <li>基于预测视觉推导可行动作轨迹</li>
    </ul>
  </li>
  <li><strong>双向约束机制</strong>：使视觉预测具备可执行性，动作决策锚定在物理一致、与任务相关的未来场景，从根源缓解累积误差。</li>
</ol>
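<p>上述“预测场景与规划动作同步滚动更新”的耦合过程，可以用如下控制流草图来理解（policy、world_model 均为占位函数，并非论文的实际接口）：</p>

```python
def foresight_rollout(obs, policy, world_model, horizon=4):
    """前瞻滚动示意：动作由策略提出，世界模型据此预测下一观测，
    使每一步动作都锚定在预测出的未来场景上（函数名均为假设）。"""
    traj = []
    for _ in range(horizon):
        action = policy(obs)             # 视觉语言策略基于当前观测提出动作
        obs = world_model(obs, action)   # 扩散世界模型在动作条件下预测下一观测
        traj.append((action, obs))
    return traj
```

<p>与“先预测、后规划”的解耦流程不同，这里每一步的动作与视觉预测互为条件，累积误差因此被双向约束抑制。</p>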

<h3 id="3-实验设置">3. 实验设置</h3>
<ul>
  <li>在<strong>多样化具身导航基准数据集</strong>上验证轨迹精度、任务成功率；</li>
  <li>设计<strong>消融实验</strong>验证各核心模块必要性；</li>
  <li>采用<strong>零样本设置</strong>：仅在仿真数据训练，不微调直接迁移到真实导航场景，测试泛化能力。</li>
</ul>

<h3 id="4-结果与分析">4. 结果与分析</h3>
<ol>
  <li><strong>基准实验</strong>：AstraNav-World 在各类具身导航基准中显著提升轨迹精度与任务成功率；</li>
  <li><strong>消融实验</strong>：移除视觉-动作耦合或统一训练框架后，视觉预测质量与策略可靠性明显下降；</li>
  <li><strong>真实世界零样本</strong>：模型无需真实世界微调即可适配未见场景，证明学到可迁移的空间理解与导航动态。</li>
</ol>

<h2 id="总结与展望">总结与展望</h2>

<h3 id="主要收获">主要收获</h3>
<ol>
  <li>提出 AstraNav-World 统一框架，实现前瞻视觉与动作控制深度耦合与同步推演；</li>
  <li>设计双互补训练与双向约束，提升预测可执行性与决策物理一致性；</li>
  <li>在仿真与真实世界零样本场景均取得优异性能，突破传统模型泛化瓶颈。</li>
</ol>

<h2 id="参考资料">参考资料</h2>
<ul>
  <li><a href="https://arxiv.org/abs/2512.21714">AstraNav-World: World Model for Foresight Control and Consistency (arXiv)</a></li>
  <li><a href="https://doi.org/10.48550/arXiv.2512.21714">DOI:10.48550/arXiv.2512.21714</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="具身导航" /><category term="世界模型" /><category term="前瞻控制" /><category term="扩散模型" /><category term="视觉语言策略" /><summary type="html"><![CDATA[个人小结 也是工作量极大的一篇工作，我其实比较惊喜的是在训练的时候仅用Lora就可以很快的学习来自VLA的planning特征，并且能够实现视觉预测和动作预测的高效统一。并且在结构上SkyReels-v4也有使用我们类似的MMFCA的方法，个人感觉在结构上，我们的模型还是比较超前的，并且里面也有非常多的小细节，后面可以细细描述一下。]]></summary></entry><entry xml:lang="en"><title type="html">Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation</title><link href="https://jasperchennn.github.io/omni-effects-unified-spatial-controllable-vfx-generation-en/" rel="alternate" type="text/html" title="Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation" /><published>2025-08-11T00:00:00+00:00</published><updated>2025-08-11T00:00:00+00:00</updated><id>https://jasperchennn.github.io/omini-effects-xfx-en</id><content type="html" xml:base="https://jasperchennn.github.io/omni-effects-unified-spatial-controllable-vfx-generation-en/"><![CDATA[<blockquote>
  <p>Note: this is the <strong>English version</strong> paired with the Chinese post
<code class="language-plaintext highlighter-rouge">Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation</code>.</p>
</blockquote>

<h2 id="abstract">Abstract</h2>

<p>Visual effects (VFX) are central to modern video and film production.
Although recent video generation models enable low-cost VFX creation, they
are typically trained with single-effect LoRA adapters and therefore cannot
produce multiple effects at user-specified locations. To address cross-effect
interference and the lack of spatial controllability in joint multi-VFX training,
we propose Omni-Effects, the first unified framework for prompt-driven and
spatially controllable composite VFX generation. The core design includes:
1) a LoRA-based Mixture-of-Experts (LoRA-MoE) module that integrates diverse
effects in a single model while alleviating inter-task interference; and
2) a Spatial-Aware Prompt (SAP) module that injects spatial masks into text
tokens for precise spatial control, equipped with an Independent-Information
Flow (IIF) submodule to isolate control signals of different effects and avoid
unwanted blending. We further construct the Omni-VFX dataset and a dedicated
VFX evaluation protocol. Extensive experiments demonstrate that Omni-Effects
achieves accurate spatial control and diverse, high-quality effects, supporting
user-defined effect types and locations.</p>

<h2 id="introduction">Introduction</h2>

<p>Briefly introduce:</p>
<ul>
  <li>the role and cost of traditional VFX production;</li>
  <li>limitations of single-effect LoRA-based methods for real-world workflows;</li>
  <li>the need for unified, spatially-controllable multi-effect generation.</li>
</ul>

<h2 id="method">Method</h2>

<h3 id="1-problem-definition">1. Problem Definition</h3>

<p>Formulate unified VFX generation with:</p>
<ul>
  <li>multiple effect types;</li>
  <li>user-specified spatial regions;</li>
  <li>quality, independence and controllability requirements.</li>
</ul>

<h3 id="2-approach">2. Approach</h3>

<p>Describe:</p>
<ul>
  <li>the LoRA-MoE module: expert design, routing / combination strategy and
how it reduces cross-effect interference;</li>
  <li>the SAP module: how spatial masks are embedded into prompts;</li>
  <li>the IIF design: how information flow is separated across effects.</li>
</ul>

<h3 id="3-data-and-training">3. Data and Training</h3>

<p>Summarize the Omni-VFX dataset construction pipeline and the training setup
for the unified model.</p>

<h3 id="4-results-and-analysis">4. Results and Analysis</h3>

<p>Highlight:</p>
<ul>
  <li>single-effect quality vs. single-LoRA baselines;</li>
  <li>spatial accuracy compared with existing editing / generation methods;</li>
  <li>independence of multiple effects on the same frame.</li>
</ul>

<h2 id="conclusion-and-future-work">Conclusion and Future Work</h2>

<p>Summarize the contributions and outline:</p>
<ul>
  <li>extension to higher-resolution and production-grade VFX;</li>
  <li>better temporal modeling for long videos;</li>
  <li>interactive tools built on top of Omni-Effects.</li>
</ul>

<h2 id="references">References</h2>

<ul>
  <li><a href="https://arxiv.org/abs/2508.07981">Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation (arXiv)</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="visual effects generation" /><category term="VFX" /><category term="spatial control" /><category term="LoRA-MoE" /><category term="multi-effects composition" /><category term="computer vision" /><summary type="html"><![CDATA[Note: this is the English version paired with the Chinese post Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation.]]></summary></entry><entry xml:lang="zh"><title type="html">[AAAI2026]Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation</title><link href="https://jasperchennn.github.io/omni-effects-unified-spatial-controllable-vfx-generation/" rel="alternate" type="text/html" title="[AAAI2026]Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation" /><published>2025-08-11T00:00:00+00:00</published><updated>2025-08-11T00:00:00+00:00</updated><id>https://jasperchennn.github.io/omini-effects-xfx-zh</id><content type="html" xml:base="https://jasperchennn.github.io/omni-effects-unified-spatial-controllable-vfx-generation/"><![CDATA[<h3 id="个人小结">个人小结</h3>
<p>做了很久，工作量有点大。讲一下这个工作的心路历程吧：一开始是想简单地做一个统一的、能同时生成多个特效的模型，但后面发现有些特效其实比较难以兼容，并且有些特效是个体级别的（比如物体消失、爆炸），有些则是画面级别的（比如天降大雪、花花世界这种）。而且当时发现市面上还没有人做这种可控制的协同多特效合成的工作，所以就沿着这条路继续走了。一开始当然是沿着ControlNet走的，但确实会引入较大的计算量，后面发现了EasyControl，觉得这种在attention级别实现mask的方式会更好一些，不过最后效果上个人感觉差异不是很大。</p>

<p>我觉得本工作算是第一个走通可控制的协同多特效合成的，但更promising的应该是视频运动clone（比如后面的VideoAsPrompt）。不仅仅是特效，在具身领域也有比较强的应用，比如从人手运动迁移至机械臂运动。</p>

<h2 id="摘要">摘要</h2>
<p>视觉特效（VFX）是现代影视制作中核心的视觉增强手段，现有视频生成模型虽为VFX制作提供了低成本解决方案，但受限于单特效LoRA训练，仅能生成单一特效，无法实现指定位置的多特效协同生成。针对多VFX联合训练中存在的特效间干扰、空间可控性缺失等问题，本文提出首个支持提示词引导与空间可控复合特效生成的统一框架Omni-Effects。该框架包含两大核心创新：基于LoRA的混合专家模块（LoRA-MoE），在统一模型中融合多样特效并有效缓解跨任务干扰；空间感知提示词模块（SAP），将空间掩码信息融入文本令牌实现精准的空间控制，且其内置的独立信息流（IIF）模块能隔离各特效控制信号，避免非预期的特效融合。同时，本文构建了综合VFX数据集Omni-VFX，并设计了专用的VFX性能评估框架。大量实验表明，Omni-Effects可实现精准的空间控制与多样化的特效生成，支持用户自定义特效类别与生成位置。</p>

<h2 id="方法--内容主体">方法 / 内容主体</h2>

<h3 id="1-问题定义">1. 问题定义</h3>
<p>本研究聚焦<strong>视觉特效（VFX）生成的核心痛点</strong>，解决现有方法仅能生成单一特效、多特效联合训练存在跨任务干扰、特效生成缺乏精准空间控制的问题，核心目标是构建一个<strong>统一的VFX生成框架</strong>，实现<strong>多类特效的融合生成</strong>与<strong>指定位置的空间可控合成</strong>，支持用户通过提示词自定义特效类别与生成区域，满足实际视觉内容制作的复合特效需求。</p>

<h3 id="2-解决思路--理论推导">2. 解决思路 / 理论推导</h3>
<p>提出Omni-Effects统一视觉特效生成框架，核心思路是通过<strong>模块化的架构设计</strong>，分别解决多特效融合的干扰问题与空间可控性问题，实现提示词引导的定制化复合VFX生成，框架的两大核心创新模块及关键设计如下：</p>
<ol>
  <li><strong>LoRA-based Mixture of Experts（LoRA-MoE）</strong>：引入混合专家机制，构建一组针对不同特效的专家LoRA，将多样特效的特征表达整合到统一模型中，同时通过专家选择机制有效缓解不同特效间的跨任务干扰，保证单特效与多特效生成的质量；</li>
  <li><strong>Spatial-Aware Prompt（SAP）</strong>：将空间掩码信息融入文本提示词的令牌中，让模型捕捉特效生成的空间位置信息，实现特效的精准空间控制；同时SAP内置<strong>Independent-Information Flow (IIF)</strong> 模块，对不同特效的控制信号进行隔离，避免多特效合成时出现非预期的特征混合，保证各特效的独立性。</li>
</ol>
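<p>LoRA-MoE 的前向计算可以用一个极简草图示意：在冻结的基础权重输出上，叠加各特效专家 LoRA 的低秩增量，并按门控权重加权（矩阵规模与“门控权重已给出”的设定均为假设，仅说明结构，不代表论文的实际路由实现）：</p>

```python
def lora_moe_forward(x, W, experts, gates):
    """y = W·x + Σ_e g_e · B_e(A_e·x)：极简 LoRA-MoE 前向（gates 为各专家的门控权重）。"""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    y = matvec(W, x)                       # 冻结的基础权重分支
    for (A, B), g in zip(experts, gates):  # 每个特效对应一个专家 LoRA
        up = matvec(B, matvec(A, x))       # 低秩 down/up 投影
        y = [yi + g * ui for yi, ui in zip(y, up)]
    return y
```

<p>由于各特效的增量分别存放在独立的低秩分支中，专家选择只需调节门控权重，这正是缓解跨任务干扰的结构来源。</p>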

<p>此外，为支撑该研究的实验与验证，设计了全新的<strong>数据收集流水线</strong>（融合图像编辑与首尾帧到视频FLF2V合成），构建了综合的VFX专用数据集Omni-VFX，并制定了<strong>专用的VFX性能评估框架</strong>，实现对模型特效生成质量、空间可控性、多样性的全面验证。</p>

<figure>
  <img src="/images/posts/omni-effects-vfx-generation-20250811/overview.png" alt="Omni-Effects整体框架" style="max-width:100%;" />
  <figcaption>Omni-Effects整体框架</figcaption>
</figure>

<figure>
  <img src="/images/posts/omni-effects-vfx-generation-20250811/attn.png" alt="SAP与IIF" style="max-width:100%;" />
  <figcaption>SAP&amp;IIF</figcaption>
</figure>
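<p>SAP/IIF 在 attention 级别的信号隔离，可以用一个布尔掩码草图来理解：每个特效的控制 token 只允许被其空间掩码覆盖的图像 token 关注，不同特效之间互不可见（token 的编号与组织方式为本文举例的假设）：</p>

```python
def iif_attention_mask(n_img, effects):
    """构造 图像token × 控制token 的允许矩阵（True 表示允许attention）。
    effects: [(该特效的控制token数, 其空间掩码覆盖的图像token下标集合), ...]"""
    n_ctrl = sum(c for c, _ in effects)
    mask = [[False] * n_ctrl for _ in range(n_img)]
    col = 0
    for c, region in effects:
        for i in region:                  # 仅空间掩码覆盖到的图像token可见该特效
            for j in range(col, col + c):
                mask[i][j] = True
        col += c                          # 不同特效的控制信号占据互不重叠的列块
    return mask
```

<p>掩码之外的位置在 attention 中被屏蔽，因此各特效的控制信息只作用于各自区域，避免非预期的特征混合。</p>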

<h3 id="3-实验设置--实现细节">3. 实验设置 / 实现细节</h3>
<ul>
  <li><strong>实验数据集</strong>：基于自研的图像编辑+FLF2V合成流水线构建的专用VFX数据集Omni-VFX，覆盖多类常见视觉特效，包含精准的空间位置标注；</li>
  <li><strong>评价维度</strong>：从<strong>特效生成质量</strong>、<strong>空间控制精度</strong>、<strong>多特效合成的独立性</strong>、<strong>特效生成多样性</strong>四个核心维度对模型性能进行验证；</li>
  <li><strong>对比实验</strong>：与当前主流的单特效LoRA微调方法、通用视觉生成模型进行对比，验证Omni-Effects在多特效融合、空间可控性上的优势；</li>
  <li><strong>框架验证</strong>：通过针对性实验验证LoRA-MoE、SAP、IIF等核心模块的有效性，验证各模块对解决跨任务干扰、实现空间控制的关键作用。</li>
</ul>

<h3 id="4-结果与分析">4. 结果与分析</h3>
<ol>
  <li><strong>多特效生成能力</strong>：Omni-Effects成功在统一框架中融合了多类视觉特效的生成能力，且LoRA-MoE模块有效缓解了跨任务干扰，单特效生成质量与专用LoRA微调方法持平，多特效合成时无明显质量衰减；</li>
  <li><strong>空间控制精度</strong>：SAP模块将空间掩码与文本提示词融合，实现了特效在指定位置的精准生成，空间定位误差显著低于对比方法，满足定制化VFX制作的空间控制需求；</li>
  <li><strong>多特效合成独立性</strong>：IIF模块有效隔离了不同特效的控制信号，多特效在同一画面的指定位置合成时，未出现非预期的特效混合，各特效的视觉特征保持独立；</li>
  <li><strong>整体性能</strong>：大量实验表明，Omni-Effects在特效生成的多样性、质量、空间可控性上均表现优异，支持用户灵活指定特效的类别与生成位置，实现定制化的复合视觉特效生成。</li>
</ol>

<h2 id="参考资料">参考资料</h2>
<ul>
  <li><a href="https://arxiv.org/abs/2508.07981">Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation (arXiv)</a>
<!-- - [Omni-Effects DOI:10.48550/arXiv.2508.07981](https://doi.org/10.48550/arXiv.2508.07981) --></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="VFX" /><category term="LoRA-MoE" /><category term="多特效合成" /><category term="AAAI2026" /><summary type="html"><![CDATA[个人小结 做了很久，工作量有点大。。讲一下这个工作的心路历程吧，一开始是想简单的做一个统一的能同时生成多个特效的视频，但是后面发现有些特效其实比较难以兼容，并且有些特效是个体级别的（比如物体消失、爆炸），有些则是画面级别的（比如天降大雪、花花世界这种）。而且当时发现市面上还没有人做这种可控制的协同多特效合成的工作，所以就由着这条路继续走了，一开始当然是沿着ControlNet走的，但是确实会引入较大的计算量，后面发现了EasyConrtol，觉得这种在attention级别的mask实现会更好一些，但最后效果上个人感觉差异不是很大。]]></summary></entry><entry xml:lang="zh"><title type="html">[CVPR]UniEdit-I: Training-free Image Editing for Unified VLM via Iterative Understanding, Editing and Verifying</title><link href="https://jasperchennn.github.io/uniedit-i-training-free-vlm-image-editing/" rel="alternate" type="text/html" title="[CVPR]UniEdit-I: Training-free Image Editing for Unified VLM via Iterative Understanding, Editing and Verifying" /><published>2025-08-05T00:00:00+00:00</published><updated>2025-08-05T00:00:00+00:00</updated><id>https://jasperchennn.github.io/uniedit-zh</id><content type="html" xml:base="https://jasperchennn.github.io/uniedit-i-training-free-vlm-image-editing/"><![CDATA[<h2 id="个人小结">个人小结</h2>
<p>FlowEdit当时被我们发现之后，在Wan等模型上做了复现，发现效果出奇地好。当时的一条思路是沿着FlowEdit做一些运动迁移的任务，取得了一些效果，但FlowEdit有一个明显的局限：无法完成形状差异过大的编辑，比如把大象变成一只小狗，或者删除某个物体。但如果有一个生成语义token而非像素级token的DiT，这些问题似乎就可以迎刃而解了，比如BLIP-3里生成CLIP token的DiT，以及RAE、ScaleRAE等工作。语义级token的另一个好处是可以无缝接入一个understanding expert，自动化评判编辑强度。</p>

<h2 id="摘要">摘要</h2>
<p>统一视觉语言模型（VLM）可在单一框架内完成视觉理解与生成任务，OpenAI GPT-4o提出的<strong>理解型VLM-视觉特征-投影器-扩散模型-图像</strong>生成流水线，在冻结理解型VLM的同时仅训练生成相关模块，既保留了VLM强大的理解能力，又赋予其图像生成能力。但该流水线尚未探索如何便捷赋予统一VLM图像编辑能力，为此本文提出全新的无训练图像编辑框架<strong>UniEdit-I</strong>，通过<strong>理解、编辑、验证</strong>三步迭代流程，让统一VLM无需训练即可具备图像编辑能力。理解阶段通过结构化语义分析生成源提示词，并根据编辑指令完成最小化词汇替换得到目标提示词；编辑阶段引入时间自适应偏移量，实现去噪过程中从粗到细的连贯编辑；验证阶段校验目标提示词与中间编辑图像的对齐度，生成一致性评分与修正反馈并决定是否提前终止迭代。该迭代循环直至收敛，以无训练方式实现高保真的图像编辑效果。本文基于最新的BLIP3-o实现该方法，在GEdit-Bench基准测试中取得了SOTA性能。</p>

<h2 id="方法--内容主体">方法 / 内容主体</h2>

<h3 id="1-问题定义">1. 问题定义</h3>
<p>本研究聚焦<strong>统一视觉语言模型（VLM）的图像编辑痛点</strong>，解决OpenAI GPT-4o提出的生成流水线无法便捷实现图像编辑、现有方法需训练且与该流水线不兼容的问题，核心目标是构建一个<strong>无训练的图像编辑框架</strong>，在不冻结、不微调统一VLM主体的前提下，赋予其高保真的图像编辑能力，实现编辑指令与编辑结果的精准对齐，同时保持与GPT-4o生成流水线的兼容性。</p>

<h3 id="2-解决思路--理论推导">2. 解决思路 / 理论推导</h3>
<p>提出<strong>UniEdit-I无训练图像编辑框架</strong>，核心思路是基于GPT-4o的生成流水线，设计<strong>理解、编辑、验证三步迭代循环</strong>，全程无需对VLM及生成相关模块进行额外训练，仅通过流程化的迭代优化实现高保真图像编辑，三大核心阶段的设计如下：</p>
<ol>
  <li><strong>理解阶段（Understanding）</strong>：对源图像进行结构化语义分析，提取图像的关键语义信息并生成结构化的<strong>源提示词</strong>；根据用户的自然语言编辑指令，对源提示词进行<strong>最小化词汇替换</strong>，生成与编辑需求匹配的<strong>目标提示词</strong>，保证提示词与编辑意图的精准对齐，同时减少不必要的语义变更。</li>
  <li><strong>编辑阶段（Editing）</strong>：在扩散模型的去噪生成过程中，引入<strong>时间自适应偏移量</strong>，让模型在去噪的不同阶段实现<strong>从粗到细的连贯编辑</strong>——前期粗粒度匹配目标提示词的核心语义，后期细粒度优化图像的细节纹理，避免编辑过程中出现图像断裂、细节失真等问题。</li>
  <li><strong>验证阶段（Verifying）</strong>：将中间编辑图像与目标提示词进行语义与视觉的双重校验，自动计算<strong>图像-提示词一致性评分</strong>；若评分未达阈值，生成针对性的<strong>修正反馈</strong>并重新进入编辑阶段，若评分达标则提前终止迭代，实现编辑过程的自适应优化。</li>
</ol>
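<p>三步迭代循环的控制流可以概括为如下草图（understand / edit / verify 均为占位函数，一致性阈值 tau 与最大迭代次数为示例假设，并非论文给定数值）：</p>

```python
def uniedit_loop(image, instruction, understand, edit, verify, max_iters=5, tau=0.9):
    """理解 -> 编辑 -> 验证 的迭代循环：一致性评分达到阈值即提前终止。"""
    src_prompt, tgt_prompt = understand(image, instruction)  # 结构化语义分析 + 最小化词汇替换
    feedback, score = None, 0.0
    for _ in range(max_iters):
        image = edit(image, tgt_prompt, feedback)    # 带修正反馈的去噪编辑
        score, feedback = verify(image, tgt_prompt)  # 图像-提示词一致性校验
        if score >= tau:
            break
    return image, score
```

<p>整个循环不更新任何模型参数，编辑质量完全靠验证阶段的评分与反馈驱动，这正是“无训练”设定的含义。</p>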

<h3 id="3-实验设置--实现细节">3. 实验设置 / 实现细节</h3>
<ul>
  <li><strong>模型基础</strong>：基于当前最新的统一视觉语言模型<strong>BLIP3-o</strong>搭建实验框架，遵循GPT-4o提出的<strong>理解型VLM-视觉特征-投影器-扩散模型</strong>流水线，冻结BLIP3-o的理解主体部分，不进行任何额外训练；</li>
  <li><strong>实验基准</strong>：在图像编辑领域的专用基准测试集<strong>GEdit-Bench</strong>上开展实验，验证模型的编辑性能；</li>
  <li><strong>评价维度</strong>：从<strong>编辑保真度</strong>、<strong>提示词-图像对齐度</strong>、<strong>图像细节完整性</strong>、<strong>编辑效率</strong>四个核心维度评估UniEdit-I的性能，同时与当前主流的有训练/无训练图像编辑方法进行对比；</li>
  <li><strong>核心验证</strong>：验证三步迭代流程的有效性，以及时间自适应偏移量、一致性评分机制对编辑效果的提升作用。</li>
</ul>

<h3 id="4-结果与分析">4. 结果与分析</h3>
<ol>
  <li><strong>SOTA性能验证</strong>：基于BLIP3-o实现的UniEdit-I在<strong>GEdit-Bench基准测试</strong>中取得了当前最优的性能，在编辑保真度、提示词-图像对齐度等核心指标上均显著优于对比方法；</li>
  <li><strong>无训练优势</strong>：UniEdit-I全程无需对VLM及生成模块进行微调训练，在实现高保真编辑的同时，完全保留了VLM原有的视觉理解能力，且大幅降低了模型部署与应用的成本；</li>
  <li><strong>迭代流程有效性</strong>：理解阶段的最小化词汇替换保证了编辑意图的精准传递，编辑阶段的时间自适应偏移量实现了从粗到细的连贯编辑，验证阶段的一致性评分机制有效提升了编辑效率，避免了无效迭代；</li>
  <li><strong>流水线兼容性</strong>：UniEdit-I完美兼容GPT-4o提出的统一VLM生成流水线，无需对原有流水线进行结构修改，仅通过流程化迭代即可赋予其图像编辑能力，具备极强的工程落地性。</li>
</ol>

<h2 id="参考资料">参考资料</h2>
<ul>
  <li><a href="https://arxiv.org/abs/2508.03142">UniEdit-I: Training-free Image Editing for Unified VLM via Iterative Understanding, Editing and Verifying (arXiv)</a></li>
  <li><a href="https://doi.org/10.48550/arXiv.2508.03142">UniEdit-I DOI:10.48550/arXiv.2508.03142</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="无训练图像编辑" /><category term="Unified Model" /><category term="CVPR2026" /><summary type="html"><![CDATA[个人小结 FlowEdit当时被我们发现之后，并在Wan等模型上做了复现，发现效果出奇的好。当时的一条思路是沿着FlowEdit做一些运动迁移的任务，有了一些效果但是FlowEdit有着明显的无法做形状差异过大的编辑：比如把大象变成一只小狗，或者删除某个物体。但是如果有个生成语义token而不是像素级Token的DiT，这些问题似乎就可以迎刃而解了，比如Blip-3里面的生成clip token的DiT以及RAE、ScaleRAE等工作。并且语义级token的好处是可以无缝接入一个understanding expert，自动化评判编辑强度。]]></summary></entry><entry xml:lang="zh"><title type="html">[ICLR2026] NarrLV: Towards a Comprehensive Narrative-Centric Evaluation for Long Video Generation Models</title><link href="https://jasperchennn.github.io/narrlv-comprehensive-narrative-evaluation-long-video/" rel="alternate" type="text/html" title="[ICLR2026] NarrLV: Towards a Comprehensive Narrative-Centric Evaluation for Long Video Generation Models" /><published>2025-07-15T00:00:00+00:00</published><updated>2025-07-15T00:00:00+00:00</updated><id>https://jasperchennn.github.io/narrlv-zh</id><content type="html" xml:base="https://jasperchennn.github.io/narrlv-comprehensive-narrative-evaluation-long-video/"><![CDATA[<h3 id="个人小结">个人小结</h3>
<p>现在的视频生成似乎有个问题：我给一段prompt，比如“一个人先跑步、再跳、再走、再跑起来”，对于这种包含多个TNA的事件，应该怎么合理地安排视频时间和事件的对应关系呢？值得思考的问题～（已经有了一些类似的工作，比如SwitchCraft，但感觉更promising的是加在自回归式的pipeline里面）。</p>

<h2 id="摘要">摘要</h2>
<p>随着基础视频生成技术的快速发展，长视频生成模型凭借更广阔的内容创作空间展现出良好的研究潜力，长视频生成的核心目标不仅是延长视频时长，更要在长时视频中精准表达更丰富的叙事内容。但目前针对长视频生成模型的评估仍依赖于VBench等仅含简单叙事提示的基准，缺乏专用的评测体系。为此，本文提出首个全方位评估长视频生成模型叙事表达能力的基准<strong>NarrLV</strong>。该工作受电影叙事理论启发，首先提出视频中保持连续视觉呈现的基本叙事单元——时间叙事原子（TNA），并以其数量量化叙事丰富度，同时基于影响TNA变化的三大电影叙事核心要素构建自动提示词生成流水线，可生成TNA数量灵活扩展的评测提示词；其次，依据叙事内容表达的三个递进层次，基于多模态大模型（MLLM）的问答框架设计了高效的评估指标；最后对现有长视频生成模型及基础生成模型开展了大量评测。实验结果表明，NarrLV提出的指标与人类主观判断高度契合，评测结果也清晰揭示了当前视频生成模型在叙事内容表达上的能力边界。</p>

<h2 id="方法--内容主体">方法 / 内容主体</h2>

<h3 id="1-问题定义">1. 问题定义</h3>
<p>本研究聚焦<strong>长视频生成模型的评测难题</strong>，核心目标是解决现有评测基准适配性差、缺乏叙事针对性、评估指标单一的问题，构建一个<strong>全面、客观、与人类主观判断对齐</strong>的叙事中心式长视频生成评估体系，实现对长视频生成模型叙事丰富度、连贯性、表达准确性的量化评估，同时明确当前模型在叙事表达上的能力边界。</p>

<h3 id="2-解决思路--理论推导">2. 解决思路 / 理论推导</h3>
<p>受电影叙事理论启发，提出NarrLV评估基准，核心思路是<strong>从叙事单元定义、评测数据构建、评估指标设计三个维度</strong>，打造全方位的长视频叙事评估体系，整体设计围绕<strong>时间叙事原子（TNA）</strong> 这一核心概念展开，具体包括三部分：</p>
<ol>
  <li><strong>定义时间叙事原子（Temporal Narrative Atom, TNA）</strong>：将视频中<strong>保持连续视觉呈现的基本叙事单元</strong>定义为TNA，以TNA的数量作为长视频叙事丰富度的量化指标，为叙事评估提供可落地的基本单位；</li>
  <li><strong>构建TNA可控的自动提示词生成流水线</strong>：基于影响TNA变化的<strong>三大电影叙事核心要素</strong>设计提示词生成规则，可灵活生成不同TNA数量的评测提示词，满足对不同叙事丰富度要求的长视频评测需求；</li>
  <li><strong>设计MLLM驱动的递进式叙事评估指标</strong>：依据<strong>叙事内容表达的三个递进层次</strong>，搭建多模态大模型（MLLM）的自动问答评测框架，通过模型对长视频的叙事理解与问答反馈，实现对叙事连贯性、表达准确性的量化评估。</li>
</ol>
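<p>基于三个递进层次的 MLLM 问答结果，评分的汇总方式可以用一个加权草图说明（层次划分来自论文，但各层次权重为本文举例的假设，并非论文给定数值）：</p>

```python
def narrlv_score(level_answers, weights=(0.2, 0.3, 0.5)):
    """level_answers: 三个递进叙事层次各自的问答正误列表（MLLM判定结果）。
    返回按层次加权的叙事得分；权重仅为示意。"""
    assert len(level_answers) == len(weights)
    per_level = [sum(ans) / len(ans) for ans in level_answers]  # 各层次的正确率
    return sum(w * s for w, s in zip(weights, per_level))
```

<p>这种按层次加权的汇总使得“画面元素正确但叙事逻辑混乱”的视频无法获得高分，与仅看视觉质量的指标形成互补。</p>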

<h3 id="3-实验设置--实现细节">3. 实验设置 / 实现细节</h3>
<ul>
  <li><strong>评测对象</strong>：选取当前主流的<strong>专用长视频生成模型</strong>与经典的<strong>基础视频生成模型</strong>作为评测对象，覆盖不同架构、不同训练范式的模型类型，保证评测结果的全面性；</li>
  <li><strong>评测数据</strong>：使用自研的自动提示词生成流水线，生成包含不同TNA数量、不同叙事复杂度的长视频生成提示词集，作为统一的评测输入；</li>
  <li><strong>评估维度</strong>：从<strong>叙事丰富度</strong>（TNA数量）、<strong>叙事连贯性</strong>、<strong>叙事表达准确性</strong>三个核心维度开展评测，同时对比模型生成视频的视觉质量，实现多维度综合评估；</li>
  <li><strong>验证方式</strong>：将NarrLV的自动评测结果与<strong>人类主观评测结果</strong>进行对比，验证指标的有效性与一致性；同时开展消融实验，验证TNA定义、MLLM问答框架对评测结果的影响。</li>
</ul>

<h3 id="4-结果与分析">4. 结果与分析</h3>
<ol>
  <li><strong>指标有效性</strong>：NarrLV提出的评估指标与人类主观判断<strong>高度契合</strong>，在叙事连贯性、表达准确性的评测上，自动评测结果与人类评分的相关性显著高于现有评测指标，验证了指标的科学性；</li>
  <li><strong>模型能力边界</strong>：评测结果清晰揭示了当前长视频生成模型的叙事表达短板——多数模型在低TNA数量的简单叙事场景表现良好，但在高TNA数量的复杂叙事场景中，易出现叙事断裂、逻辑混乱、内容遗漏等问题；</li>
  <li><strong>视觉与叙事的失衡</strong>：部分模型虽能保证生成视频的视觉质量，但在叙事表达上存在明显缺陷，证明了仅以视觉质量评估长视频生成模型的片面性；</li>
  <li><strong>基准的扩展性</strong>：NarrLV的自动提示词生成流水线支持TNA数量的灵活扩展，可适配不同叙事复杂度的评测需求，同时MLLM驱动的评估框架具备良好的通用性，可迁移到不同类型的视频生成模型评测中。</li>
</ol>

<h2 id="参考资料">参考资料</h2>
<ul>
  <li><a href="https://arxiv.org/abs/2507.11245">NarrLV: Towards a Comprehensive Narrative-Centric Evaluation for Long Video Generation Models (arXiv)</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="长视频生成" /><category term="BenchMark" /><category term="多模态大模型" /><category term="ICLR2026" /><summary type="html"><![CDATA[个人小结 现在的视频生成似乎又个问题是，我给一段prompt：比如 “一个人先跑步、再跳、再走、再跑起来”， 对于这种具有多个TNA的时间，应该怎么合理的安排视频时间和事件的对应关系呢？值得思考的问题～(已经有了一些类似的工作比如SwitchCraft，但是更promissing的感觉是加在自回归式的pipeline里面)。]]></summary></entry><entry xml:lang="en"><title type="html">[CVPR2025]Decouple Distortion from Perception: Region Adaptive Diffusion for Extreme-low Bitrate Perception Image Compression</title><link href="https://jasperchennn.github.io/en/region-adaptive-diffusion-extreme-low-bitrate-compression/" rel="alternate" type="text/html" title="[CVPR2025]Decouple Distortion from Perception: Region Adaptive Diffusion for Extreme-low Bitrate Perception Image Compression" /><published>2025-06-17T00:00:00+00:00</published><updated>2025-06-17T00:00:00+00:00</updated><id>https://jasperchennn.github.io/en/region-adaptive-diffusion-compression-en</id><content type="html" xml:base="https://jasperchennn.github.io/en/region-adaptive-diffusion-extreme-low-bitrate-compression/"><![CDATA[<h2 id="abstract">Abstract</h2>
<p>Generative image compression leverages the generative capabilities of diffusion models to achieve excellent perceptual fidelity at extreme-low bitrates. However, existing methods overlook the non-uniform complexity of images, making it difficult to balance global perceptual quality with local texture consistency and to achieve efficient allocation of coding resources. To address this issue, this paper proposes the Map-guided Masked Realistic Image Diffusion Codec (MRIDC), which aims to optimize the trade-off between local distortion and global perceptual quality in extreme-low bitrate compression. MRIDC integrates a vector-quantized image encoder with a diffusion-based decoder. At the encoding stage, the Map-guided Latent Masking (MLM) module enables adaptive resource allocation based on image complexity. At the decoding stage, the Bidirectional Prediction Controllable Generation (BPCG) module completes masked latent variables and reconstructs images. Experimental results demonstrate that MRIDC achieves state-of-the-art (SOTA) perceptual compression quality at extreme-low bitrates, effectively preserving feature consistency in key regions, advancing the perceptual rate-distortion performance curve, and establishing a new benchmark for balancing compression efficiency and visual fidelity.</p>

<h2 id="introduction--background">Introduction / Background</h2>
<p>In scenarios such as the Internet of Things (IoT), edge computing, and real-time visual transmission, <strong>extreme-low bitrate image compression</strong> has become a core requirement. It must not only achieve high compression ratios to save transmission and storage resources but also preserve the perceptual quality of images and the feature consistency of key regions. Generative diffusion models have brought new breakthroughs to extreme-low bitrate image compression, significantly improving the perceptual fidelity of compressed images. However, existing diffusion-based compression methods share a core flaw: <strong>they treat images as uniform entities for encoding and reconstruction, ignoring the complexity differences between different regions of an image</strong>.</p>

<p>This flaw leads to unbalanced allocation of coding resources—simple regions occupy excessive resources while complex key regions lack sufficient resources. Ultimately, this results in acceptable global perceptual quality but local texture distortion and loss of key features, making it difficult to meet the requirements of visual perception tasks for image details. Meanwhile, there is still room for optimization in the rate-distortion perception trade-off of existing methods, and the precise balance between compression efficiency and visual fidelity has not yet been achieved.</p>

<p>Aimed at researchers, engineers, and practitioners in image compression, computer vision, and multimedia processing, this paper proposes a region-adaptive diffusion-based image compression framework that addresses the uneven resource allocation and local distortion of existing methods, providing a new solution for extreme-low bitrate perceptual image compression. This work was published at <strong>CVPR 2025</strong>.</p>

<h2 id="methodology--main-content">Methodology / Main Content</h2>

<h3 id="1-problem-definition">1. Problem Definition</h3>
<p>This research focuses on the <strong>perceptual image compression problem at extreme-low bitrates</strong>. The core objective is to address the issues of inefficient coding resource allocation, local texture distortion, and inconsistent key region features in existing diffusion-based generative compression methods caused by ignoring the non-uniform complexity of images. We aim to achieve triple optimization of <strong>global perceptual quality</strong>, <strong>local texture consistency</strong>, and <strong>compression efficiency</strong>, thereby improving the comprehensive perceptual rate-distortion performance.</p>

<h3 id="2-solution-approach--theoretical-derivation">2. Solution Approach / Theoretical Derivation</h3>
<p>We propose the <strong>Map-guided Masked Realistic Image Diffusion Codec (MRIDC)</strong>, whose core idea is to decouple distortion and perceptual quality in image compression through <strong>region-adaptive coding resource allocation</strong> and <strong>constrained diffusion reconstruction</strong>, achieving a precise trade-off between them. The overall architecture is a joint design of a <strong>vector-quantized encoder + diffusion-based decoder</strong>, with core modules including:</p>
<ol>
  <li><strong>Map-guided Latent Masking (MLM) Module (Encoding Stage)</strong>: Based on prior information of image complexity, selectively masks the latent space to retain more latent variable information for complex/key regions and mask more redundant information for simple regions, realizing adaptive allocation of coding resources and improving resource utilization efficiency;</li>
  <li><strong>Bidirectional Prediction Controllable Generation (BPCG) Module (Decoding Stage)</strong>: Adds constraint guidance to the generation process of the diffusion model, bidirectionally predicts and completes masked latent variables based on unmasked latent variable information, achieves constrained image reconstruction, and ensures local texture consistency and fidelity of key features.</li>
</ol>
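<p>As a loose sketch of the MLM idea (not the paper's actual implementation; the standard-deviation complexity proxy, patch size, and keep ratio below are illustrative assumptions), one can score each latent token by the local complexity of its image patch and mask the lowest-scoring tokens:</p>

```python
import numpy as np

def complexity_map(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Per-patch complexity proxy: local standard deviation of intensities."""
    h, w = image.shape[0] // patch, image.shape[1] // patch
    patches = image[:h * patch, :w * patch].reshape(h, patch, w, patch)
    return patches.std(axis=(1, 3))  # shape (h, w)

def map_guided_mask(latents: np.ndarray, cmap: np.ndarray, keep_ratio: float = 0.5):
    """Keep latent tokens whose patches have the highest complexity scores;
    zero out (mask) the rest. Returns masked latents and the boolean keep-mask."""
    flat = cmap.ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]       # k-th largest complexity value
    keep = cmap >= threshold                     # shape (h, w)
    masked = np.where(keep[..., None], latents, 0.0)
    return masked, keep

# toy example: 64x64 grayscale image, 4x4 grid of 16-dim latent tokens
rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[:32, :32] *= 0.01                            # a flat, "simple" quadrant
lat = rng.normal(size=(4, 4, 16))
masked, keep = map_guided_mask(lat, complexity_map(img), keep_ratio=0.5)
```

<p>In the paper the masking map comes from complexity prior information rather than a raw pixel-variance proxy; the sketch only illustrates the resource-allocation mechanism, in which simple regions give up latent capacity to complex ones.</p>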

<figure>
  <img src="/images/posts/region-adaptive-diffusion-compression-20250617/overview.png" alt="Overview of the MRIDC framework" style="max-width:100%;" />
  <figcaption>Overall framework of MRIDC</figcaption>
</figure>

<h3 id="3-experimental-setup--implementation-details">3. Experimental Setup / Implementation Details</h3>
<ul>
  <li><strong>Experimental Benchmarks</strong>: Experiments are conducted on mainstream public datasets for extreme-low bitrate image compression, comparing with current SOTA generative image compression methods and traditional compression methods;</li>
  <li><strong>Evaluation Metrics</strong>: Comprehensive evaluation from three dimensions: <strong>perceptual quality</strong> (e.g., LPIPS, SSIM, subjective MOS scores), <strong>rate-distortion performance</strong> (RD curves), and <strong>key region feature consistency</strong>;</li>
  <li><strong>Core Verification</strong>: Verify the region-adaptive resource allocation effect, local texture reconstruction capability, and generalization performance of MRIDC at different extreme-low bitrates.</li>
</ul>
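<p>As a side note on the metrics: LPIPS and MOS require learned models or human raters, but the rate axis of an RD curve and classic pixel-level distortion are simple to compute. A minimal sketch (the example numbers are illustrative, not figures from the paper):</p>

```python
import numpy as np

def bits_per_pixel(compressed_bytes: int, height: int, width: int) -> float:
    """Rate: compressed size in bits divided by the number of pixels."""
    return compressed_bytes * 8 / (height * width)

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    """Classic distortion measure: peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

# e.g. a 768x512 image squeezed into 1536 bytes sits in the extreme-low regime
rate = bits_per_pixel(1536, 768, 512)  # 0.03125 bpp
```

<p>A perceptual RD curve simply swaps the distortion axis for a perceptual score such as LPIPS while keeping the bpp axis, which is why "advancing the curve" means better perceptual quality per bit.</p>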

<h3 id="4-results-and-analysis">4. Results and Analysis</h3>
<ol>
  <li><strong>SOTA Performance Verification</strong>: MRIDC achieves <strong>state-of-the-art perceptual compression quality</strong> at extreme-low bitrates, significantly outperforming comparison methods on objective metrics such as LPIPS and SSIM as well as on subjective MOS scores;</li>
  <li><strong>Key Region Fidelity</strong>: Through region-adaptive resource allocation and constrained reconstruction, the model effectively preserves feature consistency in key image regions, solving the problems of local texture distortion and feature loss in existing methods;</li>
  <li><strong>Rate-Distortion Perception Optimization</strong>: The model significantly <strong>advances the perceptual rate-distortion performance curve</strong>, achieving higher perceptual quality at the same bitrate and lower bitrate at the same perceptual quality, establishing a new industry benchmark for balancing compression efficiency and visual fidelity;</li>
  <li><strong>Module Effectiveness</strong>: Ablation experiments verify the necessity of the core modules MLM and BPCG. Removing either module leads to decreased resource allocation efficiency, reduced perceptual quality, and lower local consistency.</li>
</ol>

<figure>
  <img src="/images/posts/region-adaptive-diffusion-compression-20250617/results.png" alt="Quantitative results" style="max-width:100%;" />
  <figcaption>Quantitative results</figcaption>
</figure>

<h2 id="conclusion-and-outlook">Conclusion and Outlook</h2>

<h3 id="key-contributions">Key Contributions</h3>
<ol>
  <li>Proposed the <strong>Map-guided Masked Realistic Image Diffusion Codec (MRIDC)</strong>, integrating <strong>region-adaptive resource allocation</strong> into diffusion-based generative image compression for the first time, decoupling distortion and perceptual quality in compression, and solving the core problem of uneven resource allocation in existing methods;</li>
  <li>Designed dedicated modules MLM (encoding stage) and BPCG (decoding stage), realizing end-to-end optimization from latent variable masking to constrained reconstruction, ensuring dual improvement of global perceptual quality and local texture consistency at extreme-low bitrates;</li>
  <li>Published at CVPR 2025, MRIDC achieves SOTA performance in extreme-low bitrate perceptual image compression, advancing the perceptual rate-distortion performance curve and establishing a new benchmark for balancing compression efficiency and visual fidelity.</li>
</ol>

<h3 id="personal-notes">Personal Notes</h3>
<p>At the time, we did not notice that a series of later autoregressive works on visual tokenizers would adopt similar dual-encoder structures. Unfortunately, we only explored compression and reconstruction, without investigating generation.</p>

<h2 id="references">References</h2>
<ul>
  <li><a href="https://openaccess.thecvf.com/content/CVPR2025/html/Xu_Decouple_Distortion_from_Perception_Region_Adaptive_Diffusion_for_Extreme-low_Bitrate_CVPR_2025_paper.html">Decouple Distortion from Perception (CVPR 2025 OpenAccess)</a></li>
  <li><a href="https://cvpr.thecvf.com/2025/">CVPR 2025 Conference Proceedings</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="Image Compression" /><category term="Diffusion Model" /><category term="Extreme-low Bitrate" /><category term="Perceptual Quality" /><category term="Region Adaptive" /><category term="CVPR2025" /><summary type="html"><![CDATA[Abstract Generative image compression leverages the generative capabilities of diffusion models to achieve excellent perceptual fidelity at extreme-low bitrates. However, existing methods overlook the non-uniform complexity of images, making it difficult to balance global perceptual quality with local texture consistency and to achieve efficient allocation of coding resources. To address this issue, this paper proposes the Map-guided Masked Realistic Image Diffusion Codec (MRIDC), which aims to optimize the trade-off between local distortion and global perceptual quality in extreme-low bitrate compression. MRIDC integrates a vector-quantized image encoder with a diffusion-based decoder. At the encoding stage, the Map-guided Latent Masking (MLM) module enables adaptive resource allocation based on image complexity. At the decoding stage, the Bidirectional Prediction Controllable Generation (BPCG) module completes masked latent variables and reconstructs images. 
Experimental results demonstrate that MRIDC achieves state-of-the-art (SOTA) perceptual compression quality at extreme-low bitrates, effectively preserving feature consistency in key regions, advancing the perceptual rate-distortion performance curve, and establishing a new benchmark for balancing compression efficiency and visual fidelity.]]></summary></entry><entry xml:lang="zh"><title type="html">[CVPR2025]Decouple Distortion from Perception: Region Adaptive Diffusion for Extreme-low Bitrate Perception Image Compression</title><link href="https://jasperchennn.github.io/region-adaptive-diffusion-extreme-low-bitrate-compression/" rel="alternate" type="text/html" title="[CVPR2025]Decouple Distortion from Perception: Region Adaptive Diffusion for Extreme-low Bitrate Perception Image Compression" /><published>2025-06-17T00:00:00+00:00</published><updated>2025-06-17T00:00:00+00:00</updated><id>https://jasperchennn.github.io/region-adaptive-diffusion-compression-zh</id><content type="html" xml:base="https://jasperchennn.github.io/region-adaptive-diffusion-extreme-low-bitrate-compression/"><![CDATA[<h3 id="个人小结">Personal Notes</h3>
<p>At the time, we did not notice that a series of later autoregressive works on visual tokenizers would adopt similar dual-encoder structures; unfortunately, we only explored compression and reconstruction, not generation.</p>

<p>The code base draws on PerCo, MaskGiT, and ControlNet.</p>

<h2 id="摘要">Abstract</h2>
<p>Generative image compression leverages the generative capability of diffusion models to achieve excellent perceptual fidelity at extreme-low bitrates, but existing methods ignore the non-uniform complexity of images, making it difficult to balance global perceptual quality with local texture consistency or to allocate coding resources efficiently. This paper therefore proposes the Map-guided Masked Realistic Image Diffusion Codec (MRIDC), which optimizes the trade-off between local distortion and global perceptual quality in extreme-low bitrate compression. MRIDC combines a vector-quantized image encoder with a diffusion-based decoder: at the encoding stage, a Map-guided Latent Masking (MLM) module allocates resources adaptively according to image complexity; at the decoding stage, a Bidirectional Prediction Controllable Generation (BPCG) module completes the masked latent variables and reconstructs the image. Experimental results show that MRIDC achieves state-of-the-art perceptual compression quality at extreme-low bitrates, effectively preserves feature consistency in key regions, advances the perceptual rate-distortion performance curve, and establishes a new benchmark for balancing compression efficiency and visual fidelity.</p>

<h2 id="方法--内容主体">Methodology / Main Content</h2>

<h3 id="1-问题定义">1. Problem Definition</h3>
<p>This work focuses on <strong>perceptual image compression at extreme-low bitrates</strong>. The core goal is to resolve the inefficient coding-resource allocation, local texture distortion, and inconsistent key-region features that arise when existing diffusion-based generative compression methods ignore the non-uniform complexity of images, jointly optimizing <strong>global perceptual quality</strong>, <strong>local texture consistency</strong>, and <strong>compression efficiency</strong> to improve overall perceptual rate-distortion performance.</p>

<h3 id="2-解决思路--理论推导">2. Solution Approach / Theoretical Derivation</h3>
<p>We propose the <strong>Map-guided Masked Realistic Image Diffusion Codec (MRIDC)</strong>. Its core idea is to decouple distortion from perceptual quality in image compression through <strong>region-adaptive allocation of coding resources</strong> and <strong>constrained diffusion-based reconstruction</strong>, achieving a precise trade-off between the two. The overall architecture is a joint design of a <strong>vector-quantized encoder + diffusion-based decoder</strong>, with the following core modules:</p>
<ol>
  <li><strong>Map-guided Latent Masking (MLM) module (encoding stage)</strong>: based on prior information about image complexity, it selectively masks the latent space, retaining more latent information for complex/key regions and masking more redundancy in simple regions, thereby allocating coding resources adaptively and improving resource utilization;</li>
  <li><strong>Bidirectional Prediction Controllable Generation (BPCG) module (decoding stage)</strong>: it adds constraint guidance to the diffusion model's generation process, bidirectionally predicting and completing the masked latent variables from the unmasked ones to achieve constrained image reconstruction, preserving local texture consistency and the fidelity of key features.</li>
</ol>
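<p>As a toy analogue of the bidirectional completion idea in BPCG (not the paper's diffusion-based procedure; the 1-D sequence and averaging rule are illustrative assumptions), masked positions can be predicted from known neighbors in both directions and the two predictions fused:</p>

```python
import numpy as np

def bidirectional_fill(values: np.ndarray, known: np.ndarray) -> np.ndarray:
    """Toy bidirectional completion: propagate the last known value forward,
    the next known value backward, and average both predictions at each
    masked position."""
    fwd = values.astype(float).copy()
    for i in range(1, len(values)):          # forward pass
        if not known[i]:
            fwd[i] = fwd[i - 1]
    bwd = values.astype(float).copy()
    for i in range(len(values) - 2, -1, -1): # backward pass
        if not known[i]:
            bwd[i] = bwd[i + 1]
    out = values.astype(float).copy()
    out[~known] = 0.5 * (fwd[~known] + bwd[~known])
    return out

seq = np.array([2.0, 0.0, 0.0, 6.0])    # 0.0 marks masked slots here
mask = np.array([True, False, False, True])
filled = bidirectional_fill(seq, mask)  # [2.0, 4.0, 4.0, 6.0]
```

<p>BPCG performs this kind of completion in the diffusion model's latent space under constraint guidance; the sketch only illustrates why conditioning on both directions constrains the filled-in content more tightly than a single-direction prediction.</p>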

<figure>
  <img src="/images/posts/region-adaptive-diffusion-compression-20250617/overview.png" alt="Overview of the MRIDC framework" style="max-width:100%;" />
  <figcaption>Overall framework of MRIDC</figcaption>
</figure>

<h3 id="3-实验设置--实现细节">3. Experimental Setup / Implementation Details</h3>
<ul>
  <li><strong>Experimental benchmarks</strong>: experiments are conducted on mainstream public datasets for extreme-low bitrate image compression, comparing against current state-of-the-art generative compression methods and traditional codecs;</li>
  <li><strong>Evaluation metrics</strong>: comprehensive evaluation along three dimensions: <strong>perceptual quality</strong> (e.g., LPIPS, SSIM, subjective MOS scores), <strong>rate-distortion performance</strong> (RD curves), and <strong>key-region feature consistency</strong>;</li>
  <li><strong>Core verification</strong>: validating MRIDC's region-adaptive resource allocation, local texture reconstruction, and generalization across different extreme-low bitrates.</li>
</ul>

<h3 id="4-结果与分析">4. Results and Analysis</h3>
<ol>
  <li><strong>SOTA performance</strong>: MRIDC achieves <strong>state-of-the-art perceptual compression quality</strong> at extreme-low bitrates, significantly outperforming the comparison methods on objective metrics such as LPIPS and SSIM as well as on subjective MOS scores;</li>
  <li><strong>Key-region fidelity</strong>: through region-adaptive resource allocation and constrained reconstruction, the model effectively preserves feature consistency in key image regions, resolving the local texture distortion and feature loss of existing methods;</li>
  <li><strong>Rate-distortion-perception optimization</strong>: the model markedly <strong>advances the perceptual rate-distortion curve</strong>, delivering higher perceptual quality at the same bitrate and lower bitrate at the same perceptual quality, establishing a new benchmark for balancing compression efficiency and visual fidelity;</li>
  <li><strong>Module effectiveness</strong>: ablation studies verify the necessity of the MLM and BPCG modules; removing either degrades resource-allocation efficiency, perceptual quality, and local consistency.</li>
</ol>

<figure>
  <img src="/images/posts/region-adaptive-diffusion-compression-20250617/results.png" alt="Quantitative results" style="max-width:100%;" />
  <figcaption>Quantitative results</figcaption>
</figure>

<figure>
  <img src="/images/posts/region-adaptive-diffusion-compression-20250617/mask_type.png" alt="Different region-adaptive masking strategies" style="max-width:100%;" />
  <figcaption>Different region-adaptive masking strategies</figcaption>
</figure>

<figure>
  <img src="/images/posts/region-adaptive-diffusion-compression-20250617/ablation.png" alt="Qualitative ablation results" style="max-width:100%;" />
  <figcaption>Qualitative ablation results</figcaption>
</figure>

<h2 id="参考资料">References</h2>
<ul>
  <li><a href="https://openaccess.thecvf.com/content/CVPR2025/html/Xu_Decouple_Distortion_from_Perception_Region_Adaptive_Diffusion_for_Extreme-low_Bitrate_CVPR_2025_paper.html">Decouple Distortion from Perception (CVPR 2025 OpenAccess)</a></li>
</ul>]]></content><author><name>Jasper (Jintao Chen)</name><email>cjt@stu.pku.edu.cn</email></author><category term="图像压缩" /><category term="扩散生成" /><category term="极低码率" /><category term="感知质量" /><category term="区域自适应" /><category term="CVPR2025" /><summary type="html"><![CDATA[Personal notes: at the time we did not notice that a series of later autoregressive works on visual tokenizers would adopt similar dual-encoder structures; unfortunately, we only explored compression and reconstruction, not generation.]]></summary></entry></feed>