
Commit 2e5f98d

committed
update abstract
1 parent 3ae791d commit 2e5f98d

1 file changed

Lines changed: 1 addition & 1 deletion

File tree

src/constants.ts

@@ -13,7 +13,7 @@ export const RESEARCH_DATA: ProjectData = {
     { name: "Tianfan Xue", affiliation: "CUHK", url: "#" },
     { name: "Shi Guo", affiliation: "Shanghai AI Laboratory", url: "#" }
   ],
-  abstract: "This is a placeholder for the abstract. Replace this text with a concise summary of your research. It should cover the problem statement, the core methodology proposed, and the key experimental results. Usually, this section is kept between 150-250 words to provide a quick overview for the reader.",
+  abstract: "3D reconstruction methods such as 3D Gaussian Splatting (3DGS) and Neural Radiance Fields (NeRF) achieve impressive photorealism but fail when input images suffer from severe motion blur. While event cameras provide high-temporal-resolution motion cues, existing event-assisted approaches rely on low-resolution sensors and strict synchronization, limiting their practicality for handheld 3D capture on common devices, such as smartphones. We introduce a flexible, high-resolution asynchronous RGB–Event dual-camera system and a corresponding reconstruction framework. Our approach first reconstructs sharp images from the event data and then employs a cross-domain pose estimation module based on the Visual Geometry Transformer (VGGT) to obtain robust initialization for 3DGS. During optimization, we employ a structure-driven event loss and view-specific consistency regularizers to mitigate the ill-posed behavior of traditional event losses and deblurring losses, ensuring both stable and high-fidelity reconstruction. We further contribute AsyncEv-Deblur, a new high-resolution RGB–Event dataset captured with our asynchronous system. Experiments demonstrate that our method achieves state-of-the-art performance on both our challenging dataset and existing benchmarks, substantially improving reconstruction robustness under severe motion blur.",
   links: [
     { label: "Paper", url: "#", icon: "pdf" },
     { label: "Code", url: "#", icon: "github" },
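For context, a minimal TypeScript sketch of the object shape the diff is editing. The `ProjectData` interface and the `authors` key name are assumptions inferred from the fields visible in the hunk; the real declarations in src/constants.ts may differ.

```typescript
// Hypothetical sketch of the ProjectData shape, inferred from the diff.
// Field names not shown in the hunk (e.g. "authors") are assumptions.
interface Author {
  name: string;
  affiliation: string;
  url: string;
}

interface ProjectLink {
  label: string;
  url: string;
  icon: string;
}

interface ProjectData {
  authors: Author[];
  abstract: string;
  links: ProjectLink[];
}

const RESEARCH_DATA: ProjectData = {
  authors: [
    { name: "Tianfan Xue", affiliation: "CUHK", url: "#" },
    { name: "Shi Guo", affiliation: "Shanghai AI Laboratory", url: "#" },
  ],
  // The commit replaces this field's placeholder text with the full abstract.
  abstract: "(abstract text from the commit)",
  links: [
    { label: "Paper", url: "#", icon: "pdf" },
    { label: "Code", url: "#", icon: "github" },
  ],
};
```

The commit touches only the `abstract` string; the author and link entries are unchanged context lines.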
