<!DOCTYPE html>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style>
  #full {
    display: none;
  }
</style>
<title>Ziyi Wang</title>
<meta name="author" content="Ziyi Wang">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/icon.png">
</head>
<body>
<table style="width:100%;max-width:850px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:60%;vertical-align:middle">
<p style="text-align:center">
<name>Ziyi Wang</name>
</p>
<p>
I am a fifth-year PhD student in the Department of Automation at Tsinghua University, advised by Prof. <a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>.
In 2020, I obtained my B.Eng. from the Department of Electronic Engineering, Tsinghua University.
I also obtained a B.Admin. as a dual degree from the School of Economics and Management, Tsinghua University.
</p>
<p>
I am broadly interested in computer vision and deep learning. My current research focuses on 3D vision, 3D generation, and 4D world models.
</p>
<p style="text-align:center">
<a href="mailto:wziyi22@mails.tsinghua.edu.cn">Email</a>  / 
<a href="https://scholar.google.com/citations?user=DYHPUXUAAAAJ&hl=en&oi=ao"> Google Scholar</a>  / 
<a href="https://github.com/wangzy22"> Github </a>
</p>
</td>
<td style="padding:2.5%;width:30%;max-width:30%">
<img style="width:50%;max-width:50%" alt="profile photo" src="images/wzy.jpeg">
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>News</heading>
<ul>
<li style="margin: 5px;" >
<b>2025-06:</b> 1 survey paper on vision generalist models is accepted to <a href="https://link.springer.com/journal/11263">IJCV</a>.
</li>
<li style="margin: 5px;" >
<b>2025-02:</b> 1 paper on unified 3D point cloud pre-training is accepted to <a href="https://cvpr.thecvf.com">CVPR 2025</a>.
</li>
<li style="margin: 5px;" >
<b>2024-09:</b> 1 paper on 3D open vocabulary semantic segmentation is accepted to <a href="https://neurips.cc">NeurIPS 2024</a>.
</li>
<li style="margin: 5px;" >
<b>2024-01:</b> The journal paper of <a href="https://arxiv.org/abs/2208.02812">P2P</a> is accepted to <a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=34">TPAMI</a>.
</li>
<li style="margin: 5px;" >
<b>2023-07:</b> 1 paper on 3D generative pre-training is accepted to <a href="https://iccv2023.thecvf.com">ICCV 2023</a>.
</li>
<li style="margin: 5px;" >
<b>2023-07:</b> The journal paper of <a href="https://arxiv.org/abs/2012.00987">PV-RAFT</a> is accepted to <a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=34">TPAMI</a>.
</li>
<li style="margin: 5px;" >
<b>2022-09:</b> 1 paper (spotlight) on 3D prompt learning is accepted to <a href="https://neurips.cc/Conferences/2022">NeurIPS 2022</a>.
</li>
<li style="margin: 5px;" >
<b>2022-03:</b> 1 paper on 3D semantic segmentation is accepted to <a href="https://cvpr2022.thecvf.com/">CVPR 2022</a>.
</li>
<li style="margin: 5px;" >
<b>2021-07:</b> 2 papers (including 1 oral) are accepted to <a href="http://iccv2021.thecvf.com/">ICCV 2021</a>.
</li>
<li style="margin: 5px;" >
<b>2021-03:</b> 1 paper on 3D scene flow estimation is accepted to <a href="http://cvpr2021.thecvf.com/">CVPR 2021</a>.
</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<p><heading>Publications</heading></p>
<p>
* indicates equal contribution
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/VGM.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>Vision Generalist Model: A Survey</papertitle>
<br>
<strong>Ziyi Wang</strong>,
<a href="https://raoyongming.github.io">Yongming Rao</a>,
Shuofeng Sun,
Xinrun Liu,
<a href="https://weiyithu.github.io">Yi Wei</a>,
<a href="https://yuxumin.github.io/">Xumin Yu</a>,
<a href="https://scholar.google.com/citations?user=7npgHqAAAAAJ&hl=en">Zuyan Liu</a>,
<a href="https://yanbo-23.github.io">Yanbo Wang</a>,
Hongmin Liu,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>International Journal of Computer Vision (<strong>IJCV</strong>)</em>, 2025
<br>
<a href="https://arxiv.org/abs/2506.09954">[arXiv]</a>
<br>
<p> We conduct a comprehensive survey on vision generalist models that support multimodal inputs and can handle various downstream tasks.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/OGGSplat.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>OGGSplat: Open Gaussian Growing for Generalizable Reconstruction with Expanded Field-of-View</papertitle>
<br>
<a href="https://yanbo-23.github.io">Yanbo Wang*</a>,
<strong>Ziyi Wang</strong>*,
<a href="https://wzzheng.net">Wenzhao Zheng</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>Preprint</em>.
<br>
<a href="https://arxiv.org/abs/2506.05204">[arXiv]</a>
<a href="https://github.com/Yanbo-23/OGGSplat">[Code]</a>
<a href="https://yanbo-23.github.io/OGGSplat">[Project Page]</a>
<br>
<p> OGGSplat is designed to expand the field of view of Gaussian-based 3D scenes reconstructed from sparse views by feed-forward / generalizable models. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/UniPre3D.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>UniPre3D: Unified Pre-training of 3D Point Cloud Models with Cross-Modal Gaussian Splatting</papertitle>
<br>
<strong>Ziyi Wang</strong>*,
<a href="https://github.com/Zhangyr2022">Yanran Zhang*</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>)</em>, 2025
<br>
<a href="https://arxiv.org/abs/2506.09952">[arXiv]</a>
<a href="https://github.com/wangzy22/UniPre3D">[Code]</a>
<br>
<p> UniPre3D is a unified pre-training method that can be applied to both object-level and scene-level point clouds, built on a cross-modal Gaussian splatting technique. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/XMask3D.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>XMask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation</papertitle>
<br>
<strong>Ziyi Wang</strong>*,
<a href="https://yanbo-23.github.io">Yanbo Wang*</a>,
<a href="https://yuxumin.github.io/">Xumin Yu</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>Conference on Neural Information Processing Systems (<strong>NeurIPS</strong>)</em>, 2024
<br>
<a href="https://arxiv.org/abs/2411.13243">[arXiv]</a>
<a href="https://github.com/wangzy22/XMask3D">[Code]</a>
<br>
<p> XMask3D is a framework that proposes mask-level reasoning techniques to empower 3D segmentation models with open-vocabulary capacity, with the assistance of a pre-trained 2D mask generator. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/P2P++.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>Point-to-Pixel Prompting for Point Cloud Analysis With Pre-Trained Image Models</papertitle>
<br>
<strong>Ziyi Wang</strong>,
<a href="https://raoyongming.github.io">Yongming Rao</a>,
<a href="https://yuxumin.github.io/">Xumin Yu</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>)</em>, 2024
<br>
<a href="https://ieeexplore.ieee.org/document/10400940">[IEEE]</a>
<a href="https://github.com/wangzy22/P2P">[Code]</a>
<a href="https://p2p.ivg-research.xyz/">[Project Page]</a>
<br>
<p> P2P++ is the extended journal version of <a href="https://arxiv.org/abs/2208.02812">P2P</a>. We further propose Pixel-to-Point Distillation to make P2P applicable to scene-level perception tasks. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/DPV-RAFT.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>3D Point-Voxel Correlation Fields for Scene Flow Estimation</papertitle>
<br>
<strong>Ziyi Wang</strong>*,
<a href="https://weiyithu.github.io">Yi Wei</a>*,
<a href="https://raoyongming.github.io">Yongming Rao</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>)</em>, 2023
<br>
<a href="https://ieeexplore.ieee.org/document/10178057">[IEEE]</a>
<a href="https://github.com/weiyithu/PV-RAFT">[Code]</a>
<a href="https://pvraft.ivg-research.xyz">[Project Page]</a>
<br>
<p> DPV-RAFT is the extended journal version of <a href="https://arxiv.org/abs/2012.00987">PV-RAFT</a>. We further propose Spatial Deformation and Temporal Deformation to enhance PV-RAFT. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/tap.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models</papertitle>
<br>
<strong>Ziyi Wang</strong>*,
<a href="https://yuxumin.github.io/">Xumin Yu</a>*,
<a href="https://raoyongming.github.io">Yongming Rao</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>IEEE International Conference on Computer Vision (<strong>ICCV</strong>)</em>, 2023
<br>
<a href="https://arxiv.org/abs/2307.14971">[arXiv]</a>
<a href="https://github.com/wangzy22/TAP">[Code]</a>
<a href="http://tap.ivg-research.xyz">[Project Page]</a>
<br>
<p> TAP is a 3D-to-2D generative pre-training method that generates projected images of point clouds from instructed perspectives. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/p2p.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting</papertitle>
<br>
<strong>Ziyi Wang</strong>*,
<a href="https://yuxumin.github.io/">Xumin Yu</a>*,
<a href="https://raoyongming.github.io">Yongming Rao</a>*,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>Conference on Neural Information Processing Systems (<strong>NeurIPS</strong>)</em>, 2022
<br>
<font color="red"><strong>Spotlight</strong></font>
<br>
<a href="http://arxiv.org/abs/2208.02812">[arXiv]</a>
<a href="https://github.com/wangzy22/P2P">[Code]</a>
<a href="https://p2p.ivg-research.xyz/">[Project Page]</a>
<a href="https://zhuanlan.zhihu.com/p/558286235">[中文解读]</a>
<br>
<p> P2P is a framework to leverage large-scale pre-trained image models for 3D point cloud analysis. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/SemAffine.png" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation</papertitle>
<br>
<strong>Ziyi Wang</strong>,
<a href="https://raoyongming.github.io">Yongming Rao</a>,
<a href="https://yuxumin.github.io/">Xumin Yu</a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>
<br>
<em>IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>)</em>, 2022
<br>
<a href="http://arxiv.org/abs/2205.13490">[arXiv]</a>
<a href="https://github.com/wangzy22/SemAffiNet">[Code]</a>
<br>
<p> We present the Semantic-Affine Transformation, which transforms mid-level decoder features of an encoder-decoder segmentation network with class-specific affine parameters.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/PoinTr.gif" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers</papertitle>
<br>
<a href="https://yuxumin.github.io/">Xumin Yu</a>*,
<a href="https://raoyongming.github.io">Yongming Rao</a>*,
<strong>Ziyi Wang</strong>, Zuyan Liu,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>
<br>
<em>IEEE International Conference on Computer Vision (<strong>ICCV</strong>)</em>, 2021
<br>
<font color="red"><strong>Oral Presentation</strong></font>
<br>
<a href="https://arxiv.org/abs/2108.08839">[arXiv]</a>
<a href="https://github.com/yuxumin/PoinTr/">[Code]</a>
<a href="https://zhuanlan.zhihu.com/p/401928647">[中文解读]</a>
<br>
<p> PoinTr is a transformer-based framework that reformulates point cloud completion as a set-to-set translation problem. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/DIML.gif" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle> Towards Interpretable Deep Metric Learning with Structural Matching
</papertitle>
<br>
<a href="https://wl-zhao.github.io/"> Wenliang Zhao</a>*,
<a href="https://raoyongming.github.io">Yongming Rao</a>*,
<strong>Ziyi Wang</strong>,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>,
<a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>
<br>
<em>IEEE International Conference on Computer Vision (<strong>ICCV</strong>)</em>, 2021
<br>
<a href="https://arxiv.org/abs/2108.05889">[arXiv]</a> <a href="https://github.com/wl-zhao/DIML">[Code]</a>
<br>
<p> We present deep interpretable metric learning (DIML), which adopts a structural matching strategy to explicitly align spatial embeddings by computing an optimal matching flow between the feature maps of two images. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/PV_RAFT.jpg" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds</papertitle>
<br>
<a href="https://weiyithu.github.io"> Yi Wei </a>*,
<strong>Ziyi Wang*</strong>,
<a href="https://raoyongming.github.io">Yongming Rao</a>*,
<a href="http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/"> Jiwen Lu </a>, <a href="https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en&authuser=1"> Jie Zhou </a>
<br>
<em>IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>)</em>, 2021
<br>
<a href="https://arxiv.org/abs/2012.00987">[arXiv]</a> <a href="https://github.com/weiyithu/PV-RAFT">[Code]</a>
<br>
<p> We present point-voxel correlation fields for 3D scene flow estimation, which migrate the high performance of RAFT to point clouds and provide a solution for building structured all-pairs correlation fields on unstructured point clouds. </p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Teaching</heading>
<ul>
<li style="margin: 5px;"> Teaching Assistant, Computer Vision, 2024 Spring Semester</li>
<li style="margin: 5px;"> Teaching Assistant, Pattern Recognition and Machine Learning, 2022 Fall Semester</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Honors and Awards</heading>
<ul>
<li style="margin: 5px;"> 2024 National Scholarship, Tsinghua University</li>
<li style="margin: 5px;"> 2023 ChangXin Memory Scholarship, Tsinghua University</li>
<li style="margin: 5px;"> 2023 CVPR Outstanding Reviewer</li>
<li style="margin: 5px;"> 2021 Haining Talent Scholarship, Tsinghua University</li>
<li style="margin: 5px;"> 2020 Excellent graduation thesis, Tsinghua University</li>
<li style="margin: 5px;"> 2018 Zheng Geru Scholarship, Tsinghua University</li>
<li style="margin: 5px;"> 2017 Hongqian Electronics Scholarship, Tsinghua University</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:right;font-size:small;">
<a href="https://jonbarron.info/">Website Template</a>
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
<p><center>
<div id="clustrmaps-widget" style="width:5%">
<script type="text/javascript" id="clstr_globe" src="//clustrmaps.com/globe.js?d=9CxOMjcpU9w-plJyTUCLdeFnIgwW-GgMgaWu0l1B-xk"></script>
</div>
<br>
© Ziyi Wang | Last updated: Jun 12, 2025
</center></p>
</body>
</html>