---
# static info
layout: task
year: 2026
hide: false

# required info
title: "Memorability: Predicting movie and commercial memorability"
subtitle: Memorability
blurb: "The goal of this task is to study long-term memory performance when recognising short movie excerpts or commercial videos. We provide the videos, precomputed features, and EEG features for the challenges proposed in the task: how memorable is a video, is a person familiar with a video, and can you predict brand memorability?"
---

<!-- # please respect the structure below -->
*See the [MediaEval 2026 webpage](https://multimediaeval.github.io/editions/2026/) for information on how to register and participate.*

#### Task description

The goal of this task is to study long-term memory performance when recognising short movie excerpts or commercial videos. We provide the videos, precomputed features, and EEG features for the challenges proposed in the task: how memorable is a video, is a person familiar with a video, and can you predict brand memorability?

**Subtask 1: Movie Memorability.** This subtask studies long-term memory performance when recognising short movie excerpts.
* _Challenge 1.1: How memorable is this video (movie excerpts)?_ - Video-based prediction: The goal of this challenge is to predict how memorable a video is based on movie excerpts. Participants are expected to develop automatic systems that predict the memorability scores of new videos. The memorability score indicates the probability of a video being remembered by viewers. To achieve this, participants will use a subset of the Movie Memorability dataset, which includes videos and their corresponding memorability scores. Participants are free to use only the modalities relevant to their approach, enabling a broad range of methodologies.
* _Challenge 1.2: Is this person familiar with this video?_ - EEG-based detection of recall: This challenge requires participants to automatically detect whether a person remembers a video from a movie they previously watched. To do this, participants may use only features extracted from the EEG data, without using any features from the videos themselves.

**Subtask 2: Commercial/Ad Memorability.** This subtask evaluates long-term memory performance when recognising commercial videos. Participants will use the VIDEM dataset, which contains commercial videos along with their memorability and brand memorability scores, to train their systems. The trained models will then predict the scores for new, unseen commercial videos (product, brand, and concept presentations and discussions). This subtask does not include EEG data.
* _Challenge 2.1: How memorable is this commercial video?_ - Video-based prediction: As in Challenge 1.1, the goal of this challenge is to predict how memorable a commercial video is. Participants are expected to develop automatic systems that predict the memorability scores of commercial videos. The memorability score indicates the probability of a commercial video being remembered by viewers.
* _Challenge 2.2: Can you predict the brand memorability?_ - Video-based prediction: The goal of this challenge is to predict the brand memorability associated with a commercial video. Participants are expected to develop automatic systems that predict the brand memorability score based on the content of the commercial video. This score indicates the probability of a commercial video's brand being remembered by viewers.

Participating teams will write short working-notes papers that are published in the MediaEval Workshop Working Notes Proceedings. We welcome two types of papers: first, conventional benchmarking papers, which describe the methods that the teams use to address the task and analyse the results; and second, "Quest for Insight" papers, which address a question aimed at gaining more insight into the task but do not necessarily present task results. Example questions for "Quest for Insight" papers are listed below.

#### Motivation and background

In an era where visual content, such as movies and commercials, permeates our daily lives, understanding and predicting the memorability of multimedia content is becoming increasingly important. For marketers, filmmakers, and content creators, selecting and designing media that effectively captures attention and leaves a lasting impression is crucial for success. Commercials, in particular, need to engage viewers immediately and remain memorable to drive brand recognition and influence consumer behaviour. However, the potential applications of memorability prediction extend beyond the commercial and advertising sectors.

This task aims to develop models that predict the memorability of multimedia content by leveraging various content features. While the results can directly benefit professionals in advertising and film, the insights gained can also be applied to other fields, such as education, content retrieval, and beyond. For instance, educators can use memorability predictions to create more engaging learning materials, while content retrieval systems can enhance search and recommendation accuracy by prioritising content with higher memorability potential.

This year's task extends the state of the art by focusing on the memorability of multimedia content within the specific domains of movies and commercials. While previous research has explored the general memorability of videos and images, there has been limited focus on how this concept applies to the nuanced structure of films and advertisements. By addressing this gap, we aim to deepen our understanding of how human cognition interacts with multimedia, providing valuable insights into what makes content memorable and how it can be optimised for various applications across different industries, including both commercial and non-commercial use cases.

_New for 2026._

For the 2026 edition, we are enhancing the provided datasets by releasing a set of **semantic annotations, contextual information, and informative attributes** for the existing video samples. By keeping the video set consistent but enriching the available data, we aim to encourage a more granular analysis of how specific video elements influence memorability.

#### Target group

Researchers interested in this task include, though are not limited to, those working in areas such as human perception, multimedia content analysis, cognitive science, and machine learning, particularly in the fields of image and video analysis, memorability, emotional response to media, aesthetics, and multimedia affective computing.

This includes scholars focused on predictive modelling, user experience, and the cognitive impact of media, with a specific interest in movies, commercials, and educational content. Signal processing researchers can also bring valuable insights to this task by leveraging EEG signals to enhance memorability prediction models. Additionally, researchers exploring content retrieval, recommendation systems, and multimedia interaction, as well as those studying the influence of media on memory and learning, will find the task valuable. It will also appeal to those working on improving machine learning algorithms for content classification and understanding, especially in the video and image domains, and those interested in applying these models across both commercial and non-commercial media, including educational and informational content.

#### Data

One dataset will be provided for each subtask.

For Subtask 1, a subset of the Movie Memorability dataset will be used. This is a collection of movie excerpts and corresponding ground-truth files based on measurements of long-term memory performance when recognising short movie excerpts from weeks to years after having viewed them. It is accompanied by audio and video features extracted from the movie excerpts. EEG data will also be provided, recorded while 27 participants viewed a subset of clips from the dataset. The clips were selected to include both previously seen and unseen movies. After viewing each clip, participants were asked whether they remembered seeing it before. In total, 3484 epochs of 64-channel EEG data are available, of which 2122 were not recognised and 1362 were remembered.

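Given the class imbalance above (2122 not-recognised vs. 1362 remembered epochs), a useful sanity check for Challenge 1.2 is the majority-class baseline accuracy that any EEG classifier should beat. A minimal sketch, using only the epoch counts stated in the dataset description:

```python
# Majority-class baseline for the EEG recall-detection challenge (Challenge 1.2).
# Epoch counts are taken from the dataset description above.
not_recognised = 2122
remembered = 1362
total = not_recognised + remembered  # 3484 epochs in total

# A classifier that always predicts the majority class ("not recognised")
# achieves this accuracy; any submitted model should exceed it.
baseline_accuracy = not_recognised / total
print(f"Majority-class baseline accuracy: {baseline_accuracy:.3f}")  # -> 0.609
```

A chance-corrected metric (or per-class accuracy) may therefore be more informative than raw accuracy alone when comparing systems on this challenge.
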
For Subtask 2, the VIDEM (VIDeo Effectiveness and Memorability) dataset will be used. It focuses on video and brand memorability in commercial advertisements, including some educational or explanatory videos. It was developed through a university-business collaboration between the University of Essex and Hub, with support from Innovate UK’s Knowledge Transfer Partnership (grant agreement No. 11071). This is a collection of commercial advertisements and corresponding ground-truth files based on measurements of long-term memory performance when recognising them 24 to 72 hours after having viewed them. Each video is accompanied by metadata such as title, description, number of views, and duration, as well as audio and video features extracted from the commercial advertisements. The dataset consists of 424 commercial videos sampled from a larger collection of 4791 videos published on YouTube between June 2018 and June 2021. Video lengths range from 7 seconds to 94 minutes; for longer videos, viewers watched only the first minute.

#### Evaluation methodology

Submissions for the video-based prediction challenges will be evaluated using Spearman's rank correlation coefficient. Additional metrics, such as Mean Squared Error (MSE), may also be used to assess prediction accuracy. For Challenge 1.2 (EEG-based detection of recall), submissions will be evaluated based on classification accuracy.

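To illustrate the primary metric, the sketch below computes Spearman's rank correlation from scratch as the Pearson correlation of the ranks (with average ranks for ties); the memorability scores in it are purely illustrative, not taken from the dataset.

```python
# Spearman's rank correlation between ground-truth and predicted memorability
# scores, computed as the Pearson correlation of the ranks.

def ranks(values):
    """Return average 1-based ranks of the values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

ground_truth = [0.91, 0.45, 0.78, 0.62, 0.83]   # hypothetical scores
predicted    = [0.88, 0.40, 0.70, 0.65, 0.95]
print(f"Spearman's rho: {spearman(ground_truth, predicted):.3f}")  # -> 0.900
```

Because the metric depends only on the ordering of the predicted scores, systems are free to output scores on any monotonic scale; in practice participants may of course use a library implementation such as `scipy.stats.spearmanr` instead.
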
#### Quest for insight

* How do factors like the emotional content, subject matter, or cultural context of media influence its memorability?
* How well do machine-predicted memorability scores align with human cognitive processes involved in memory formation?
* Is there a relationship between the aesthetic quality of media and its memorability, or do these factors function independently?
* Is there a difference between what causes memory recall in movie clips versus what causes memory recall in commercial videos?
* What transformations or enhancements can be applied to media content to increase its memorability without altering its core message?
* Which EEG signals (e.g., specific frequency bands or event-related potentials) are most predictive of media memorability?
* To what extent do EEG patterns associated with memorable media generalize across different individuals?
* What are the differences between subject-specific and subject-agnostic models in the EEG classification task?

#### References and recommended reading

[1] 2018. R. Cohendet, K. Yadati, N. Q. Duong and C.-H. Demarty. [Annotating, understanding, and predicting long-term video memorability](https://dl.acm.org/doi/abs/10.1145/3206025.3206056). In Proceedings of the ICMR 2018 Conference, Yokohama, Japan, June 11-14, 2018.

[2] 2025. R. S. Kiziltepe, S. Sahab, R. Valladares Santana, F. Doctor, K. Paterson, D. Hunstone and A. García Seco de Herrera. VIDEM: VIDeo Effectiveness and Memorability Dataset. In Proceedings of the 18th International Work-Conference on Artificial Neural Networks (IWANN 2025), A Coruña, Spain, June 16–18, 2025.

[3] 2014. P. Isola, J. Xiao, D. Parikh, A. Torralba and A. Oliva. [What makes a photograph memorable?](https://ieeexplore.ieee.org/document/6629991/) IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 7 (2014), 1469–1482.

[4] 2023. T. Dumont, J. S. Hevia and C. L. Fosco. [Modular memorability: Tiered representations for video memorability prediction](https://openaccess.thecvf.com/content/CVPR2023/papers/Dumont_Modular_Memorability_Tiered_Representations_for_Video_Memorability_Prediction_CVPR_2023_paper.pdf). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), pp. 10751-10760.

[5] 2025. P. Kumar et al. [Eye vs. AI: Human gaze and model attention in video memorability](https://arxiv.org/pdf/2311.16484). In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 2025.

[6] 2025. H. Si et al. [Long-term memorability on advertisements](https://arxiv.org/pdf/2309.00378). In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 2025.

#### Task organizers

* Alba García Seco de Herrera, UNED, Spain (lead task organiser);
* Sebastian Halder, Ana Matran-Fernandez, University of Essex, UK;
* Mihai Gabriel Constantin, Bogdan Ionescu, University Politehnica of Bucharest, Romania;
* Claire-Hélène Demarty, InterDigital, R&I, France;
* Rukiye Savran Kiziltepe, Ankara University, Türkiye;
* Iván Martín-Fernández, Manuel Gil Martín, Technical University of Madrid (UPM), Spain;
* Aashutosh Ganesh, Maastricht University, The Netherlands.

| 117 | + |
| 118 | +#### Task schedule |
| 119 | + |
| 120 | +The program will be updated with the exact dates. |
| 121 | + |
| 122 | +#### Acknowledgements |
| 123 | + |
| 124 | +More details will follow. |