FME-24 Dataset

Here are the available CSV files for the FME-24 dataset. You can download them by clicking the respective download buttons.

The first four CSVs (March 2026 Update), listed at the top of the page, are the most up-to-date files. They are used for regression analysis of V-A prediction and for participant demographics and GoldMSI responses.

The second set of CSVs (Sept 2025 Update) is in a different format for categorical analysis based on emotion-sentence semantics: features are extracted before and after each emotion-change click, and composite features are included. This set is dated and not advised for use.

MARCH 2026 Update: CSV for continuous emotion analysis V-A (regression)

librosa_features_full_fme_dataset_MARCH_2026_NC.csv

Main CSV: This file contains 1784 rows and 91 columns, including 78 acoustic, spectral, harmonic, and rhythm features extracted from 422 audio excerpts across 98 participants and 275 unique songs. It provides a comprehensive dataset for audio-emotion and music analysis, with V-A annotations, timestamps, familiarity ratings, and emotion sentences.

fme-demographics-music-sophistication.csv

Participants info: GoldMSI and Demographics CSV. This CSV contains GoldMSI responses and demographic information about the participants.

per-song-fme.csv

This CSV contains the raw data in a different format: each row is one excerpt, with missing data and V-A coordinates stored together as arrays rather than in separate columns. This is how the data was extracted, unprocessed, from the online experiment.
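Since each row of per-song-fme.csv stores its V-A coordinates as an array rather than one value per column, the arrays need to be unpacked before column-wise analysis. The exact serialization below (a string-encoded list of pairs) is an assumption for illustration; adjust the parser to match the actual file. A minimal sketch:

```python
import ast

# Hypothetical array cell from per-song-fme.csv: V-A coordinates for one
# excerpt stored as a single string-encoded list of [valence, arousal]
# pairs. The serialization is assumed here; if the file stores JSON,
# use json.loads instead of ast.literal_eval.
raw_coords = "[[0.42, -0.10], [0.55, 0.03], [0.61, 0.12]]"

# Safely parse the string into a Python list of pairs.
coords = ast.literal_eval(raw_coords)

# Flatten into per-sample records, ready for a long-format DataFrame.
records = [{"valence": v, "arousal": a} for v, a in coords]
print(len(records))   # number of V-A samples in this excerpt
```

With pandas, the same parser can be applied to the whole column via `df["coords"].apply(ast.literal_eval)` before exploding rows.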

film-emotion-music-datasheet.csv

This CSV contains metadata on the music, films, and excerpts, as well as the quality of the film music used.

ALL 2026 CSVs

SEPT 2025 Update: CSVs for emotion category analysis (not V-A)

fme-24-features-emotions-after.csv

The after CSV contains extracted audio features, arousal-valence values, and annotated emotion categories for each 2-second segment following participants’ emotion-change clicks.

fme-24-features-emotions-before.csv

The before CSV contains extracted audio features, arousal-valence values, annotated emotion categories, and emotion sentences for each 2-second segment immediately preceding participants’ emotion-change clicks.

OLDER VERSIONS: