Personalized Prediction Clothing Algorithm Based on Multi-modal Feature Fusion

Rong, Liu (2024) Personalized Prediction Clothing Algorithm Based on Multi-modal Feature Fusion. Masters thesis, Universiti Malaysia Sarawak.

PDF: Thesis Master_Liu Rong.pdf (5MB)
Official URL: https://doi.org/10.46604/ijeti.2024.13394

Abstract

Information about clothing products has grown steadily in recent years, thanks to advances in information technology and improvements in material standards. With this surge in products, fashion consumers struggle to choose clothing that meets their needs from such massive data. Previous research predicted only single clothing attributes, and the resulting models generalized poorly, which led to low personalized prediction accuracy and made it difficult to meet fashion consumers' personalized needs. Therefore, this thesis proposes a multi-modal fusion algorithm for personalized clothing prediction. The algorithm is designed to help consumers make more informed purchasing decisions by analysing their personal preferences and predicting fashion clothing categories. An analysis of Nunalie's sales from October 2016 to December 2019 is presented using the publicly available real sales dataset, Visuelle. The dataset contains 5,355 clothing products and 45 MB of sales data. In addition to unstructured image data, the structured data consist of 21 columns, including 1-12 weeks of clothing sales as well as season, color, fabric, day, week, month, year, label, and category. This thesis proposes four deep convolutional neural network (CNN) models, namely TCN-ResNet, TCN-2DCNN, 1DCNN-ResNet, and 1DCNN-2DCNN, which integrate the multimodal features of clothing images and sales text data. A comparison of prediction accuracy shows that the 1DCNN-2DCNN and 1DCNN-ResNet models perform best in clothing prediction. Cross-validation was performed to assess the generalization ability of these two models. The experimental results indicate that the 1DCNN-2DCNN model exhibits superior generalization, achieving a recall of 97.20%, an F1 score of 98.60%, a macro average of 98.62%, a weighted average of 98.63%, and an accuracy of 98.59%. Personalized prediction and clothing classification are thereby achieved, and the analysis of the hybrid models demonstrates the superiority of the proposed model in addressing personalized preferences.
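The two-branch fusion idea can be illustrated with a minimal Python/PyTorch sketch. This is an illustrative assumption, not the thesis's exact architecture or hyperparameters: a 1D CNN branch encodes the 12-week sales sequence, a 2D CNN branch encodes the product image, and the concatenated features feed a classifier over clothing categories. Input sizes, channel counts, and the number of categories are placeholders.

import torch
import torch.nn as nn


class SalesBranch1DCNN(nn.Module):
    """Encodes a (batch, channels, weeks) sales sequence with 1D convolutions."""

    def __init__(self, in_channels: int = 1, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis -> (batch, feat_dim, 1)
            nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)


class ImageBranch2DCNN(nn.Module):
    """Encodes a (batch, 3, H, W) clothing image with 2D convolutions."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (batch, feat_dim, 1, 1)
            nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)


class FusionClassifier(nn.Module):
    """Concatenates the two branch features and predicts a clothing category."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.sales = SalesBranch1DCNN()
        self.image = ImageBranch2DCNN()
        self.head = nn.Sequential(
            nn.Linear(64 + 128, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, sales_seq, image):
        fused = torch.cat([self.sales(sales_seq), self.image(image)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = FusionClassifier(num_classes=10)
    sales = torch.randn(4, 1, 12)         # 4 items, 12 weeks of sales each
    images = torch.randn(4, 3, 224, 224)  # 4 RGB product images
    logits = model(sales, images)
    print(logits.shape)                   # torch.Size([4, 10])

Running the script prints the logits shape for a batch of four products. The TCN-ResNet, TCN-2DCNN, and 1DCNN-ResNet variants named in the abstract would, under the same fusion scheme, swap the respective branch for a temporal convolutional network or a ResNet backbone.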

Item Type: Thesis (Masters)
Uncontrolled Keywords: Fashion consumers, image data, clothing categories, personalized, multimodal
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Divisions: Academic Faculties, Institutes and Centres > Faculty of Engineering
Faculties, Institutes, Centres > Faculty of Engineering
Depositing User: LIU RONG
Date Deposited: 06 Feb 2025 05:55
Last Modified: 06 Feb 2025 05:55
URI: http://ir.unimas.my/id/eprint/47418
