
Thinking is Like Processing a Sequence of Spatial and Temporal Words

Ignazio Gallo; Silvia Corchs
2024-01-01

Abstract

Leveraging state-of-the-art advances from natural language processing (NLP), this paper pioneers the application of Transformer neural networks to decoding electroencephalographic (EEG) data for cognitive analysis. Harnessing the power of Transformers, traditionally used for language understanding, this study introduces NetTraST (Network Transformer Spatio-Temporal), a model for EEG classification tasks. NetTraST integrates the paradigm-shifting capabilities of Transformer architectures with Conv1D layers in a dual-branch design: one branch extracts and processes spatial EEG features, capturing intricate neural spatial dynamics, while the other models temporal relationships within EEG sequences, illuminating the nuanced temporal dynamics crucial for cognitive analysis. The Transformer architecture enables the assimilation of cutting-edge techniques from the NLP domain and empowers the model to capture complex EEG patterns. Rigorous evaluation across diverse datasets (Thinking Out Loud, Kumar's, and Kaneshiro's EEG datasets) demonstrates NetTraST's superiority over existing EEG classification methods not based on Transformer networks, with accuracy gains that surpass previous benchmarks on multiple cognitive tasks. Furthermore, an in-depth ablation study underscores the pivotal roles played by the two branches, shedding light on their contributions to overall performance. The introduction of NetTraST marks a significant stride, not only harnessing the potential of Transformer architectures from NLP but also emphasizing the value of a dual-branch approach in EEG analysis. The data and implementation code used in this research are available at https://github.com/ignaziogallo/gallo-IJCNN2024
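The dual-branch idea described in the abstract can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the paper's exact NetTraST implementation: the class name, layer sizes, and pooling choices are assumptions. One branch applies Conv1D across the time axis of the multi-channel EEG signal, the other treats each time step (a vector of channel values) as a token for a Transformer encoder, and the two feature vectors are concatenated before classification.

```python
# Hypothetical dual-branch EEG classifier sketch (illustrative only; the
# actual NetTraST architecture in the paper may differ in depth and sizes).
import torch
import torch.nn as nn


class DualBranchEEGClassifier(nn.Module):
    def __init__(self, n_channels=64, d_model=64, n_classes=5):
        super().__init__()
        # Conv1D branch: convolve along the time axis of each EEG channel,
        # then global-average-pool to a single feature vector per trial.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, d_model, 1)
        )
        # Transformer branch: project each time step's channel vector to
        # d_model and let self-attention relate time steps to one another.
        self.proj = nn.Linear(n_channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Classification head over the concatenated branch outputs.
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_timesteps)
        t = self.temporal(x).squeeze(-1)  # (batch, d_model)
        s = self.transformer(self.proj(x.transpose(1, 2)))  # (batch, T, d_model)
        s = s.mean(dim=1)  # mean-pool tokens -> (batch, d_model)
        return self.head(torch.cat([t, s], dim=-1))  # (batch, n_classes)


model = DualBranchEEGClassifier()
logits = model(torch.randn(2, 64, 128))  # 2 trials, 64 channels, 128 samples
print(logits.shape)  # torch.Size([2, 5])
```

The concatenation at the end is what makes the design "dual-branch": spatial/attention features and convolutional temporal features contribute jointly to the final decision, which is the property the paper's ablation study examines.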
2024
2024 International Joint Conference on Neural Networks (IJCNN)
Yokohama, Japan
June 30th to July 5th, 2024

Use this identifier to cite or link to this document: https://hdl.handle.net/11383/2174291