
EEG segmentation and denoising: which one should be done first?


Because of the nature of EEG, researchers often use windowing/segmentation methods in EEG analysis.

I am currently working on a sleep study in which I need to analyze EEG data, and I have run into a problem.

My problem: should I filter and denoise my EEG signal and then segment it, or should I first segment my EEG and then filter and denoise each epoch separately?

And as an extra question: is there any difference between these two approaches from a signal processing point of view?


Depending on whom you ask, you get both answers. It mostly depends on the denoising algorithm. If you are using an adaptation of GST, you segment first with an n-1 overlap.

If you are using S3P, I would recommend not segmenting first, since (in my experience) there is a slight difference, with unsegmented data giving results slightly closer to Fernandez and Li, 2003 and Frangakis and Hegerl, 2001.
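
As a rough illustration of the signal-processing difference the question asks about (independent of the GST/S3P pipelines mentioned in the answer), here is a minimal Python sketch comparing the two orders of operations on a synthetic single-channel recording. The sampling rate, the 4th-order Butterworth band-pass, and the 30 s epoch length are assumptions chosen for illustration only; the point is that filtering each epoch separately introduces edge transients at the epoch boundaries, while filtering the continuous recording before cutting it does not.

```python
"""Minimal sketch (not from the thread): compare 'filter then segment'
with 'segment then filter' on a synthetic EEG-like signal.
The filter design and the 30 s epoch length are illustrative assumptions."""
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256                      # assumed sampling rate in Hz
epoch_len = 30 * fs           # 30 s epochs, as common in sleep staging
n_epochs = 10
rng = np.random.default_rng(0)
x = rng.standard_normal(n_epochs * epoch_len)   # stand-in for one EEG channel

b, a = butter(4, [0.5, 30], btype="bandpass", fs=fs)

# Order A: filter the continuous recording, then cut into epochs.
epochs_a = filtfilt(b, a, x).reshape(n_epochs, epoch_len)

# Order B: cut into epochs, then filter each epoch separately.
epochs_b = np.array([filtfilt(b, a, seg)
                     for seg in x.reshape(n_epochs, epoch_len)])

# The two orders differ mainly near the epoch boundaries, where the
# per-epoch filter has no neighbouring samples and produces edge transients.
err = np.abs(epochs_a - epochs_b)
edge = np.concatenate([err[:, :fs], err[:, -fs:]], axis=1).mean()
middle = err[:, fs:-fs].mean()
print(f"mean |difference| near epoch edges : {edge:.4f}")
print(f"mean |difference| in epoch interiors: {middle:.4f}")
```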


Conclusion

The current study shows that, although there is increasing evidence that categorization often results from feedforward processing (Thorpe et al., 1996; Riesenhuber and Poggio, 2000; Liu et al., 2002; Serre et al., 2007) while segmentation requires recurrent processing (Lamme and Roelfsema, 2000; Appelbaum et al., 2006; Fahrenfort et al., 2012), segmentation nevertheless precedes category-selective responses when objects lack low-level image properties to aid in fast categorization. Our results increase the understanding of the inter-relation between segmentation and categorization, and of the speed of category-selective responses under varying circumstances.


Introduction

Humans and other foveate animals – such as monkeys and birds of prey – visually scan scenes with a characteristic fixate-saccade-fixate pattern: periods of relative stability are interspersed with rapid shifts of gaze. During “fixation” the visual axis (and high-resolution foveola) is directed to an object or location of interest. For humans, the duration of the periods of stability is on the order of 0.2–0.3 s, depending on a number of factors such as task and stimulus complexity. The typical duration of saccadic eye movements is on the order of 0.01–0.1 s, depending systematically on the amplitude of the movement 1.

If the scene contains moving target objects, or when the observer is moving through it, then stabilization of gaze on a focal object or location requires a “tracking fixation”, i.e. a smooth pursuit eye movement. Here the eye rotates to keep gaze fixed on the target. Also, when the observer’s head is bouncing due to locomotion or external perturbations, gaze stabilization involves vestibulo-ocular and optokinetic compensatory eye movements. In natural behavior, all the eye movement “types” mentioned above are usually simultaneously present, and cannot necessarily be differentiated from one another in terms of oculomotor properties or underlying neurophysiology 2,3,4,5 .

It is possible to more or less clearly experimentally isolate each of the aforementioned “types” in experiments that tightly physically constrain the visible stimuli and the patterns of movement the subject is allowed to make. Much of what we know about oculomotor control circuits is based on such laboratory experiments where the participant’s head is fixed with a chin rest or a bite bar, and the stimulus and task are restricted so as to elicit only a specific eye movement type. In order to understand how gaze control is used in natural behavior, however, it is essential to be able to meaningfully compare oculomotor behavior observed in constrained laboratory recordings to gaze recordings “in the wild” 2,5,6,7 .

Laboratory grade systems typically have very high accuracy and very low noise levels. Sampling frequencies may range from 500 to as high as 2000 Hz. As the subject’s behavior is restricted, it is possible to tailor custom event identification methods that rely on only the eye movement type of interest being present in the data (and would produce spurious results with data from free eye movement behavior). On the other hand, mobile measuring equipment has much lower accuracy and relatively high levels of noise, with sampling frequency typically between 30 and 120 Hz. The subject’s behavior is complex, calling for robust event identification that works when all eye movement types are simultaneously present. Unfortunately, these different requirements have led, and increasingly threaten to lead, the methodologies and concepts of “laboratory” and “naturalistic” research into diverging directions. For wider generalizability of results, it would be desirable to analyze eye movement events in a similar way across task settings, by using event detection methods that do not rely on restrictions or assumptions which are not valid for most natural behavior.

Here, we introduce Naive Segmented Linear Regression (NSLR), a new method for eye-movement signal denoising and segmentation, and a related event classification method based on Hidden Markov Models (NSLR-HMM). The approach is novel in that it differs in concept from the traditional workflow of pre-filtering, event detection and segmentation. Instead, it integrates denoising into segmentation which is now the first – rather than the last – step in the analysis, and then performs classification on the denoised segments (rather than sample-to-sample). The method is general in two ways: Firstly, it performs a four-way identification of fixations, saccades, smooth pursuits and post-saccadic oscillations, which allows for experiments with complex gaze behavior. Secondly, it can be directly applied to noisy data to recover robust gaze position and velocity estimates, which means it can be used on both high-quality lab data and more challenging mobile data on natural gaze behavior. The method also automatically estimates the signal’s noise level and determines gaze feature parameters from human classification examples in a data-driven manner, requiring minimal manual parameter setting.

We believe this is an important development direction for eye movement signal analysis as this can help counteract the historical tendency in the field of eye tracking to develop operational definitions of eye movement “types” that are based on very specific and restrictive oculomotor tasks and event identification methods tailor-made for them (and then “reify” the types as separate phenomena). In contrast, our method has a number of desirable features that compare favorably with the state of the art and will in part help harmonize the traditional oculomotor and more naturalistic gaze behavior research traditions:

The NSLR method is based on a few simple and intuitively transparent basic concepts.

It requires no signal preprocessing (e.g. filtering, as denoising is inherent in the segmentation step) and no user-defined filtering parameters.

Segmentation is conceptually parsimonious and uses only a few parameters (that can be estimated from the data itself).

No “ground truth” training data from human annotators is necessary for segmentation. (Human coding data is needed for classification, which is treated as a separate subproblem).

The HMM classifier can identify four types of eye movement (saccade, PSO, fixation, pursuit).

This classification uses global signal information. (It is not based on sample-wise application of simple criteria such as duration or velocity thresholds).

Because of its wide range of oculomotor event identification and its powerful denoising performance, it can be used both for low-noise laboratory data in tasks that elicit only one or two types of oculomotor events and for high-noise field data collected during complex behavior. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) perspectives on eye movement behavior.

Full C++ and Python implementation of the method is available under an open source license at https://gitlab.com/nslr/.
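
To make the “segmentation doubles as denoising” idea concrete, the following toy Python sketch fits a naive top-down piecewise-linear model to a synthetic one-dimensional gaze trace. This is not the authors' NSLR algorithm (the real implementation is at the GitLab link above); the noise level, split rule, and synthetic trace are illustrative assumptions only.

```python
"""Toy illustration only: a naive top-down piecewise-linear fit of a 1-D
gaze trace. Each segment is replaced by its least-squares line, so the fit
simultaneously denoises the signal and proposes candidate event borders."""
import numpy as np

def fit_line(t, y):
    """Least-squares line fit; returns predictions and sum of squared residuals."""
    A = np.vstack([t, np.ones_like(t)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    return pred, float(np.sum((y - pred) ** 2))

def split_segments(t, y, noise_sd, min_len=5):
    """Recursively split while a single line fits worse than the assumed noise."""
    pred, sse = fit_line(t, y)
    if len(t) < 2 * min_len or sse <= 1.2 * (noise_sd ** 2) * len(t):
        return [(t, pred)]
    k = int(np.argmax(np.abs(y - pred)))           # split at the worst-fit sample
    k = min(max(k, min_len), len(t) - min_len)
    return (split_segments(t[:k], y[:k], noise_sd, min_len)
            + split_segments(t[k:], y[k:], noise_sd, min_len))

# Synthetic trace: fixation, saccade-like jump, pursuit-like ramp, plus noise.
t = np.arange(300) / 100.0
truth = np.concatenate([np.full(100, 2.0),
                        np.linspace(2.0, 10.0, 20),
                        10.0 + 3.0 * np.linspace(0.0, 1.8, 180)])
y = truth + np.random.default_rng(1).normal(0, 0.3, size=t.size)

segments = split_segments(t, y, noise_sd=0.3)
print(f"{len(segments)} segments; boundaries near t = "
      f"{[float(seg[0][0]) for seg in segments[1:]]}")
```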


Consumer neuroscience: Assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking

Application of neuroscience methods to analyze and understand human behavior related to markets and marketing exchange has recently gained research attention. The basic aim is to guide the design and presentation of products so that they are as compatible as possible with consumer preferences. This paper investigates physiological decision processes while participants undertook a choice task designed to elicit preferences for a product. The task required participants to choose their preferred crackers, described by shape (square, triangle, round), flavor (wheat, dark rye, plain) and topping (salt, poppy, no topping). The two main research objectives were (1) to observe and evaluate the cortical activity of the different brain regions and the interdependencies among the electroencephalogram (EEG) signals from these regions, and (2) unlike most research in this area, which has focused mainly on liking/disliking certain products, to provide a way to quantify the importance of the different cracker features that contribute to the product design, based on mutual information. We used the commercial Emotiv EPOC wireless EEG headset with 14 channels to collect EEG signals from participants. We also used a Tobii-Studio eye tracker system to relate the EEG data to the specific choice options (crackers). Subjects were shown 57 choice sets; each choice set described three choice options (crackers). The patterns of cortical activity were obtained in the five principal frequency bands: Delta (0–4 Hz), Theta (3–7 Hz), Alpha (8–12 Hz), Beta (13–30 Hz), and Gamma (30–40 Hz). There was a clear phase synchronization between the left and right frontal and occipital regions, indicating interhemispheric communication during the choice task for the 18 participants. Results also indicated that there was a clear and significant change (p < 0.01) in the EEG power spectral activities, taking place mainly in the frontal (delta, alpha and beta across F3, F4, FC5 and FC6), temporal (alpha, beta, gamma across T7), and occipital (theta, alpha, and beta across O1) regions when participants indicated their preferences for their preferred crackers. Additionally, our mutual information analysis indicated that the various flavors and toppings of the crackers were more important factors affecting the buying decision than the shapes of the crackers.

Highlights

► This paper investigates physiological decision processes during decision making. ► The task required participants to choose their preferred crackers (shape, flavour and topping). ► We observe and evaluate the cortical activity of the different brain regions. ► We quantify the importance of different cracker features using mutual information. ► A clear phase synchronization observed between left and right frontal and occipital regions.
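
For readers who want to reproduce the band-power part of such an analysis, the following is a minimal, hypothetical Python sketch (not the authors' pipeline) that estimates relative power in the five bands named above using Welch's method; the sampling rate, recording length, and the 0.5 Hz lower edge of the delta band are assumptions.

```python
"""Illustrative sketch only: relative band power for one EEG channel
via Welch's method. The signal here is random noise standing in for data."""
import numpy as np
from scipy.signal import welch

bands = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (30, 40)}   # edges follow the abstract; delta floor assumed 0.5 Hz

fs = 128                                        # assumed sampling rate
x = np.random.default_rng(0).standard_normal(60 * fs)   # stand-in for 60 s of one channel

freqs, psd = welch(x, fs=fs, nperseg=4 * fs)    # 4 s windows -> 0.25 Hz resolution
in_range = (freqs >= 0.5) & (freqs <= 40)
total = psd[in_range].sum()

for name, (lo, hi) in bands.items():
    band = psd[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name:5s}: {100 * band / total:5.1f} % of 0.5-40 Hz power")
```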


EEG assessment of brain activity: Spatial aspects, segmentation and imaging

High temporal resolution and sensitivity to index different functional brain states make the EEG a powerful tool in psychophysiology. Its full potential can now be utilized, since recording technology and the computational power for handling the large data masses have become affordable. However, basic traditional strategies in EEG need reviewing.

Conventional, spontaneous or evoked EEG traces which are used for various complex analyses give ambiguous information on EEG power (amplitude) and phase for a given point on the scalp. Principally, analysis should first be done over space, then over time, to avoid ambiguities or pre-selections. First or second spatial derivative computations can provide “reference-free” data for analyses over time. We propose to use direct, spatial approaches for the analysis of the scalp EEG field distributions when simultaneous recording in several EEG channels can be examined.

The ambiguity of the conventional EEG waveshapes results in different, equally “correct” scalp maps of EEG power of the same multichannel data for different reference electrodes. An exception is provided by scalp maps of EEG power computed against the common average reference, as they are related to the reference-free spatial distribution (maps) of the maximal and minimal (extreme) field values over time, and thus are directly interpretable in terms of the net orientation of the generator process.

The proposed, reference-free EEG segmentation into epochs of periodically stationary spatial distributions of the mapped scalp EEG fields uses the locations of the maximal and minimal (extreme) field values at each moment in time as classifiers, and thus avoids privileging two arbitrarily chosen recording points in the field.
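
The two spatial ideas in this section, the common average reference and the use of momentary field extremes, can be sketched in a few lines of Python. The following is an illustrative sketch only (a fake 19-channel recording, not the authors' software): it re-references the data to the average reference and then tracks, sample by sample, which electrodes carry the maximal and minimal field values; changes of that pair mark candidate segment borders.

```python
"""Minimal sketch: common average reference and per-sample extreme electrodes."""
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 19, 1000            # assumed montage size and epoch length
eeg = rng.standard_normal((n_channels, n_samples))

# (1) Common average reference: subtract the instantaneous mean over channels.
avg_ref = eeg - eeg.mean(axis=0, keepdims=True)

# (2) Extreme-value electrodes at each time point.
max_electrode = avg_ref.argmax(axis=0)      # index of the field maximum per sample
min_electrode = avg_ref.argmin(axis=0)      # index of the field minimum per sample

# Candidate segment borders: samples where the extreme-electrode pair changes.
pair = np.stack([max_electrode, min_electrode])
borders = np.flatnonzero(np.any(pair[:, 1:] != pair[:, :-1], axis=0)) + 1
print(f"{borders.size} changes of the extreme-electrode pair in {n_samples} samples")
```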


Segmentation of design protocol using EEG

Design protocol data analysis methods form a well-known set of techniques used by design researchers to further understand the conceptual design process. Verbal protocols are a popular technique used to analyze design activities. However, verbal protocols are known to have some limitations. A recurring problem in design protocol analysis is to segment and code protocol data into logical and semantic units. This is usually a manual step, and little work has been done on fully automated segmentation techniques. Physiological signals such as electroencephalograms (EEG) can provide assistance in solving this problem. Such problems are typical inverse problems that occur in this line of research: a thought process needs to be reconstructed from its output, an EEG signal. We propose an EEG-based method for design protocol coding and segmentation. We provide experimental validation of our methods and compare manual segmentation by domain experts to algorithmic segmentation using EEG. The best performing automated segmentation method (with manual segmentation as the baseline) is found to have an average deviation from manual segmentations of 2 s. Furthermore, EEG-based segmentation can identify cognitive structures that simple observation of design protocols cannot. EEG-based segmentation does not replace complex domain expert segmentation but rather complements it. Techniques such as verbal protocols are known to fail in some circumstances. EEG-based segmentation has the added feature that it is fully automated and can be readily integrated in engineering systems and subsystems. It is effectively a window into the mind.


Abstract

In numerous signal processing applications, non-stationary signals should be segmented into piece-wise stationary epochs before being further analyzed. In this article, an enhanced segmentation method based on fractal dimension (FD) and evolutionary algorithms (EAs) for non-stationary signals, such as the electroencephalogram (EEG), magnetoencephalogram (MEG) and electromyogram (EMG), is proposed. In the proposed approach, the discrete wavelet transform (DWT) decomposes the signal into orthonormal time series with different frequency bands. Then, the FD of the decomposed signal is calculated within two sliding windows. The accuracy of the segmentation method depends on the parameters of this FD computation. In this study, four EAs are used to increase the accuracy of the segmentation method and to choose acceptable parameters for the FD computation. These include particle swarm optimization (PSO), new PSO (NPSO), PSO with mutation, and bee colony optimization (BCO). The suggested methods are compared with other popular approaches (the improved nonlinear energy operator (INLEO), the wavelet generalized likelihood ratio (WGLR), and Varri’s method) using synthetic signals, real EEG data, and the difference in the received photons of galactic objects. The results demonstrate the absolute superiority of the suggested approach.
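
A much-simplified sketch of the windowed-FD idea follows: the Katz fractal dimension is computed in two adjacent sliding windows, and a large difference between them is flagged as a candidate segment boundary. The DWT decomposition and the EA-based parameter tuning described in the abstract are omitted, and the window length and threshold are arbitrary assumptions.

```python
"""Simplified sketch only: sliding-window Katz fractal dimension for
flagging candidate boundaries in a synthetic non-stationary signal."""
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D window."""
    dists = np.abs(np.diff(x))
    L = dists.sum()                      # total path length
    d = np.abs(x - x[0]).max()           # maximal distance from the first sample
    n = len(dists)
    if L == 0 or d == 0:
        return 1.0
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def fd_boundaries(signal, win=128, threshold=0.1):
    """Samples where the FD of two adjacent windows differs by more than threshold."""
    hits = []
    for start in range(0, len(signal) - 2 * win, win // 4):   # 75 % overlap step
        left = katz_fd(signal[start:start + win])
        right = katz_fd(signal[start + win:start + 2 * win])
        if abs(left - right) > threshold:
            hits.append(start + win)
    return hits

# Synthetic non-stationary signal: a slow, clean part followed by a faster, noisier part.
rng = np.random.default_rng(0)
t = np.arange(2048) / 256.0
x = (np.where(t < 4, np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 8 * t))
     + rng.normal(0, np.where(t < 4, 0.05, 0.5)))
print("candidate boundaries near samples:", fd_boundaries(x))
```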


  1. Fieldtrip is an open source Matlab toolbox for EEG and MEG analysis (Oostenveld et al., 2011).
  2. Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development. Please contact Nim Tottenham at [email protected] for information concerning the stimulus set.

Ahissar, M., and Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends Cogn. Sci. Regul. Ed. 8, 457�. doi: 10.1016/j.tics.2004.08.011

Ahissar, M., Nahum, M., Nelken, I., and Hochstein, S. (2009). Reverse hierarchies and sensory learning. Philos. Trans. R Soc. Lond. B Biol. Sci. 364, 285�. doi: 10.1098/rstb.2008.0253

Allison, T., Puce, A., Spencer, D. D., and McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb. Cortex 9, 415�. doi: 10.1093/cercor/9.5.415

Appelbaum, L. G., Wade, A. R., Vildavski, V. Y., Pettet, M. W., and Norcia, A. M. (2006). Cue-invariant networks for figure and background processing in human visual cortex. J. Neurosci. 26, 11695�. doi: 10.1523/JNEUROSCI.2741-06.2006

Bach, M., and Meigen, T. (1992). Electrophysiological correlates of texture segregation in the human visual evoked potential. Vision Res. 32, 417�. doi: 10.1016/0042-6989(92)90233-9

Bach, M., and Meigen, T. (1998). Electrophysiological correlates of human texture segregation, an overview. Doc. Ophthalmol. 95, 335�. doi: 10.1023/A:1001864625557

Baker, C. L., and Mareschal, I. (2001). Processing of second-order stimuli in the visual cortex. Prog. Brain Res. 134, 171�. doi: 10.1016/S0079-6123(01)34013-X

Caputo, G., and Casco, C. (1999). A visual evoked potential correlate of global figure-ground segmentation. Vision Res. 39, 1597�. doi: 10.1016/S0042-6989(98)00270-3

Censor, N., Bonneh, Y., Arieli, A., and Sagi, D. (2009). Early-vision brain response which predict human visual segmentation and learning. J. Vis. 9, 12.1�.9. doi: 10.1167/9.4.12

Eimer, M. (2000a). Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clin. Neurophysiol. 111, 694�. doi: 10.1016/S1388-2457(99)00285-0

Eimer, M. (2000b). The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport 11, 2319�. doi: 10.1097/00001756-200007140-00050

Epstein, R., and Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature 392, 598�. doi: 10.1038/33402

Fahrenfort, J. J., Scholte, H. S., and Lamme, V. A. F. (2007). Masking disrupts reentrant processing in human visual cortex. J. Cogn. Neurosci. 19, 1488�. doi: 10.1162/jocn.2007.19.9.1488

Fahrenfort, J. J., Snijders, T. M., Heinen, K., van Gaal, S., Scholte, H. S., and Lamme, V. A. F. (2012). Neuronal integration in visual cortex elevates face category tuning to conscious face perception. Proc. Natl. Acad. Sci. U.S.A. 109, 21504�. doi: 10.1073/pnas.1207414110

Gauthier, I., Tarr, M., Anderson, A. W., Skudlarski, P., and Gore, J. C. (1999). Activation of the middle fusiform “face area” increases with expertise in recognizing novel objects. Nat. Neurosci. 2, 568�. doi: 10.1038/9224

Gervan, P., Gombos, F., and Kovacs, I. (2012). Perceptual learning in Williams syndrome: looking beyond averages. PLoS ONE 7:e40282. doi: 10.1371/journal.pone.0040282

Golarai, G., Grill-Spector, K., and Reiss, A. L. (2006). Autism and the development of face processing. Clin. Neurosci. Res. 6, 145�. doi: 10.1016/j.cnr.2006.08.001

Gratton, G., Coles, M. G. H., and Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalogr. Clin. Neurophysiol. 55, 468�. doi: 10.1016/0013-4694(83)90135-9

Grill-Spector, K., and Kanwisher, N. (2005). Visual recognition: as soon as you know it is there, you know what it is. Psychol. Sci. 16, 152�. doi: 10.1111/j.0956-7976.2005.00796.x

Grill-Spector, K., Kushnir, T., Hendler, T., and Malach, R. (2000). The dynamics of object-selective activation correlate with recognition performance in humans. Nat. Neurosci. 3, 837�. doi: 10.1038/77754

Hochstein, S., and Ahissar, M. (2002). View from the top: hierarchies and reverse hierarchies in the visual system. Neuron 36, 791�. doi: 10.1016/S0896-6273(02)01091-7

Itier, R. J., and Taylor, M. J. (2004). Source analysis of the N170 to faces and objects. Neuroreport 15, 1261�. doi: 10.1097/01.wnr.0000127827.73576.d8

Kanwisher, N., McDermott, J., and Chun, M. M. (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17, 4302�.

Koivisto, M., Railo, H., Revonsuo, A., Vanni, S., and Salminen-Vaparanta, N. (2011). Recurrent processing in V1/V2 contributes to categorization of natural scenes. J. Neurosci. 31, 2488�. doi: 10.1523/JNEUROSCI.3074-10.2011

Kovács, G., Vogels, R., and Orban, G. A. (1995). Selectivity of macaque inferior temporal neurons for partially occluded shapes. J. Neurosci. 15, 1984�.

Lamme, V. A. (1995). The neurophysiology of figure-ground segregation in primary visual cortex. J. Neurosci. 15, 1605�.

Lamme, V. A., and Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci. 23, 571�. doi: 10.1016/S0166-2236(00)01657-X

Lamme, V. A., Supèr, H., and Spekreijse, H. (1998). Feedforward, horizontal, and feedback processing in the visual cortex. Curr. Opin. Neurobiol. 8, 529�. doi: 10.1016/S0959-4388(98)80042-1

Lamme, V. A., Van Dijk, B. W., and Spekreijse, H. (1992). Texture segregation is processed by primary visual cortex in man and monkey. Evidence from VEP experiments. Vision Res. 32, 797�. doi: 10.1016/0042-6989(92)90022-B

Liu, H., Agam, Y., Madsen, J. R., and Kreiman, G. (2009). Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex. Neuron 62, 281�. doi: 10.1016/j.neuron.2009.02.025

Liu, J., Harris, A., and Kanwisher, N. (2002). Stages of processing in face perception: an MEG study. Nat. Neurosci. 5, 910�. doi: 10.1038/nn909

Mack, M. L., Gauthier, I., Sadr, J., and Palmeri, T. J. (2008). Object detection and basic-level categorization: sometimes you know it is there before you know what it is. Psychon. Bull. Rev. 15, 28�. doi: 10.3758/PBR.15.1.28

Malach, R., Reppas, J. B., Benson, R. R., Kwong, K. K., Jiang, H., Kennedy, W. A., et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc. Natl. Acad. Sci. U.S.A. 92, 8135�. doi: 10.1073/pnas.92.18.8135

Maris, E., and Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177�. doi: 10.1016/j.jneumeth.2007.03.024

McKone, E., and Robbins, R. (2011). “Are faces special?,” in Oxford Handbook of Face Perception, eds A. Calder, G. Rhodes, M. Johnson, and J. Haxby (New York: Oxford University Press Inc.).

Meeren, H. K. M., Hadjikhani, N., Ahlfors, S. P., Hämäläinen, M. S., and De Gelder, B. (2008). Early category-specific cortical activation revealed by visual stimulus inversion. PLoS ONE 3:e3503. doi: 10.1371/journal.pone.0003503

Mitsudo, T., Kamio, Y., Goto, Y., Nakashima, T., and Tobimatsu, S. (2011). Neural responses in the occipital cortex to unrecognizable faces. Clin. Neurophysiol. 122, 708�. doi: 10.1016/j.clinph.2010.10.004

Moutoussis, K., and Zeki, S. (2002). The relationship between cortical activation and perception investigated with invisible stimuli. Proc. Natl. Acad. Sci. U.S.A. 99, 9527�. doi: 10.1073/pnas.142305699

Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J.-M. (2011). FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 1–9. doi: 10.1155/2011/156869

Peterson, M. (1993). Shape recognition inputs to figure-ground organization in three-dimensional grounds. Cognit. Psychol. 25, 383�. doi: 10.1006/cogp.1993.1010

Peterson, M. (1994). Object recognition contributions to figure-ground organization: operations on outlines and subjective contours. Percept. Psychophys. 56, 551�. doi: 10.3758/BF03206951

Riesenhuber, M., and Poggio, T. (2000). Models of object recognition. Nat. Neurosci. 3(Suppl.), 1199�. doi: 10.1038/81479

Rossion, B., and Jacques, C. (2008). Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. Neuroimage 39, 1959�. doi: 10.1016/j.neuroimage.2007.10.011

Rossion, B., and Jacques, C. (2012). “The N170: understanding the time course of face perception in the human brain,” in The Oxford Handbook of Event-Related Potential Components, eds S. J. Luck and E. S. Kappenman (Oxford: Oxford University Press). 115�.

Rossion, B., Joyce, C. A., Cottrell, G. W., and Tarr, M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage 20, 1609�. doi: 10.1016/j.neuroimage.2003.07.010

Rousselet, G. A., Gaspar, C. M., Pernet, C. R., Husk, J. S., Bennett, P. J., and Sekuler, A. B. (2010). Healthy aging delays scalp EEG sensitivity to noise in a face discrimination task. Front. Psychol. 1:19. doi: 10.3389/fpsyg.2010.00019

Rubin, E. (1915/1958). “Figure and ground,” in Readings in Perception, eds D. C. Beardslee and M. Wertheimer (Princeton, NJ: Van Nostrand). 194�. (Original work published 1915).

Sagiv, N., and Bentin, S. (2001). Structural encoding of human and schematic faces: holistic and part-based processes. J. Cogn. Neurosci. 13, 937�. doi: 10.1162/089892901753165854

Scholte, H. S., Jolij, J., Fahrenfort, J. J., and Lamme, V. A. F. (2008). Feedforward and recurrent processing in scene segmentation: electroencephalography and functional magnetic resonance imaging. J. Cogn. Neurosci. 20, 2097�. doi: 10.1162/jocn.2008.20142

Serre, T., Oliva, A., and Poggio, T. (2007). A feedforward architecture accounts for rapid categorization. Proc. Natl. Acad. Sci. U.S.A. 104, 6424�. doi: 10.1073/pnas.0700622104

Snijders, T. M., Kooijman, V., Cutier, A., and Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Res. 1178, 106�. doi: 10.1016/j.brainres.2007.07.080

Thierry, G., Martin, C. D., Downing, P., and Pegna, A. J. (2007). Controlling for interstimulus perceptual variance abolishes N170 face selectivity. Nat. Neurosci. 10, 505�. doi: 10.1038/nn1864

Thorpe, S., Fize, D., and Marlot, C. (1996). Speed of processing in the human visual system. Nature 381, 520�. doi: 10.1038/381520a0

Tong, F., Nakayama, K., Vaughan, J. T., and Kanwisher, N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21, 753�. doi: 10.1016/S0896-6273(00)80592-9

Vecera, S. P., and O’Reilly, R. C. (1998). Figure-ground organization and object recognition processes: an interactive account. J. Exp. Psychol. Hum. Percept. Perform. 24, 441�. doi: 10.1037/0096-1523.24.2.441

Wolfe, J. M. (1983). Influence of spatial frequency, luminance, and duration on binocular rivalry and abnormal fusion of briefly presented dichoptic stimuli. Perception 12, 447�. doi: 10.1068/p120447

World Medical Association. (2013). World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA 310, 2191�. doi:10.1001/jama.2013.281053

Wyatte, D., Curran, T., and O’Reilly, R. (2012). The limits of feedforward vision: recurrent processing promotes robust object recognition when objects are degraded. J. Cogn. Neurosci. 24, 2248�. doi: 10.1162/jocn_a_00282

Yantis, S., and Serences, J. T. (2003). Cortical mechanisms of space-based and object-based attentional control. Curr. Opin. Neurobiol. 13, 187�. doi: 10.1016/S0959-4388(03)00033-3

Zipser, K., Lamme, V. A., and Schiller, P. H. (1996). Contextual modulation in primary visual cortex. J. Neurosci. 16, 7376�.

Keywords: EEG, face processing, visual system, low-level vision, high-level vision, categorization

Citation: Van Den Boomen C, Fahrenfort JJ, Snijders TM and Kemner C (2015) Segmentation precedes face categorization under suboptimal conditions. Front. Psychol. 6:667. doi: 10.3389/fpsyg.2015.00667

Received: 24 February 2015 Accepted: 07 May 2015
Published online: 26 May 2015.

Carl M. Gaspar, Hangzhou Normal University, China
Assaf Harel, Wright State University, USA

Copyright © 2015 Van Den Boomen, Fahrenfort, Snijders and Kemner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


BRIEF RESEARCH REPORT article

Jing-Shan Huang1, Yang Li1, Bin-Qiang Chen1*, Chuang Lin2* and Bin Yao1
  • 1 School of Aerospace Engineering, Xiamen University, Xiamen, China
  • 2 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

The classification of electroencephalogram (EEG) signals is of significant importance in brain–computer interface (BCI) systems. Aiming to achieve intelligent classification of EEG types with high accuracy, a classification methodology using sparse representation (SR) and fast compression residual convolutional neural networks (FCRes-CNNs) is proposed. In the proposed methodology, EEG waveforms of classes 1 and 2 are segmented into subsignals, and 140 experimental samples were obtained for each type of EEG signal. The common spatial patterns algorithm is used to obtain the features of the EEG signal. Subsequently, a redundant dictionary with sparse representation is constructed based on these features. Finally, the samples of the EEG types were imported into the FCRes-CNN model, which has a fast down-sampling module and residual block structural units, to be identified and classified. The datasets from BCI Competition 2005 (dataset IVa) and BCI Competition 2003 (dataset III) were used to test the performance of the proposed deep learning classifier. The classification experiments show that the averaged recognition accuracy of the proposed method is 98.82%. The experimental results show that the classification method provides better classification performance compared with the sparse representation classification (SRC) method. The method can be applied successfully to BCI systems where the amount of data is large due to daily recording.
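
Of the stages described above, the common spatial patterns (CSP) feature step is standard enough to sketch. The following illustrative Python code uses random stand-in trials, not the paper's data, and omits the sparse-dictionary and FCRes-CNN stages; it computes CSP spatial filters from two classes of trials and the usual log-variance features.

```python
"""Illustrative CSP sketch: spatial filters from two-class trial covariances
and log-variance features. Trial shapes and filter counts are assumptions."""
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns (2*n_pairs, n_channels) spatial filters."""
    def mean_cov(trials):
        return np.mean([np.cov(tr) for tr in trials], axis=0)   # channel covariance
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)                                     # ascending
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])  # most discriminative ends
    return vecs[:, picks].T

def csp_features(trial, filters):
    """Log-variance of the spatially filtered trial: the classic CSP feature vector."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

# Fake two-class data: 20 trials per class, 22 channels, 2 s at 250 Hz.
rng = np.random.default_rng(0)
trials_a = rng.standard_normal((20, 22, 500))
trials_b = 1.5 * rng.standard_normal((20, 22, 500))
W = csp_filters(trials_a, trials_b)
print("feature vector for one trial:", np.round(csp_features(trials_a[0], W), 3))
```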


A new denoising method for fMRI based on weighted three-dimensional wavelet transform

This study presents a new three-dimensional discrete wavelet transform (3D-DWT)-based denoising method for functional magnetic resonance images (fMRI). The method is called the weighted three-dimensional discrete wavelet transform (w-3D-DWT), and it is based on the principle of weighting the volume subbands which are obtained by 3D-DWT. Briefly, classical DWT denoising consists of wavelet decomposition, thresholding, and image reconstruction steps. In the thresholding algorithm, the thresholding value cannot be chosen separately for each image; a single thresholding value is chosen and used for all images. The algorithm proposed in this study can be considered a data-driven denoising model for fMRI. It consists of three-dimensional wavelet decomposition, subband weighting, and image reconstruction. The purposes of the subband weighting algorithm are to increase the effect of the subband which represents the image best, to decrease the effect of the subband which represents the image worst, and thus to reduce the noise in the image adaptively. fMRI is one of the popular methods used to understand brain functions, and the images are often corrupted by noise from various sources. The traditional denoising method used in fMRI is smoothing the images with a Gaussian kernel. This study suggests an adaptive approach for fMRI filtering that differs from Gaussian smoothing and 3D-DWT thresholding. In this study, w-3D-DWT denoising results were evaluated with mean-square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) metrics, and the results were compared with the Gaussian smoothing and 3D-DWT thresholding methods. According to this comparison, w-3D-DWT gave low-MSE and high-PSNR results for fMRI data.
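
The subband-weighting idea can be sketched with PyWavelets. The following is a conceptual illustration only, with a random stand-in volume, a 'db2' wavelet, and hand-picked weights rather than the paper's data-driven weighting.

```python
"""Conceptual sketch of 3D-DWT subband weighting (assumptions: PyWavelets,
a 'db2' wavelet, a random volume, and arbitrary weights)."""
import numpy as np
import pywt

volume = np.random.default_rng(0).standard_normal((64, 64, 32))  # stand-in fMRI volume

# Single-level 3-D DWT: one approximation subband ('aaa') and seven detail subbands.
coeffs = pywt.dwtn(volume, wavelet="db2")

# Weight each subband: keep the approximation, attenuate the highest-frequency
# detail subband ('ddd'), where noise typically dominates.
weights = {key: 1.0 for key in coeffs}
weights["ddd"] = 0.2                      # arbitrary illustrative weight
weighted = {key: w * coeffs[key] for key, w in weights.items()}

denoised = pywt.idwtn(weighted, wavelet="db2")[:64, :64, :32]   # crop any boundary padding
print("volume shape after reconstruction:", denoised.shape)
```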

This is a preview of subscription content, access via your institution.


Abstract

In numerous signal processing applications, non-stationary signals should be segmented to piece-wise stationary epochs before being further analyzed. In this article, an enhanced segmentation method based on fractal dimension (FD) and evolutionary algorithms (EAs) for non-stationary signals, such as electroencephalogram (EEG), magnetoencephalogram (MEG) and electromyogram (EMG), is proposed. In the proposed approach, discrete wavelet transform (DWT) decomposes the signal into orthonormal time series with different frequency bands. Then, the FD of the decomposed signal is calculated within two sliding windows. The accuracy of the segmentation method depends on these parameters of FD. In this study, four EAs are used to increase the accuracy of segmentation method and choose acceptable parameters of the FD. These include particle swarm optimization (PSO), new PSO (NPSO), PSO with mutation, and bee colony optimization (BCO). The suggested methods are compared with other most popular approaches (improved nonlinear energy operator (INLEO), wavelet generalized likelihood ratio (WGLR), and Varri’s method) using synthetic signals, real EEG data, and the difference in the received photons of galactic objects. The results demonstrate the absolute superiority of the suggested approach.


Segmentation of design protocol using EEG

Design protocol data analysis methods form a well-known set of techniques used by design researchers to further understand the conceptual design process. Verbal protocols are a popular technique used to analyze design activities. However, verbal protocols are known to have some limitations. A recurring problem in design protocol analysis is to segment and code protocol data into logical and semantic units. This is usually a manual step and little work has been done on fully automated segmentation techniques. Physiological signals such as electroencephalograms (EEG) can provide assistance in solving this problem. Such problems are typical inverse problems that occur in the line of research. A thought process needs to be reconstructed from its output, an EEG signal. We propose an EEG-based method for design protocol coding and segmentation. We provide experimental validation of our methods and compare manual segmentation by domain experts to algorithmic segmentation using EEG. The best performing automated segmentation method (when manual segmentation is the baseline) is found to have an average deviation from manual segmentations of 2 s. Furthermore, EEG-based segmentation can identify cognitive structures that simple observation of design protocols cannot. EEG-based segmentation does not replace complex domain expert segmentation but rather complements it. Techniques such as verbal protocols are known to fail in some circumstances. EEG-based segmentation has the added feature that it is fully automated and can be readily integrated in engineering systems and subsystems. It is effectively a window into the mind.


  1. ^ Fieldtrip is an open source Matlab toolbox for EEG and MEG analysis (Oostenveld et al., 2011).
  2. ^ Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development. Please contact Nim Tottenham at [email protected] for information concerning the stimulus set.

Ahissar, M., and Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends Cogn. Sci. Regul. Ed. 8, 457�. doi: 10.1016/j.tics.2004.08.011

Ahissar, M., Nahum, M., Nelken, I., and Hochstein, S. (2009). Reverse hierarchies and sensory learning. Philos. Trans. R Soc. Lond. B Biol. Sci. 364, 285�. doi: 10.1098/rstb.2008.0253

Allison, T., Puce, A., Spencer, D. D., and McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb. Cortex 9, 415�. doi: 10.1093/cercor/9.5.415

Appelbaum, L. G., Wade, A. R., Vildavski, V. Y., Pettet, M. W., and Norcia, A. M. (2006). Cue-invariant networks for figure and background processing in human visual cortex. J. Neurosci. 26, 11695�. doi: 10.1523/JNEUROSCI.2741-06.2006

Bach, M., and Meigen, T. (1992). Electrophysiological correlates of texture segregation in the human visual evoked potential. Vision Res. 32, 417�. doi: 10.1016/0042-6989(92)90233-9

Bach, M., and Meigen, T. (1998). Electrophysiological correlates of human texture segregation, an overview. Doc. Ophthalmol. 95, 335�. doi: 10.1023/A:1001864625557

Baker, C. L., and Mareschal, I. (2001). Processing of second-order stimuli in the visual cortex. Prog. Brain Res. 134, 171�. doi: 10.1016/S0079-6123(01)34013-X

Caputo, G., and Casco, C. (1999). A visual evoked potential correlate of global figure-ground segmentation. Vision Res. 39, 1597�. doi: 10.1016/S0042-6989(98)00270-3

Censor, N., Bonneh, Y., Arieli, A., and Sagi, D. (2009). Early-vision brain response which predict human visual segmentation and learning. J. Vis. 9, 12.1�.9. doi: 10.1167/9.4.12

Eimer, M. (2000a). Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clin. Neurophysiol. 111, 694�. doi: 10.1016/S1388-2457(99)00285-0

Eimer, M. (2000b). The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport 11, 2319�. doi: 10.1097/00001756-200007140-00050

Epstein, R., and Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature 392, 598�. doi: 10.1038/33402

Fahrenfort, J. J., Scholte, H. S., and Lamme, V. A. F. (2007). Masking disrupts reentrant processing in human visual cortex. J. Cogn. Neurosci. 19, 1488�. doi: 10.1162/jocn.2007.19.9.1488

Fahrenfort, J. J., Snijders, T. M., Heinen, K., van Gaal, S., Scholte, H. S., and Lamme, V. A. F. (2012). Neuronal integration in visual cortex elevates face category tuning to conscious face perception. Proc. Natl. Acad. Sci. U.S.A. 109, 21504�. doi: 10.1073/pnas.1207414110

Gauthier, I., Tarr, M., Anderson, A. W., Skudlarski, P., and Gore, J. C. (1999). Activation of the middle fusiform � area” increases with expertise in recognizing novel objects. Nat. Neurosci. 2, 568�. doi: 10.1038/9224

Gervan, P., Gombos, F., and Kovacs, I. (2012). Perceptual learning in Williams syndrome: looking beyond averages. PLoS ONE 7:e40282. doi: 10.1371/journal.pone.0040282

Golarai, G., Grill-Spector, K., and Reiss, A. L. (2006). Autism and the development of face processing. Clin. Neurosci. Res. 6, 145�. doi: 10.1016/j.cnr.2006.08.001

Gratton, G., Coles, M. G. H., and Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalogr. Clin. Neurophysiol. 55, 468�. doi: 10.1016/0013-4694(83)90135-9

Grill-Spector, K., and Kanwisher, N. (2005). Visual recognition: as soon as you know it is there, you know what it is. Psychol. Sci. 16, 152�. doi: 10.1111/j.0956-7976.2005.00796.x

Grill-Spector, K., Kushnir, T., Hendler, T., and Malach, R. (2000). The dynamics of object-selective activation correlate with recognition performance in humans. Nat. Neurosci. 3, 837�. doi: 10.1038/77754

Hochstein, S., and Ahissar, M. (2002). View from the top: hierarchies and reverse hierarchies in the visual system. Neuron 36, 791�. doi: 10.1016/S0896-6273(02)01091-7

Itier, R. J., and Taylor, M. J. (2004). Source analysis of the N170 to faces and objects. Neuroreport 15, 1261�. doi: 10.1097/01.wnr.0000127827.73576.d8

Kanwisher, N., McDermott, J., and Chun, M. M. (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17, 4302�.

Koivisto, M., Railo, H., Revonsuo, A., Vanni, S., and Salminen-Vaparanta, N. (2011). Recurrent processing in V1/V2 contributes to categorization of natural scenes. J. Neurosci. 31, 2488�. doi: 10.1523/JNEUROSCI.3074-10.2011

Kovผs, G., Vogels, R., and Orban, G. A. (1995). Selectivity of macaque inferior temporal neurons for partially occluded shapes. J. Neurosci. 15, 1984�.

Lamme, V. A. (1995). The neurophysiology of figure-ground segregation in primary visual cortex. J. Neurosci. 15, 1605�.

Lamme, V. A., and Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci. 23, 571�. doi: 10.1016/S0166-2236(00)01657-X

Lamme, V. A., Supèr, H., and Spekreijse, H. (1998). Feedforward, horizontal, and feedback processing in the visual cortex. Curr. Opin. Neurobiol. 8, 529�. doi: 10.1016/S0959-4388(98)80042-1

Lamme, V. A., Van Dijk, B. W., and Spekreijse, H. (1992). Texture segregation is processed by primary visual cortex in man and monkey. Evidence from VEP experiments. Vision Res. 32, 797�. doi: 10.1016/0042-6989(92)90022-B

Liu, H., Agam, Y., Madsen, J. R., and Kreiman, G. (2009). Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex. Neuron 62, 281�. doi: 10.1016/j.neuron.2009.02.025

Liu, J., Harris, A., and Kanwisher, N. (2002). Stages of processing in face perception: an MEG study. Nat. Neurosci. 5, 910�. doi: 10.1038/nn909

Mack, M. L., Gauthier, I., Sadr, J., and Palmeri, T. J. (2008). Object detection and basic-level categorization: sometimes you know it is there before you know what it is. Psychon. Bull. Rev. 15, 28�. doi: 10.3758/PBR.15.1.28

Malach, R., Reppas, J. B., Benson, R. R., Kwong, K. K., Jiang, H., Kennedy, W. A., et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc. Natl. Acad. Sci. U.S.A. 92, 8135�. doi: 10.1073/pnas.92.18.8135

Maris, E., and Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177�. doi: 10.1016/j.jneumeth.2007.03.024

McKone, E., and Robbins, R. (2011). 𠇊re faces special?,” in Oxfort Handbook of Face Perception, eds A. Calder, G. Rhodes, M. Jonshon, and J. Haxby (New York: Oxford University Press Inc.).

Meeren, H. K. M., Hadjikhani, N., Ahlfors, S. P., Hämäläinen, M. S., and De Gelder, B. (2008). Early category-specific cortical activation revealed by visual stimulus inversion. PLoS ONE 3:e3503. doi: 10.1371/journal.pone.0003503

Mitsudo, T., Kamio, Y., Goto, Y., Nakashima, T., and Tobimatsu, S. (2011). Neural responses in the occipital cortex to unrecognizable faces. Clin. Neurophysiol. 122, 708�. doi: 10.1016/j.clinph.2010.10.004

Moutoussis, K., and Zeki, S. (2002). The relationship between cortical activation and perception investigated with invisible stimuli. Proc. Natl. Acad. Sci. U.S.A. 99, 9527�. doi: 10.1073/pnas.142305699

Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J.-M. (2011). FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 1𠄹. doi: 10.1155/2011/156869

Peterson, M. (1993). Shape recognition inputs to figure-ground organization in three-dimensional grounds. Cognit. Psychol. 25, 383�. doi: 10.1006/cogp.1993.1010

Peterson, M. (1994). Object recognition contributions to figure-ground organization: operations on outlines and subjective contours. Percept. Psychophys. 56, 551�. doi: 10.3758/BF03206951

Riesenhuber, M., and Poggio, T. (2000). Models of object recognition. Nat. Neurosci. 3(Suppl.), 1199�. doi: 10.1038/81479

Rossion, B., and Jacques, C. (2008). Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. Neuroimage 39, 1959�. doi: 10.1016/j.neuroimage.2007.10.011

Rossion, B., and Jacques, C. (2012). “The N170: understanding the time course of face perception in the human brain,” in The Oxford Handbook of Event-Related Potential Components, eds S. J. Luck and E. S. Kappenman (Oxford: Oxford University Press). 115�.

Rossion, B., Joyce, C. A., Cottrell, G. W., and Tarr, M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage 20, 1609�. doi: 10.1016/j.neuroimage.2003.07.010

Rousselet, G. A., Gaspar, C. M., Pernet, C. R., Husk, J. S., Bennett, P. J., and Sekuler, A. B. (2010). Healthy aging delays scalp EEG sensitivity to noise in a face discrimination task. Front. Psychol. 1:19. doi: 10.3389/fpsyg.2010.00019

Rubin, E. (1915/1958). 𠇏igure and ground,” in Readings in Perception, eds D. C. Beardslee and M. Wertheimer (Princeton, NJ: Van Nostrand). 194�. (Original work published 1915).

Sagiv, N., and Bentin, S. (2001). Structural encoding of human and schematic faces: holistic and part-based processes. J. Cogn. Neurosci. 13, 937�. doi: 10.1162/089892901753165854

Scholte, H. S., Jolij, J., Fahrenfort, J. J., and Lamme, V. A. F. (2008). Feedforward and recurrent processing in scene segmentation: electroencephalography and functional magnetic resonance imaging. J. Cogn. Neurosci. 20, 2097�. doi: 10.1162/jocn.2008.20142

Serre, T., Oliva, A., and Poggio, T. (2007). A feedforward architecture accounts for rapid categorization. Proc. Natl. Acad. Sci. U.S.A. 104, 6424�. doi: 10.1073/pnas.0700622104

Snijders, T. M., Kooijman, V., Cutier, A., and Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Res. 1178, 106�. doi: 10.1016/j.brainres.2007.07.080

Thierry, G., Martin, C. D., Downing, P., and Pegna, A. J. (2007). Controlling for interstimulus perceptual variance abolishes N170 face selectivity. Nat. Neurosci. 10, 505�. doi: 10.1038/nn1864

Thorpe, S., Fize, D., and Marlot, C. (1996). Speed of processing in the human visual system. Nature 381, 520�. doi: 10.1038/381520a0

Tong, F., Nakayama, K., Vaughan, J. T., and Kanwisher, N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21, 753�. doi: 10.1016/S0896-6273(00)80592-9

Vecera, S. P., and O’Reilly, R. C. (1998). Figure-ground organization and object recognition processes: an interactive account. J. Exp. Psychol. Hum. Percept. Perform. 24, 441�. doi: 10.1037/0096-1523.24.2.441

Wolfe, J. M. (1983). Influence of spatial frequency, luminance, and duration on binocular rivalry and abnormal fusion of briefly presented dichoptic stimuli. Perception 12, 447�. doi: 10.1068/p120447

World Medical Association. (2013). World medical association declaration of helsinki: ethical principles for medical research involving human subjects. JAMA 310, 2191�. doi:10.1001/jama.2013.281053

Wyatte, D., Curran, T., and O’Reilly, R. (2012). The limits of feedforward vision: recurrent processing promotes robust object recognition when objects are degraded. J. Cogn. Neurosci. 24, 2248�. doi: 10.1162/jocn_a_00282

Yantis, S., and Serences, J. T. (2003). Cortical mechanisms of space-based and object-based attentional control. Curr. Opin. Neurobiol. 13, 187�. doi: 10.1016/S0959-4388(03)00033-3

Zipser, K., Lamme, V. A., and Schiller, P. H. (1996). Contextual modulation in primary visual cortex. J. Neurosci. 16, 7376�.

Keywords : EEG, face processing, visual system, low-level vision, high-level vision, categorization

Citation: Van Den Boomen C, Fahrenfort JJ, Snijders TM and Kemner C (2015) Segmentation precedes face categorization under suboptimal conditions. Front. Psychol. 6:667. doi: 10.3389/fpsyg.2015.00667

Received: 24 February 2015 Accepted: 07 May 2015
Published online: 26 May 2015.

Carl M. Gaspar, Hangzhou Normal University, China
Assaf Harel, Wright State University, USA

Copyright © 2015 Van Den Boomen, Fahrenfort, Snijders and Kemner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


EEG assessment of brain activity: Spatial aspects, segmentation and imaging

High temporal resolution and sensitivity to index different functional brain states makes the EEG a powerful tool in psychophysiology. Its full potential can now be utilized since recording technology and computational power for the large data masses has become affordable. However, basic traditional strategies in EEG need reviewing.

Conventional, spontaneous or evoked EEG traces which are used for various complex analyses give ambiguous information on EEG power (amplitude) and phase for a given point on the scalp. Principally, analysis should first be done over space, then over time, to avoid ambiguities or pre-selections. First or second spatial derivative computations can provide “reference-free” data for analyses over time. We propose to use direct, spatial approaches for the analysis of the scalp EEG field distributions when simultaneous recording in several EEG channels can be examined.

The ambiguity of the conventional EEG waveshapes results in different, equally “correct” scalp maps of EEG power of the same multichannel data for different reference electrodes. An exeption are scalp maps of EEG power computed against the common, average reference, as they are related to the reference-free spatial distribution (maps) of the maximal and minimal (extreme) field values over time, and thus are directly interpretable in terms of net orientation of the generator process.

A proposed, reference-free EEG segmentation into epochs of periodically stationary spatial distributions of the mapped scalp EEG fields uses the locations of maximal and minimal (extreme) field values at each moment in time as classifiers, and thus avoids the priviledging of two arbitrarily chosen recording points in the field.


BRIEF RESEARCH REPORT article

Jing-Shan Huang 1 , Yang Li 1 , Bin-Qiang Chen 1* , Chuang Lin 2* and Bin Yao 1
  • 1 School of Aerospace Engineering, Xiamen University, Xiamen, China
  • 2 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

The classification of electroencephalogram (EEG) signals is of significant importance in brain𠄼omputer interface (BCI) systems. Aiming to achieve intelligent classification of EEG types with high accuracy, a classification methodology using sparse representation (SR) and fast compression residual convolutional neural networks (FCRes-CNNs) is proposed. In the proposed methodology, EEG waveforms of classes 1 and 2 are segmented into subsignals, and 140 experimental samples were achieved for each type of EEG signal. The common spatial patterns algorithm is used to obtain the features of the EEG signal. Subsequently, the redundant dictionary with sparse representation is constructed based on these features. Finally, the samples of the EEG types were imported into the FCRes-CNN model having fast down-sampling module and residual block structural units to be identified and classified. The datasets from BCI Competition 2005 (dataset IVa) and BCI Competition 2003 (dataset III) were used to test the performance of the proposed deep learning classifier. The classification experiments show that the recognition averaged accuracy of the proposed method is 98.82%. The experimental results show that the classification method provides better classification performance compared with sparse representation classification (SRC) method. The method can be applied successfully to BCI systems where the amount of data is large due to daily recording.


Conclusion

The current study shows that although there is increasing evidence that categorization often results from feedforward processing (Thorpe et al., 1996 Riesenhuber and Poggio, 2000 Liu et al., 2002 Serre et al., 2007), while segmentation requires recurrent processing (Lamme and Roelfsema, 2000 Appelbaum et al., 2006 Fahrenfort et al., 2012), segmentation nevertheless precedes category-selective responses when objects lack low-level image properties to aid in fast categorization. Our results increase the understanding of the inter-relation between segmentation and categorization, and the speed of category-selective responses under varying circumstances.


A new denoising method for fMRI based on weighted three-dimensional wavelet transform

This study presents a new three-dimensional discrete wavelet transform (3D-DWT)-based denoising method for functional magnetic resonance images (fMRI). This method is called weighted three-dimensional discrete wavelet transform (w-3D-DWT), and it is based on the principle of weighting the volume subbands which are obtained by 3D-DWT. Briefly, classical DWT denoising consists of wavelet decomposition, thresholding, and image reconstruction steps. In the thresholding algorithm, the thresholding value for each image cannot be chosen exclusively. Namely, a specific thresholding value is chosen and it is used for all images. The proposed algorithm in this study can be considered as a data-driven denoising model for fMRI. It consists of three-dimensional wavelet decomposition, subband weighting, and image reconstruction. The purposes of subband weighting algorithm are to increase the effect of the subband which represents the image better and to decrease the effect of the subband which represents the image in the worst way and thus to reduce the noises of the image adaptively. fMRI is one of the popular methods used to understand brain functions which are often corrupted by noises from various sources. The traditional denoising method used in fMRI is smoothing images with a Gaussian kernel. This study suggests an adaptive approach for fMRI filtering different from Gaussian smoothing and 3D-DWT thresholding. In this study, w-3D-DWT denoising results were evaluated with mean-square error (MSE), peak signal/noise ratio (PSNR), and structural similarity (SSIM) metrics, and the results were compared with Gaussian smoothing and 3D-DWT thresholding methods. According to this comparison, w-3D-DWT gave low-MSE and high-PSNR results for fMRI data.

This is a preview of subscription content, access via your institution.


Introduction

Humans and other foveate animals – such as monkeys and birds of prey – visually scan scenes with a characteristic fixate-saccade-fixate pattern: periods of relative stability are interspersed with rapid shifts of gaze. During “fixation” the visual axis (and high-resolution foveola) is directed to an object or location of interest. For humans, the duration of the periods of stability is on the order of 0.2–0.3 s, depending on a number of factors such as task and stimulus complexity. The typical duration of saccadic eye movements in on the order of 0.01–0.1 s, depending systematically on the amplitude of the movement 1 .

If the scene contains moving target objects, or when the observer is moving through it, then stabilization of gaze on a focal object or location requires a “tracking fixation”, i.e. a smooth pursuit eye movement. Here the eye rotates to keep gaze fixed on the target. Also, when the observer’s head is bouncing due to locomotion or external perturbations, gaze stabilization involves vestibulo-ocular and optokinetic compensatory eye movements. In natural behavior, all the eye movement “types” mentioned above are usually simultaneously present, and cannot necessarily be differentiated from one another in terms of oculomotor properties or underlying neurophysiology 2,3,4,5 .

It is possible to more or less clearly experimentally isolate each of the aforementioned “types” in experiments that tightly physically constrain the visible stimuli and the patterns of movement the subject is allowed to make. Much of what we know about oculomotor control circuits is based on such laboratory experiments where the participant’s head is fixed with a chin rest or a bite bar, and the stimulus and task are restricted so as to elicit only a specific eye movement type. In order to understand how gaze control is used in natural behavior, however, it is essential to be able to meaningfully compare oculomotor behavior observed in constrained laboratory recordings to gaze recordings “in the wild” 2,5,6,7 .

Laboratory grade systems typically have very high accuracy and very low noise levels. Sampling frequencies may range from 500 to as high as 2000 Hz. As the subject’s behavior is restricted, it is possible to tailor custom event identification methods that rely on only the eye movement type of interest being present in the data (and would produce spurious results with data from free eye movement behavior). On the other hand, mobile measuring equipment has much lower accuracy and relatively high levels of noise, with sampling frequency typically between 30 and 120 Hz. The subject’s behavior is complex, calling for robust event identification that works when all eye movement types are simultaneously present. Unfortunately, these different requirements have led, and increasingly threaten to lead, the methodologies and concepts of “laboratory” and “naturalistic” research into diverging directions. For wider generalizability of results, it would be desirable to analyze eye movement events in a similar way across task settings, by using event detection methods that do not rely on restrictions or assumptions which are not valid for most natural behavior.

Here, we introduce Naive Segmented Linear Regression (NSLR), a new method for eye-movement signal denoising and segmentation, and a related event classification method based on Hidden Markov Models (NSLR-HMM). The approach is novel in that it differs in concept from the traditional workflow of pre-filtering, event detection and segmentation. Instead, it integrates denoising into segmentation which is now the first – rather than the last – step in the analysis, and then performs classification on the denoised segments (rather than sample-to-sample). The method is general in two ways: Firstly, it performs a four-way identification of fixations, saccades, smooth pursuits and post-saccadic oscillations, which allows for experiments with complex gaze behavior. Secondly, it can be directly applied to noisy data to recover robust gaze position and velocity estimates, which means it can be used on both high-quality lab data and more challenging mobile data on natural gaze behavior. The method also automatically estimates the signal’s noise level and determines gaze feature parameters from human classification examples in a data-driven manner, requiring minimal manual parameter setting.
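
To make the “segmentation as denoising” idea concrete, the toy sketch below approximates a noisy one-dimensional gaze trace with piecewise-linear segments, so that the segment boundaries and a denoised signal emerge in a single step. It is not the NSLR algorithm itself: the recursive largest-residual split and the fixed residual threshold are stand-ins for NSLR’s penalized split criterion and automatic noise estimation.

```python
# Toy piecewise-linear segmentation of a 1-D gaze trace (not NSLR itself).
import numpy as np

def piecewise_linear_segments(t, x, max_residual_deg=0.5):
    """Return index pairs (i, j) of linear segments covering t[i]..t[j]."""
    def residuals(i, j):
        # Least-squares line over samples i..j (inclusive).
        slope, intercept = np.polyfit(t[i:j + 1], x[i:j + 1], 1)
        return np.abs(x[i:j + 1] - (slope * t[i:j + 1] + intercept))

    def split(i, j):
        if j - i < 3:
            return [(i, j)]
        res = residuals(i, j)
        k = i + int(np.argmax(res))
        # Stop splitting when the fit is already within the assumed noise level.
        if res.max() <= max_residual_deg or k in (i, j):
            return [(i, j)]
        return split(i, k) + split(k, j)

    return split(0, len(t) - 1)

# Example: a fixation, a rapid gaze shift, then another fixation.
t = np.linspace(0, 1, 200)
x = np.where(t < 0.5, 0.0, 5.0) + 0.2 * np.random.randn(t.size)
print(piecewise_linear_segments(t, x))  # prints the recovered segment index ranges
```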

We believe this is an important development direction for eye-movement signal analysis, as it can help counteract the historical tendency in eye tracking to develop operational definitions of eye movement “types” based on very specific and restrictive oculomotor tasks, with event identification methods tailor-made for them (and then to “reify” the types as separate phenomena). In contrast, our method has a number of desirable features that compare favorably with the state of the art and can help bring the traditional oculomotor and the more naturalistic gaze behavior research traditions closer together:

The NSLR method is based on a few simple and intuitively transparent basic concepts.

It requires no signal preprocessing (e.g. filtering, since denoising is inherent in the segmentation step) and no user-defined filtering parameters.

Segmentation is conceptually parsimonious and uses only a few parameters (that can be estimated from the data itself).

No “ground truth” training data from human annotators is necessary for segmentation. (Human coding data is needed for classification, which is treated as a separate subproblem).

The HMM classifier can identify four types of eye movement (saccade, PSO, fixation, pursuit).

This classification uses global signal information; it is not based on sample-wise application of simple criteria such as duration or velocity thresholds.

Because it identifies a wide range of oculomotor events and denoises effectively, the method can be used both for low-noise laboratory data from tasks that elicit only one or two types of oculomotor events and for high-noise field data collected during complex behavior. This makes it well suited to harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) perspectives on eye movement behavior.

A full C++ and Python implementation of the method is available under an open source license at https://gitlab.com/nslr/.
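
As a usage illustration, the sketch below assumes the Python bindings are installed as an `nslr_hmm` package exposing a `classify_gaze(t, xy)` helper and class constants (FIXATION, SACCADE, PSO, SMOOTH_PURSUIT), as suggested by the project’s documentation; these names and return values are assumptions and should be checked against the repository above.

```python
# Hedged usage sketch; the nslr_hmm API names are assumptions, verify against
# https://gitlab.com/nslr/ before relying on them.
import numpy as np
import nslr_hmm  # assumed import name of the Python bindings

# t: sample timestamps in seconds; xy: horizontal/vertical gaze in degrees.
t = np.arange(0, 2.0, 1 / 60.0)                      # e.g. a 60 Hz mobile tracker
xy = np.column_stack([np.sin(t), np.zeros_like(t)])  # placeholder gaze trace

# Segment, denoise, and classify in one call.
sample_class, segmentation, seg_class = nslr_hmm.classify_gaze(t, xy)

# Each segment carries its time span and the denoised piecewise-linear gaze.
labels = {nslr_hmm.FIXATION: "fixation", nslr_hmm.SACCADE: "saccade",
          nslr_hmm.PSO: "PSO", nslr_hmm.SMOOTH_PURSUIT: "pursuit"}
for segment, cls in zip(segmentation.segments, seg_class):
    print(segment.t[0], segment.t[-1], labels[cls])
```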


Consumer neuroscience: Assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking

Application of neuroscience methods to analyze and understand human behavior related to markets and marketing exchange has recently gained research attention. The basic aim is to guide the design and presentation of products so that they are as compatible as possible with consumer preferences. This paper investigates physiological decision processes while participants undertook a choice task designed to elicit preferences for a product. The task required participants to choose their preferred crackers, described by shape (square, triangle, round), flavor (wheat, dark rye, plain), and topping (salt, poppy, no topping). The two main research objectives were (1) to observe and evaluate the cortical activity of different brain regions and the interdependencies among the electroencephalogram (EEG) signals from these regions, and (2) unlike most research in this area, which has focused mainly on liking or disliking certain products, to quantify, based on mutual information, the importance of the different cracker features that contribute to the product design. We used the commercial Emotiv EPOC wireless EEG headset with 14 channels to collect EEG signals from participants, and a Tobii-Studio eye tracker system to relate the EEG data to the specific choice options (crackers). Subjects were shown 57 choice sets; each choice set described three choice options (crackers). Patterns of cortical activity were obtained in the five principal frequency bands: Delta (0–4 Hz), Theta (3–7 Hz), Alpha (8–12 Hz), Beta (13–30 Hz), and Gamma (30–40 Hz). There was clear phase synchronization between the left and right frontal and occipital regions, indicating interhemispheric communication during the choice task for the 18 participants. Results also indicated a clear and significant change (p < 0.01) in EEG power spectral activity, taking place mainly in the frontal (delta, alpha, and beta across F3, F4, FC5, and FC6), temporal (alpha, beta, and gamma across T7), and occipital (theta, alpha, and beta across O1) regions when participants indicated their preferences for their preferred crackers. Additionally, our mutual information analysis indicated that cracker flavor and topping were more important factors in the buying decision than cracker shape.

Highlights

► This paper investigates physiological decision processes during decision making. ► The task required participants to choose their preferred crackers (shape, flavor, and topping). ► We observe and evaluate the cortical activity of different brain regions. ► We quantify the importance of different cracker features using mutual information. ► Clear phase synchronization was observed between the left and right frontal and occipital regions.
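
For readers who want to reproduce the band-wise analysis in spirit, the sketch below shows one common way to estimate per-channel power in the five frequency bands listed in the abstract, using Welch’s method. The sampling rate, simulated data, and function name are illustrative assumptions, not the authors’ actual pipeline.

```python
# Per-channel band power via Welch's method; band edges follow the abstract.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0, 4), "theta": (3, 7), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (30, 40)}

def band_powers(eeg, fs):
    """eeg: (n_channels, n_samples) array; returns dict band -> per-channel power."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    df = freqs[1] - freqs[0]
    powers = {}
    for band, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        # Approximate the integral of the PSD over the band for each channel.
        powers[band] = psd[:, mask].sum(axis=-1) * df
    return powers

# Example with 60 s of simulated data from 14 channels at 128 Hz
# (the EPOC headset's nominal channel count and sampling rate).
fs = 128
eeg = np.random.randn(14, 60 * fs)
print({band: p.mean() for band, p in band_powers(eeg, fs).items()})
```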

