In psychophysics, why are log luminance rather than absolute luminance values reported?


Are there any papers which justify converting into log luminance? For example papers showing humans being sensitive to changes in log luminance rather than luminance per se?

In general, subjective sensation increases linearly with the log of physical intensity; this relationship is described by Fechner's law.

We are sensitive to small variations in intensity when light is dim, but we need large differences under conditions of high luminance; the just-noticeable difference is proportional to the baseline intensity (Weber's law). Taken together, these findings are described by the widely known Weber–Fechner law, which justifies the log scales used.
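The relationship can be made concrete with a short sketch (the constants below are illustrative only; real Weber fractions and scaling factors depend on the modality and viewing conditions):

```python
import math

WEBER_FRACTION = 0.08  # illustrative constant; real values depend on modality
K = 1.0                # illustrative Fechner scaling constant
I0 = 1.0               # illustrative absolute threshold intensity

def jnd(intensity):
    """Weber's law: the just-noticeable difference grows with intensity."""
    return WEBER_FRACTION * intensity

def sensation(intensity):
    """Fechner's law: sensation grows with the log of physical intensity."""
    return K * math.log(intensity / I0)

# Equal *ratios* of intensity give equal steps in sensation, which is why
# luminance is reported (and plotted) on a log scale.
step_dim = sensation(20) - sensation(10)          # doubling at low luminance
step_bright = sensation(2000) - sensation(1000)   # doubling at high luminance
```

Under this model, doubling the luminance produces the same sensation step whether the light is dim or bright, so equal distances on a log-luminance axis correspond to equal perceptual steps.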

Another well-known example is the decibel scale in hearing, which is also a log scale.

Any basic textbook on perception or psychophysics will explain this; a reference is given below.

- Masin et al. (2009). Journal of the History of the Behavioral Sciences, 45(1), 56–65.


Study species

We used adult triggerfish Rhinecanthus aculeatus (Linnaeus 1758) (n=15) of unknown sex, ranging in size from 6 to 16 cm standard length (SL). This species inhabits shallow tropical reefs and temperate habitats throughout the Indo-Pacific and feeds on algae, detritus and invertebrates (Randall et al., 1997). They are relatively easy to train for behavioural studies (e.g. Green et al., 2018), and their visual system has been well studied, including their colour vision capabilities (Champ et al., 2016; Cheney et al., 2013; Pignatelli et al., 2010), neuroanatomy (Pignatelli and Marshall, 2010) and spatial vision (Champ et al., 2014). They have trichromatic vision based on a single cone, containing a short-wavelength-sensitive visual pigment (SW photoreceptor λmax=413 nm), and a double cone, which houses the medium-wavelength-sensitive pigment (MW photoreceptor λmax=480 nm) and the long-wavelength-sensitive pigment (LW photoreceptor λmax=528 nm) (Cheney et al., 2013). The double cone members are used independently in colour vision (Pignatelli et al., 2010), but are also thought to be used in luminance vision (Marshall et al., 2003; Siebeck et al., 2014), as in other animals such as birds and lizards (Lythgoe, 1979). However, it is not clear whether both members of the double cone are used for luminance perception via electrophysiological coupling (Marchiafava, 1985; Siebeck et al., 2014).

We based the current study on the assumption that both members of the double cone contribute to luminance perception, as in previous studies that have modelled luminance perception in R. aculeatus (Mitchell et al., 2017; Newport et al., 2017). These studies used the summed input of both double cone members (MW+LW), whereas our study uses the averaged output of both members [(MW+LW)/2], as suggested by Pignatelli and Marshall (2010) and Pignatelli et al. (2010). Additionally, Cheney et al. (2013) used the LW receptor response rather than both double cone members for luminance contrast modelling in R. aculeatus, based on discussions in Marshall et al. (2003). However, Michelson contrast, Weber contrast and ΔS contrast values are identical for ft/b=MW+LW and ft/b=(MW+LW)/2 (where ft/b describes the relative luminance contrast between the target and the background; see Eqn 2 below). Using the LW member of the double cone only (as opposed to both members) causes less than 1% difference (well below measurement error) in receptor stimulation, because of the lack of chromaticity of the stimuli and the strong overlap of the spectral sensitivities of the two double cone members (Cheney et al., 2013).

Fish were obtained from an aquarium supplier (Cairns Marine Pty Ltd, Cairns, QLD, Australia), shipped to The University of Queensland, Brisbane, and housed in individual 120 l tanks (40×80×40 cm W×L×H). The fish were kept at 25°C and pH 8.2, at a salinity of 1.025 g cm−3, and fed twice daily with a mix of frozen shrimp and squid. Seawater was prepared using aged water mixed with marine salt. Between trials and training, lighting was provided by white fluorescent light on a 12 h (06:00 h–18:00 h) cycle. Fish were acclimatised for at least 1 week before training commenced. Experiments were conducted in September–November 2017. All experimental procedures for this study were approved by the University of Queensland Animal Ethics Committee (SBS/111/14/ARC).
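The claimed equivalence of the summed and averaged double cone outputs follows directly from contrast measures being ratios; a minimal sketch with hypothetical quantum catches (the values below are made up for illustration):

```python
def michelson(f_t, f_b):
    """Michelson contrast between target (f_t) and background (f_b)."""
    return (f_t - f_b) / (f_t + f_b)

def weber(f_t, f_b):
    """Weber contrast between target and background."""
    return (f_t - f_b) / f_b

# Hypothetical double cone quantum catches (illustrative values only)
mw_t, lw_t = 0.42, 0.55   # target
mw_b, lw_b = 0.30, 0.38   # background

summed_t, summed_b = mw_t + lw_t, mw_b + lw_b          # f = MW + LW
averaged_t, averaged_b = summed_t / 2, summed_b / 2    # f = (MW + LW) / 2

# Both measures are ratios, so the common factor of 1/2 cancels out,
# giving identical contrast values for the summed and averaged definitions.
```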

Stimulus creation and calibration

We used a custom program written in MATLAB (MathWorks, Natick, MA, USA) to create the stimuli (available on GitHub). This program allowed us to specify the RGB values of the background and target spot, and randomly allocate the target spot (1.6 cm diameter) to a position on the background. The size of the spot was chosen to be well within the spatial acuity of R. aculeatus (Champ et al., 2014) so that it could be easily resolved by the fish from anywhere in their aquaria. Stimuli, distractors and backgrounds were printed on TrendWhite (Steinbeis Papier GmbH, Steinberg, Germany) ISO 80 A4 recycled paper using an HP LaserJet Pro 400 colour M451dn printer (Hewlett-Packard, Palo Alto, CA, USA). Stimuli were then laminated using 80 μm matte laminating pouches (GBC, Chicago, IL, USA). Throughout the experiment, any stimuli with detectable scratches or damage were replaced immediately.

To ensure all stimuli were achromatic, reflectance measurements were plotted in colour space as per Champ et al. (2016) and Cheney et al. (2019). Target and background colours were <1 ΔS from the achromatic locus in the RNL colour space, as per eqns 1–4 in Hempel de Ibarra et al. (2001). Photoreceptor stimulation was calculated using spectral sensitivities of triggerfish from Cheney et al. (2013). Measures of photoreceptor noise are not available in this species; therefore, we assumed a cone ratio of 1:2:2 (SW:MW:LW) with a standard deviation of noise in a single cone of 0.05, as per Champ et al. (2016) and Cheney et al. (2019). The cone abundance was normalised relative to the LW cone, which resulted in channel noise levels (univariant Weber fractions) of 0.07:0.05:0.05 (SW:MW:LW).
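As an illustration of the achromaticity check, the sketch below computes chromatic distance ΔS for a trichromat using the standard receptor-noise-limited formulation with the channel noise values above; the quantum catches are hypothetical:

```python
import math

# Channel noise (Weber fractions) for SW:MW:LW, as used in the study
E_SW, E_MW, E_LW = 0.07, 0.05, 0.05

def delta_s(q_target, q_background):
    """Chromatic distance (ΔS, in JND units) for a trichromat under the
    receptor-noise-limited (RNL) model of Vorobyev & Osorio (1998)."""
    f1, f2, f3 = (math.log(qt / qb) for qt, qb in zip(q_target, q_background))
    num = (E_SW ** 2 * (f3 - f2) ** 2 +
           E_MW ** 2 * (f3 - f1) ** 2 +
           E_LW ** 2 * (f2 - f1) ** 2)
    den = (E_SW * E_MW) ** 2 + (E_SW * E_LW) ** 2 + (E_MW * E_LW) ** 2
    return math.sqrt(num / den)

# A perfectly achromatic pair stimulates all three cones in equal proportion,
# so all log ratios are equal and ΔS = 0; the stimuli were kept below 1 ΔS.
achromatic = delta_s((0.50, 0.50, 0.50), (0.25, 0.25, 0.25))
chromatic = delta_s((0.60, 0.50, 0.40), (0.50, 0.50, 0.50))
```

Note that an achromatic pair can still differ strongly in luminance (as here, a twofold catch difference) while remaining at ΔS = 0 in the chromatic channel.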

We quantified luminance contrast using calibrated digital photography (Stevens et al., 2007), taking pictures of each stimulus combination out of the water with an Olympus E-PL5 PEN Lite camera fitted with a 60 mm macro lens (Fig. S1). Two EcoLight KR96 30 W white LED lights (Eco-lamps Inc., Hong Kong) were used to provide even illumination between 400 and 700 nm (Fig. S2). Pictures were analysed using the ‘Multispectral Image Calibration and Analysis’ (MICA) Toolbox (Troscianko and Stevens, 2015) to calculate the cone capture quanta of the double cone. The double cone stimulation was calculated as the average stimulation of the medium-wavelength (MW) and long-wavelength (LW) cones, as per Pignatelli et al. (2010). We used a spatial acuity estimate of 2.75 cycles per degree (Champ et al., 2014) at a 15 cm viewing distance, using AcuityView (Caves and Johnsen, 2018) implemented in MICA's QCPA package (van den Berg et al., 2020).

Schematic representation of detection scenarios. (A) Group 1 (dark background). (B) Group 2 (bright background). Note, figure proportions are not to scale. Stimuli are shown with the maximum contrast used in the experiment. Tbd, bright spot on a dark background; Tdd, dark spot on a dark background; Tbb, bright spot on a bright background; Tdb, dark spot on a bright background. Backgrounds were A4 size and the spots were 1.6 cm in diameter, randomly placed for each trial.


Summary of all stimulus contrasts across both groups in ΔS and Michelson contrast


Experimental setup

Aquaria were divided into two halves by a removable grey, opaque PVC partition. This enabled the fish to be separated from the testing arena while the stimuli were set up. Stimuli were displayed on vertical, grey PVC boards placed against one end of the aquaria. Tanks were illuminated using the same white LED lights (EcoLight KR96 30 W) used for stimulus calibration. To ensure equal light levels in all tanks, side-welling absolute irradiance was measured using a calibrated Ocean Optics USB2000 spectrophotometer (Ocean Insight, Orlando, FL, USA) with a 180 deg cosine corrector and a 400 μm diameter optic fibre cable fixed horizontally in the tank (Fig. S2).

Animal training

Fish were trained to peck at the target spot using a classical conditioning approach. First, fish were trained to pick a small piece of squid off a black or white (randomly chosen) spot (1.6 cm diameter) on the grey background corresponding to the treatment group (‘bright’ or ‘dark’; Table 1). To reduce possible cues for the fish, the tester stood behind the fish during a trial so that the fish would move away from the tester when approaching a stimulus. We trained the fish to detect target spots either brighter or darker than their background to reduce hypersensitivity due to an expected direction of stimulus contrast. This is an adaptation of the method of ‘constant stimuli’ (Colman, 2008; Laming and Laming, 1992; Pelli and Bex, 2013), aimed at dissociating perceptual memories associated with a task (e.g. the stimulus always being brighter or darker than its background) and therefore preventing hypersensitivity. Training fish to react to stimuli that were either brighter or darker was also intended to produce thresholds more closely related to a natural context, as prey items in natural environments can be both brighter and darker than their background. Second, once fish consistently removed the food reward from the black and white target spots, a second food reward was presented from above using forceps. Once fish were accustomed to this, the final stage of training was a food reward given from above once they had tapped at the target stimulus (without food). Training consisted of up to two sessions per day, with 6–10 trials per session. Fish moved to the testing phase when successful in performing the task in >80% of trials over at least 6 consecutive sessions. A trial was considered unsuccessful if the fish took longer than 90 s to make a choice or if it pecked at the background more than twice. This criterion was chosen because the fish are sometimes distracted by small particles or reflections, which they often peck at; a fish doing this twice was a more reliable indicator of an inability to find the stimulus while still being motivated enough to guess. Testing was suspended for the day if a fish showed multiple timeouts for obviously easy contrasts, on the assumption that the fish was not motivated to perform the task. However, this occurred rarely (<1% of trials), with smaller fish being more susceptible to having been fed enough to lose their appetite.

Animal testing

We randomly allocated fish into two groups: group 1 (n=7) had to find and peck at target spots that were brighter (Tbd) or darker (Tdd) than a relatively dark background; group 2 (n=8) had to find and peck at target spots that were brighter (Tbb) or darker (Tdb) than a relatively bright background (Fig. 2, Table 1). As in training, the target spots were presented in a random position against an A4-sized achromatic background in two sessions per day, each consisting of 6–10 trials depending on the appetite of the fish. The trials for each session were chosen pseudo-randomly from all possible contrasts (shuffling the stack of all printed stimuli and choosing a random set); thus, fish were presented with both darker and brighter spots relative to their background in each session. Each stimulus was presented a minimum of 6 times (Table 1). We ensured that both easier and harder contrast stimuli were presented in each session to maintain fish motivation; thus, if a chosen series consisted of only barely detectable stimuli, we would manually add one or two easy ones, and vice versa. While stimulus placement on the background using the MATLAB script was truly random, we only ever had two or three different printed versions of each stimulus; however, the random selection of stimulus sequence, the random rotation of the printed stimuli and the re-printing of damaged stimuli (placed in new random positions by the MATLAB script) resulted in a non-predictable, pseudo-random placement of stimuli. Motivation was considered low when the animal did not engage in the trial immediately; if this occurred, trials were ceased for that fish until the next session. However, this rarely happened and was further minimised by carefully avoiding overfeeding the animals. A trial was considered unsuccessful if the fish took longer than 90 s to make a choice or if it pecked at the background more than twice. Incorrect pecks were recorded, and time to detection was measured from when the fish swam past the divider to when it successfully pecked at the target spot.

Statistical analysis

Psychometric curves were fitted to the pooled data of each scenario, with percentage correct detection per stimulus as the response variable and stimulus contrast (measured as Michelson contrast) as the independent variable, using the R package quickpsy (Linares and Lopez-Moliner, 2015). The best model fit (cumulative normal or logistic) was determined using the lowest AIC, as per Yssaad-Fesselier and Knoblauch (2006) and Linares and Lopez-Moliner (2015), and is expressed both individually for each scenario and as the sum across all scenarios. Prior to pooling individuals for each scenario, we conducted a median absolute deviation (MAD) test for outliers (Leys et al., 2013) with adjusted, moderately conservative criteria based on a Shapiro–Wilk test of normality (Royston, 1982). We interpolated the 50% correct detection thresholds with a 95% confidence interval (CI) from these curves. Thresholds between the fitted curves for each pooled scenario were compared as per Jörges et al. (2018) using the bootstrap (Boos, 2003) implemented in quickpsy (100 permutations). The Bonferroni method (Bland and Altman, 1995) was used to adjust the confidence level of the intervals to 1−0.05/n, with n corresponding to the number of comparisons.
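The curve fitting was done in R with quickpsy; the sketch below shows the equivalent logic in Python (hypothetical data, SciPy in place of quickpsy), fitting a logistic psychometric function and reading off the 50% correct detection threshold:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, alpha, beta):
    """Logistic psychometric function; alpha is the 50% threshold."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

# Hypothetical proportion-correct data per Michelson contrast level
contrast = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
p_correct = np.array([0.05, 0.15, 0.40, 0.75, 0.95, 0.99])

# Least-squares fit of threshold (alpha) and slope (beta)
params, _ = curve_fit(logistic, contrast, p_correct, p0=[0.05, 50.0])
threshold_50 = params[0]  # interpolated 50% correct detection threshold
```

quickpsy additionally fits by maximum likelihood on trial counts and bootstraps confidence intervals around the threshold; this sketch shows only the curve-plus-interpolation step.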

Experiment 1

Experiment 1 was designed to investigate the influence of item-background contrast on contextual learning and transfer under photopic vision conditions. To this end, the display contrast was set to high in the training session but to low in the subsequent transfer session in Experiment 1A. Conversely, in Experiment 1B, the display contrast was set to low in the training session, but set to high in the transfer session (see Fig. 1).



Two separate groups of 30 participants each took part in Experiments 1A and 1B (18 females in each group; mean ages = 25.2 and 25.4 years, respectively). All participants had normal or corrected-to-normal visual acuity. The sample size was estimated by a power analysis using G*Power (Prajapati, Dunne, & Armstrong, 2010). In a standard contextual-cueing task, the effect size is relatively large (e.g., f > .65 in Zang, Shi, Müller, & Conci, 2017). Here, we used f = .65 in the estimation, which yielded a sample size of 28 per experiment to reach a power of 90% at an α level of .05. To be more conservative, we recruited 30 participants for each experiment.

All participants gave written informed consent prior to the experiment and were paid for their participation. The study was approved by the ethics committee of the Ludwig Maximilian University of Munich (LMU) Psychology Department in accordance with the Declaration of Helsinki, and the procedures were carried out in accordance with the relevant guidelines and regulations. (This also applies to Experiments 2 and 3, reported below.)


The experiment was conducted in a dimly lit experimental cabin, with stimuli presented on a 21-inch LACIE CRT monitor (screen resolution: 1,024 × 768 pixels; refresh rate: 100 Hz). The monitor brightness and contrast were set to 50%, and the cabin was lit by a ceiling lamp to a photopic level of environmental lighting (21 cd/m²). The viewing distance was fixed at 57 cm, maintained with the support of a chin rest. Stimulus presentation was controlled using Psychtoolbox (Brainard, 1997) and MATLAB code.


The search stimuli consisted of one T-shaped target rotated 90° or 270° from the vertical (i.e., the T was oriented in either a rightward-pointing or a leftward-pointing direction) and 15 L-shaped distractors (randomly rotated by 0°, 90°, 180°, or 270°). Similar to previous studies (Jiang & Chun, 2001; Zang et al., 2015), the L distractors had a small offset (0.15°) at the line junctions, making the Ls more similar to the target T. Each stimulus subtended 1.0° of visual angle. All search items were randomly presented on four concentric (invisible) circles with radii of 2°, 4°, 6°, and 8° of visual angle, respectively. Targets appeared only on the second or the third circle, while distractors could appear on all four circles; this item arrangement is identical to that used in previous contextual-cueing studies (Annac et al., 2013).

As illustrated in Fig. 1, the search items (T and Ls) were presented on a dark-gray background (1.76 cd/m²). In Experiment 1A, the luminance of the search items was set to high (25.38 cd/m²) during the training session, and to low (2.33 cd/m²) during the transfer session. Conversely, in Experiment 1B, search-item luminance was low (2.33 cd/m²) during the training session, but high (25.38 cd/m²) during the transfer session. We calculated the display contrast (0.87 and 0.14 for the high-contrast and low-contrast displays, respectively) in terms of the Michelson contrast, which is defined as (Ii − Ib)/(Ii + Ib), where Ii and Ib represent the luminance of the search items and the background, respectively.
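The two display contrasts can be reproduced directly from the reported luminances:

```python
def michelson_contrast(l_item, l_background):
    """Michelson contrast (Ii − Ib) / (Ii + Ib), luminances in cd/m²."""
    return (l_item - l_background) / (l_item + l_background)

BACKGROUND = 1.76   # dark-gray background, cd/m²
HIGH_ITEM = 25.38   # high-luminance search items, cd/m²
LOW_ITEM = 2.33     # low-luminance search items, cd/m²

high_contrast = michelson_contrast(HIGH_ITEM, BACKGROUND)  # ≈ 0.87
low_contrast = michelson_contrast(LOW_ITEM, BACKGROUND)    # ≈ 0.14
```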


Both experiments consisted of a training session of 25 blocks, followed by a transfer session of five blocks and a recognition test. Each block consisted of 16 trials, eight with “old” and eight with “new” displays, presented in randomized order. For the old displays, the locations of all the search items (both the T and the Ls) were kept constant and repeated once per block throughout the experiment; for the new displays, by contrast, the locations and orientations of the distractors were randomly determined for each presentation. To maintain comparable repetitions of the target locations for both old and new displays, targets also appeared at eight predefined locations in the new displays. The orientation of the target (leftward pointing vs. rightward pointing) was randomly selected for each search display, whether new or old, thus preventing any RT advantage from a constant target orientation in old versus randomly variable orientation in new displays.

Participants were instructed to find the target T amongst the distractor Ls and discern its orientation (leftward pointing or rightward pointing), as rapidly and accurately as possible, by pressing either the left or the right arrow key on the keyboard with their left or right index finger, respectively. Each trial started with the presentation of a central fixation cross for 800–1,000 ms, followed by a search display that remained on the screen until a response was made or until the presentation exceeded 10 seconds. The next trial started after a random intertrial interval (ITI) of 1.0–1.2 seconds.

Prior to performing the visual search task, participants practiced the task in one block of 16 trials, in which the luminance contrast of the stimuli and the ambient lighting were set to the more challenging settings used in the training or the test session (i.e., low contrast in both Experiments 1A and 1B). This was done to ensure that participants were able to actually perform the task under the more difficult conditions, the assumption being that if participants reached an accuracy greater than 75% in the more difficult practice condition, they would also be able to perform the task under the easier condition. Participants who performed worse than 75% correct practiced the task again for two or three blocks, until they reached the accuracy criterion. No participants were excluded based on this criterion. The item configurations displayed in the practice session were not used in the later parts of the experiment. Participants were free to take a break between blocks.

After the formal visual search experiment (i.e., training and test sessions), participants received one block of the recognition test, consisting of eight old-display and eight new-display trials. Participants had to make two-alternative forced choices as to whether a given display was an “old” or a “new” one by pressing the left or the right arrow key, respectively. The ambient lighting condition and the display contrast were the same as was used during the training sessions. The same dark-adaptation procedure was adopted for the recognition test in the mesopic environment. Of note, participants were explicitly informed before the recognition test that half of the recognition displays were old while the other half were new.

Statistical analyses

Statistical testing was mainly based on analyses of variance (ANOVAs). To establish whether critical nonsignificant effects favor the null hypothesis, we calculated Bayes factors (BF) with JASP (0.11.1), using the default Cauchy settings (i.e., r-scale fixed effects = 0.5, r-scale random effects = 1, r-scale covariates = 0.354). Likewise, for Bayesian t tests, we used the default Cauchy prior (scale of 0.707). All Bayes factors reported for ANOVA main effects and interactions are “inclusion” Bayes factors calculated across matched models. Inclusion Bayes factors compare models with a particular predictor to models that exclude that predictor. That is, they indicate the amount of change from prior inclusion odds (i.e., the ratio between the total prior probability for models including a predictor and the prior probability for models that do not include it) to posterior inclusion odds. Using inclusion Bayes factors calculated across matched models means that models that contain higher-order interactions involving the predictor of interest were excluded from the set of models on which the total prior and posterior odds were based. Inclusion Bayes factors provide a measure of the extent to which the data support inclusion of a factor in the model. BF values less than 0.33 are taken to provide substantial evidence for the null hypothesis (Kass & Raftery, 1995).


Error rates

The proportions of trials with response errors and response failures (trials without a response within the allowed time) were low (Experiment 1A: errors, 2.22%; failures, 0.85%; Experiment 1B: errors, 3.55%; failures, 1.69%). To examine for potential speed–accuracy trade-offs, we grouped the trials into four quartile subsets according to the response times (RTs) and examined the respective (quartile-subset) error rates. This analysis revealed that most errors were made with the slowest responses (see Fig. 2). One-way repeated-measures ANOVAs confirmed this RT quartile-subset effect to be significant for each experiment: Experiment 1A, F(3, 87) = 23.79, p < .001, ηp² = .45; Experiment 1B, F(3, 87) = 34.80, p < .001, ηp² = .55. This effectively rules out a trade-off between the accuracy and speed of responses. Accordingly, trials with response errors and failures to respond were excluded from further RT analysis.
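The quartile-subset analysis can be sketched as follows (simulated trial data; the published analysis was computed per participant and then submitted to a repeated-measures ANOVA):

```python
import numpy as np

def error_rate_by_rt_quartile(rts, correct):
    """Split trials into four subsets at the RT quartiles and return the
    error rate within each subset (a speed-accuracy trade-off check)."""
    rts = np.asarray(rts)
    correct = np.asarray(correct, dtype=bool)
    q1, q2, q3 = np.percentile(rts, [25, 50, 75])
    bins = np.digitize(rts, [q1, q2, q3])  # 0..3 = fastest..slowest quartile
    return [1.0 - correct[bins == b].mean() for b in range(4)]

# Simulated trials in which all errors fall among the slowest responses
rng = np.random.default_rng(0)
rts = rng.uniform(0.4, 3.0, 400)               # response times in seconds
correct = rts < np.quantile(rts, 0.9)          # slowest 10% are errors
rates = error_rate_by_rt_quartile(rts, correct)
```

If errors pile up in the slowest quartile rather than the fastest, responses are not being traded for speed, which is the pattern reported above.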

Mean error rates as a function of RT quartile subset, for each experiment. Q1, Q2, and Q3 denote the 25%, 50%, and 75% quartiles, respectively

Contextual-cueing effects

For the RT analysis, we grouped every five consecutive trial blocks (of 16 trials each) into an “epoch” (of 80 trials), yielding five task epochs for the training session and one epoch for the transfer session. Figures 3a–b show the mean RT as a function of the epoch and display context. To examine the contextual-cueing effects, for each experiment, RT performance in the training session was subjected to a repeated-measures ANOVA with the factors context (old vs. new) and epoch (1 to 5), and RT performance in the transfer session by an ANOVA with the single factor context.

Results of Experiment 1. a–b Mean RTs, with associated standard errors, for the old (open circles) and new contexts (open triangles) as a function of task epoch. Light background shading indicates that the experiment was conducted under photopic vision. In Experiment 1A, stimulus contrast was high (HC) in the training session, but low (LC) in the transfer session; this was reversed in Experiment 1B. c Percentage change (negative: decrease; positive: increase) of the contextual-cueing (CC) effect from the last epoch of the training session to the test session for Experiments 1A–B. The change was significant for Experiment 1A (p = .043), but only marginally significant for Experiment 1B (p = .07)

Using a standard setting (i.e., high item-background contrast in the photopic environment), the training session of Experiment 1A replicated the standard contextual-cueing effect, F(1, 29) = 5.81, p = .022, ηp² = .17, with an overall RT facilitation of 160 ms for the old versus new displays. The effect of epoch was also significant, F(4, 116) = 22.07, p < .001, ηp² = .43: response speed increased across the training session (436 ms faster RTs in Epoch 5 vs. Epoch 1), indicative of procedural learning (i.e., general learning of how to perform the task). The significant Context × Epoch interaction, F(4, 116) = 2.54, p = .044, ηp² = .08, indicates that the contextual-cueing effect developed as the experiment progressed (see Fig. 3a).

For the subsequent transfer session of Experiment 1A, in which the item-background contrast was switched from high to low, a paired-sample t test with context (old vs. new) as a factor showed a nonsignificant result, t(29) = 0.64, p = .53, Cohen's d = 0.12, BF10 = 0.24, indicating reduced contextual cueing. Note that the mean RT was still 72 ms faster for old versus new configurations; yet, owing to large interparticipant variation, this difference was not significant (in fact, the Bayes factor favors the null hypothesis of no contextual cueing). To examine the transfer effect of contextual cueing, we further estimated the change of contextual cueing from the last epoch of the training session (i.e., Epoch 5) to the test session (i.e., Epoch 6) for each experiment. Given that the changes of the lighting and stimulus-contrast conditions between the training and test sessions had a substantial impact on general response speed, we calculated the transfer effect based on the relative contextual-cueing magnitudes (calculated by relating the mean contextual-cueing effect to the mean RT). Figure 3c depicts the percentage change of the normalized contextual-cueing effect (i.e., the RT difference for new minus old displays related to the RT for new displays), 100 × (RT(new) − RT(old)) / RT(new), from the training to the test session. A simple t test revealed the reduction of the cueing effect (−7.36%) to be significant, t(29) = 2.11, p = .043. Together with the nonsignificant cueing effect in the test session (see above), this indicates that contextual cues extracted and learned from high-contrast displays (in the training session) could not be effectively transferred to low-contrast displays; that is, presenting search displays with low item-background contrast in daylight conditions impedes the expression of (acquired) contextual cueing.
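The normalization can be sketched as follows, using hypothetical mean RTs chosen to roughly mirror the reported 160 ms training and 72 ms transfer effects:

```python
def contextual_cueing_pct(rt_new, rt_old):
    """Normalized contextual-cueing effect:
    100 × (RT(new) − RT(old)) / RT(new), in percent of the new-display RT."""
    return 100.0 * (rt_new - rt_old) / rt_new

# Hypothetical mean RTs (ms); only the differences mirror the reported effects
training = contextual_cueing_pct(rt_new=1200.0, rt_old=1040.0)  # 160 ms effect
transfer = contextual_cueing_pct(rt_new=1500.0, rt_old=1428.0)  # 72 ms effect
change = transfer - training  # negative = reduced cueing after the switch
```

Expressing the effect as a percentage of the new-display RT corrects for the overall slowing that the contrast switch itself produces, which is why the transfer analysis used this relative measure rather than raw RT differences.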

By contrast, when low-contrast displays were presented under photopic conditions in the training session of Experiment 1B (see Fig. 3b), neither the main effect of context, F(1, 29) = 0.38, p = .54, ηp² = .01, BFincl = 0.21, nor the Context × Epoch interaction, F(4, 116) = 1.41, p = .23, ηp² = .05, BFincl = 0.088, turned out to be significant, with the Bayes factor favoring the null hypothesis of no contextual cueing during the training session (though RTs were 44 ms faster for old than for new displays). Only the main effect of epoch was significant, F(3.06, 88.65) = 22.02, p < .001, ηp² = .43, again reflecting significant procedural learning (552 ms shorter RTs in Epoch 5 compared with Epoch 1). Interestingly, however, while there was only numerical contextual facilitation in the (low-contrast) training session (44-ms effect, p = .54), a significant cueing effect emerged following the switch to high-contrast displays in the transfer session, t(29) = 2.84, p = .008, Cohen's d = 0.52: there was an RT advantage of 280 ms for repeated versus nonrepeated displays. However, the change in the relative measure was only marginal (7.71%; see Fig. 3c), t(29) = 1.87, p = .07, BF10 = 0.904. Given that a substantial contextual-cueing effect was already evident in the first block of the transfer session (228, 238, 258, 420, and 265 ms for Blocks 1 to 5, respectively), the invariant spatial context was likely acquired in (and transferred from) the training session, rather than reflecting a relearning effect developed in the transfer session. However, the low-contrast setting in the training session may have limited the expression of contextual cueing.


Taken together, for conditions of photopic lighting, the findings of Experiment 1A revealed successful contextual learning under the high-contrast condition, but this acquired contextual facilitation could not be transferred to the low-contrast condition; by contrast, the findings of Experiment 1B suggest that repeated spatial arrangements could be successfully learned with low-contrast displays, but contextual facilitation was expressed only when the display contrast changed to high. The findings of Experiments 1A and 1B thus suggest that visual search displays encountered in daylight conditions at low item-to-background contrast impede contextual retrieval, but not contextual learning. Previous studies have shown that the visual span contracts with decreasing display contrast (Greene et al., 2013; Näsänen et al., 2001; Paulun et al., 2015). Accordingly, the impaired contextual retrieval observed here is likely attributable to the visual span being reduced under conditions of low display contrast. Given the likely extension of the visual span under conditions of mesopic-scotopic vision (Paulun et al., 2015), we went on to investigate the role of display contrast in mesopic vision in Experiment 2.

Information Theoretic Approaches to Image Quality Assessment

2.3 The Human Visual System Model

An HVS model is required only in the second of the two information fidelity-based QA methods that we present in this chapter, that is, the VIF measure. The HVS model that we use is also described in the wavelet domain. Since HVS models are the dual of NSS models [16], many aspects of the HVS are already modeled in the NSS description, such as a scale-space-orientation channel decomposition, response exponent, and masking-effect modeling [11]. The components that are missing include, among others, the optical point spread function (PSF), luminance masking, the contrast sensitivity function (CSF), and internal neural noise sources. Incidentally, it is the modeling of these components that is heavily dependent on viewing configuration, display calibration, and ambient lighting conditions.

In this chapter, we approach the HVS as a “distortion channel” that imposes limits on how much information can flow through it. Although one could model different components of the HVS using psychophysical data, the purpose of the HVS model in the information fidelity setup is to quantify the uncertainty that the HVS adds to the signal that flows through it. As a matter of analytical and computational simplicity, and more importantly to ease the dependency of the overall algorithm on viewing configuration information, we lump all sources of HVS uncertainty into one additive noise component that serves as a distortion baseline against which the distortion added by the distortion channel can be evaluated. We call this lumped HVS distortion visual noise, and model it as stationary, zero-mean, additive white Gaussian noise in the wavelet domain. Thus, we model the HVS noise in the wavelet domain as stationary RFs N = {N_i : i ∈ I} and N′ = {N′_i : i ∈ I}, where N_i and N′_i are zero-mean, uncorrelated multivariate Gaussian vectors with the same dimensionality as C_i:

where E and F denote the visual signal at the output of the HVS model from the reference and the test images in one subband, respectively, from which the brain extracts cognitive information (Fig. 2). The RFs N and N′ are assumed to be independent of U, S, and V. We model the covariance of N and N′ as:

where σ²N is an HVS model parameter (the variance of the visual noise).
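As a rough illustration of why this lumped noise model is convenient: for a zero-mean Gaussian subband coefficient with variance σ²C passing through an additive white Gaussian noise channel of variance σ²N, the information that survives the channel has a simple closed form. The sketch below (an assumption-laden scalar simplification, not the full multi-band GSM computation of the chapter) shows the idea:

```python
import numpy as np

def gaussian_channel_info(signal_var, noise_var):
    """Bits of mutual information I(C; C+N) for independent
    zero-mean Gaussians: 0.5 * log2(1 + SNR)."""
    return 0.5 * np.log2(1.0 + signal_var / noise_var)

def vif_like_ratio(ref_var, dist_var, visual_noise_var):
    """Ratio of the information the HVS can extract from the distorted
    signal to that extractable from the reference; a scalar stand-in
    for the VIF measure, for illustration only."""
    return (gaussian_channel_info(dist_var, visual_noise_var) /
            gaussian_channel_info(ref_var, visual_noise_var))
```

With a reference variance of 4 and visual noise variance of 1, a distortion that halves the signal variance yields a ratio below 1, as expected for a degraded image.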



We developed a Bayesian algorithm that estimates illumination and surface reflectance from image data, for a restricted class of scenes. The scenes consisted of regular achromatic checkerboards. Thus each surface in the checkerboard was specified by its location and its scalar reflectance, ri,j, where i and j denote the row and column location of the square, respectively. In our experiments, we employed 5 × 5 checkerboards, so that i and j ranged between 1 and 5. The entire checkerboard of surfaces was described by the column vector r whose entries are the ri,j in raster order.

We allowed the illumination to vary spatially, but for simplicity required that it be constant over each checkerboard surface. Thus the illumination was described by the luminance incident on each surface in the checkerboard, ei,j. This was summarized for the entire scene by the column vector e.

The vector describing the scene, which we refer to as the world vector, was taken as the concatenation of the vectors r and e, so that w = [r, e]. Although the scenes we considered were greatly simplified relative to those encountered in natural viewing, they embodied two key features. These were the fundamental illuminant-surface ambiguity that is characteristic of the problem of color constancy and the fact that both reflectance and illumination could vary across locations within a scene.

Given the visual world of achromatic checkerboard scenes, the sensory image was given by the reflected luminance li,j at each checkerboard location. This was described by a column vector l. The reflected luminance at each location was taken as the product of the corresponding illuminant and reflectance: li,j = ei,j ri,j. The algorithm's task was to estimate w from l. This is clearly an underdetermined problem, since w has twice as many entries as l. To formulate constraints on the solution and develop an algorithm to find w from l, we employed Bayesian decision theory (Berger, 1985; Lee, 1989).
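The imaging model and its ambiguity are easy to state in code. A minimal sketch (the 5 × 5 size and the elementwise product are from the text; the flat vectors and numpy usage are implementation choices of mine):

```python
import numpy as np

def render_checkerboard(e, r):
    """Noise-free imaging model: reflected luminance at each location
    is the product of the illuminant and the reflectance there."""
    return np.asarray(e) * np.asarray(r)

# The illuminant-surface ambiguity: scaling e up and r down by the same
# factor leaves the observed luminances unchanged, so the 25-entry
# vector l cannot by itself determine the 50-entry world vector [r, e].
e = np.full(25, 2.0)
r = np.full(25, 0.3)
l = render_checkerboard(e, r)
assert np.allclose(l, render_checkerboard(2.0 * e, r / 2.0))
```

This is exactly why a prior is needed: infinitely many (r, e) pairs reproduce the same luminances.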

There are three key ingredients required to develop a Bayesian algorithm. The first is the likelihood. This expresses the relationship between the representation of the visual world (w) and the observed data (l) as a probability distribution P(l | w). The likelihood characterizes the probability with which a set of luminance values would be observed if the world actually contained the surfaces and illuminants described by w. For computer vision applications, we can think of the likelihood as a probabilistic way to describe the imaging process. In general, calculation of the likelihood involves incorporation of processes that perturb or add noise to an incident signal (e.g., optics of the eye and photon noise). Here, however, we assumed that the encoded luminance was noise-free, so that:

P(l | w) = d if li,j = ei,j ri,j at every location [i, j], and P(l | w) = 0 otherwise,

where d is a constant. Using a noise-free likelihood means that algorithm performance is governed by how the prior, described in following text, resolves the ambiguity about reflectance introduced by uncertainty about the illumination.

The second ingredient for a Bayesian algorithm is the prior. This captures statistical regularities of the visual world as a probability distribution P(w). We chose a prior that expressed several assumptions about the visual world. First, the surfaces in a scene are drawn independently of the illumination. Thus, P(w) = P([r, e]) = P(r)P(e).

Second, we assumed that the surface reflectances within a checkerboard were independently and identically distributed, so that P(r) = Πi,j P(ri,j). We took the reflectance distribution at each image location to be a beta distribution,

P(ri,j) ∝ ri,j^(αsurface−1) (1 − ri,j)^(βsurface−1).

The beta is defined over the range 0 to 1. The relative probability of surfaces of different reflectance is adjusted by the parameters αsurface and βsurface.

Third, we assumed that the illuminant varied more slowly across the array than the surface reflectances. This idea, which seems intuitively reasonable, has been used in previous surface/illuminant estimation algorithms that allowed the illuminant to vary across spatial locations (Funt & Drew, 1988; Land & McCann, 1971). To capture this in the prior distribution, we took the illuminant prior to be a multivariate lognormal.

The lognormal is defined over positive values and has a long positive tail. This allows the prior to account for a wide range of illuminant intensities. The mean illuminant intensity is determined by the parameter vector μ, which provided the mean value at each location. We chose a spatially uniform mean, so that each entry of μ was given by a single parameter μillum. The lognormal also has a covariance matrix Killum, which allowed us to specify that illuminant intensities at neighboring locations are correlated. Such specification captures the assumption that the illuminant varies slowly over space. How slowly the illuminant varies is determined by the exact structure of the covariance matrix. Indeed, Killum was constructed to represent a first-order Markov field, so that the correlational structure was controlled by a single parameter ρillum. Let the variance of the illuminant intensity at each location be the same and be given by σ²illum. Then the covariance between illuminant intensities at locations [i, j] and [k, l] was given by σ²illum ρillum^(|i−k|+|j−l|).
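A sketch of drawing one world from this prior (Python; the Manhattan-distance decay of the correlation and the use of log μillum as the lognormal location parameter are assumptions of mine, since the text specifies only a first-order Markov field governed by one correlation parameter):

```python
import numpy as np

def markov_illum_cov(n=5, var=0.04, rho=0.9):
    """Covariance of the log illuminant over an n x n grid: equal
    variance at every site, correlation decaying as rho**d with d the
    Manhattan distance between sites (an assumed realization of the
    first-order Markov field described in the text)."""
    sites = [(i, j) for i in range(n) for j in range(n)]
    K = np.empty((n * n, n * n))
    for a, (i, j) in enumerate(sites):
        for b, (k, l) in enumerate(sites):
            K[a, b] = var * rho ** (abs(i - k) + abs(j - l))
    return K

def sample_world(n=5, a_surf=2.0, b_surf=2.0, mu_illum=1.0,
                 var_illum=0.04, rho_illum=0.9, seed=None):
    """One draw from the prior: i.i.d. beta reflectances and a
    multivariate-lognormal illuminant with Markov spatial correlation."""
    rng = np.random.default_rng(seed)
    r = rng.beta(a_surf, b_surf, size=n * n)
    K = markov_illum_cov(n, var_illum, rho_illum)
    mean_log = np.log(mu_illum) * np.ones(n * n)
    e = np.exp(rng.multivariate_normal(mean_log, K))
    return r, e
```

Because the Manhattan-distance kernel factors into a product of two one-dimensional AR(1) kernels, the covariance is guaranteed positive semidefinite, so the Gaussian draw is well defined.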

The likelihood and prior were combined using Bayes' rule to calculate the posterior:

P(w | l) = c P(l | w) P(w),

where c is a normalizing constant. The posterior combines the likelihood and the prior and describes the probability of any visual world given the observed luminance values.

The third ingredient for a Bayesian algorithm is to specify a rule for choosing an actual estimate from the posterior. Here we chose the w that maximized the posterior. To find this for a set of luminances l and a set of prior parameters [αsurface, βsurface, μillum, σ²illum, ρillum] we used numerical search as implemented by the fmincon function of MATLAB (Mathworks, Natick, MA). Because we assumed a noise-free likelihood, it was sufficient to search only over the space of illuminant vectors e, since each choice of e allowed computation of the r that was consistent with it and the observed luminances l. Thus our parameter search was over a 25-dimensional space. We bounded the searched illuminant intensities to lie between 0.001 and 30.

It was also critical to start the search with reasonable initial guesses for the estimates. To produce a set of such guesses, we took 2,000 draws from the prior distribution, and found a set of n-dimensional linear models for the space of illuminants (where n took values of [2, 4, 6, 9, 10, 12, 14]). We searched over illuminants within each of these linear models in order of increasing dimension, using the result of the preceding search as the initial guess for the next. The estimate of e that resulted in the highest posterior from this preliminary optimization was used as the initial guess for the full dimensional problem. For a subset of conditions, we investigated the sensitivity of our search procedures to the initial guess. With some guesses, the fmincon search simply returned the initial guess. We detected and rejected these cases. For the other initial guesses, the returned solution was independent of the initial guess. This check provides some assurance that the returned solutions approximate global maxima of the posterior, although we cannot know this with certainty.
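Under the noise-free likelihood the whole computation collapses to a search over the illuminant: given a candidate illuminant, the reflectances are forced to be the observed luminances divided by it, and the posterior reduces to the prior evaluated at that pair. A small sketch of this MAP search (Python with scipy standing in for MATLAB's fmincon; the 4-location example, the optimizer choice, and the specific prior parameter values are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

def neg_log_posterior(log_e, l, a_surf, b_surf, mu_log, K_inv):
    """Negative log posterior as a function of the log illuminant only;
    the noise-free likelihood pins the reflectances to r = l / e."""
    r = l / np.exp(log_e)
    if np.any(r <= 0.0) or np.any(r >= 1.0):
        return np.inf  # zero prior probability outside (0, 1)
    lp = beta_dist.logpdf(r, a_surf, b_surf).sum()   # surface prior
    d = log_e - mu_log
    lp -= 0.5 * d @ K_inv @ d                        # Gaussian prior on log e
    return -lp

def map_estimate(l, K, a_surf=2.0, b_surf=2.0, mu_illum=1.5):
    """MAP estimates of reflectance and illuminant from luminances l,
    starting the search at the prior mean illuminant."""
    mu_log = np.full(l.size, np.log(mu_illum))
    res = minimize(neg_log_posterior, mu_log, method="Nelder-Mead",
                   args=(l, a_surf, b_surf, mu_log, np.linalg.inv(K)))
    e_hat = np.exp(res.x)
    return l / e_hat, e_hat
```

By construction, the returned pair always reproduces the observed luminances exactly; the prior alone decides how each luminance is split into reflectance and illumination.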

For a subset of conditions, we also verified that searching across e did not yield different solutions than searching across r.

In summary then, for a given set of parameters [αsurface, βsurface, μillum, σ²illum, ρillum] and a set of luminance values l, our algorithm estimates the reflectance and illuminant values that are most likely. That is, our estimate is the w that maximizes P(w | l).


The methods used to collect the psychophysical data, as well as the data themselves, are described in detail in Allred et al. (2012) and summarized here. Briefly, seven observers looked through an aperture into a rectangular enclosure, at the end of which they viewed an achromatic 25-square checkerboard presented on a custom-built high-dynamic range display (see Radonjić, Allred, Gilchrist, & Brainard, 2011 for display specifications). Observers were asked to judge the lightness of the center square (test patch) by matching it to one of a series of Munsell papers that ranged from 2.0 (black) to 9.5 (white) in 0.5-unit steps.

The test patch (center square) took on 24 distinct luminance values, ranging from 0.096 cd/m² to 211 cd/m². The smallest value was the minimum luminance value of the high-dynamic range display and should be considered approximate. The remainder of the test patches were chosen in equal log steps between 0.24 cd/m² and the maximum luminance of the display, 211 cd/m². The patches had CIE xy chromaticity (0.43, 0.40). The same 24 test patches were judged within nine separate checkerboard contexts (Figure 1).

Illustration of the nine experimental checkerboard contexts. Average luminance of inner ring and outer ring were divided into low, standard, and high conditions. The central test patch has the same luminance in all nine checkerboard contexts shown here.

A standard checkerboard context was created by taking 24 luminance values between 0.11 and 211 cd/m² (contrast ratio 1,878:1) that were equidistant in logarithmic units. These 24 luminance values were assigned to a 5 × 5 checkerboard surrounding the center test square. To assign luminance values to squares, we took random draws of spatial arrangement until neither the brightest nor the darkest luminance was in the inner ring immediately adjacent to the center square. This arrangement was used as the standard context in all experiments; a representation of this standard checkerboard context is shown in Figure 1. The remaining eight test checkerboard contexts were created in the following fashion. We divided the 24 checkerboard squares into an inner ring (eight locations immediately adjacent to the center test square) and an outer ring (16 locations surrounding the inner ring). We created low, standard, and high luminance distributions for inner and outer rings (for details, see Allred et al., 2012). Then we assigned each possible permutation of these rings to the eight test checkerboard contexts (i.e., low inner–low outer checkerboard, low inner–standard outer checkerboard, low inner–high outer checkerboard, etc.). The spatial arrangement of the low and high inner and outer rings in each test checkerboard context preserved the rank order of luminance values in the standard checkerboard context.
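The construction of the standard context can be sketched as follows (Python; the luminance endpoints and the inner-ring constraint are from the text, while the row-major raster indexing of the 5 × 5 board and the RNG seed are assumptions of mine):

```python
import numpy as np

# 24 surround luminances, equidistant in log units between 0.11 and 211 cd/m^2
lums = np.geomspace(0.11, 211.0, num=24)

# raster indices of the 5x5 board: the centre test square is index 12;
# the inner ring is the eight squares adjacent to it (assumed indexing)
positions = [p for p in range(25) if p != 12]
inner_ring = {6, 7, 8, 11, 13, 16, 17, 18}

rng = np.random.default_rng(1)
while True:
    # slot k (board square positions[k]) receives luminance lums[assign[k]]
    assign = rng.permutation(24)
    slot_of_min = int(np.argmax(assign == 0))    # darkest luminance's slot
    slot_of_max = int(np.argmax(assign == 23))   # brightest luminance's slot
    # redraw until neither extreme lands in the inner ring
    if (positions[slot_of_min] not in inner_ring
            and positions[slot_of_max] not in inner_ring):
        break
```

`np.geomspace` gives values whose logarithms are evenly spaced, matching the "equidistant in logarithmic units" specification.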

Note that the test checkerboard contexts were not constructed to simulate a fixed set of papers under different illuminants; that is, neither inner nor outer ring manipulations were implemented as multiplicative factors of the corresponding luminance values for the standard checkerboard context. Thus, it is not straightforward to interpret the psychophysical data in terms of the degree of constancy they reveal. Rather than asking about constancy per se, we ask whether a model derived from an algorithm designed to achieve constancy can predict the observed psychophysical data.

To proceed, we averaged the luminance values matched to each Munsell paper; the data thus aggregated give, for each Munsell paper, a set of nine luminance values (one for each test checkerboard context) that are perceptually equivalent. By plotting the luminance values for each of the eight test checkerboard contexts against the luminance values for the standard checkerboard context, we establish eight context transfer functions (CTFs) that characterize the effect of changing context from the standard checkerboard context to each of the eight test checkerboard contexts. It is these CTFs in particular that we seek to model.

Using the algorithm to model psychophysical lightness judgments

We applied the Bayesian algorithm to the stimuli used in the psychophysical experiments. For any set of algorithm parameters (priors), we obtained estimates of the illuminant and surface reflectance at each checkerboard location from a specification of the luminance in that checkerboard context. In our previous report (Allred et al., 2012) and in the methods summary above, luminance values are reported in units of candelas per square meter; for the calculations, luminance was specified in normalized units whose range was 0 to 1, with 1 equivalent to the maximum luminance displayed in the experiment.

To compare the algorithm's performance to the psychophysical data, we need to specify a linking hypothesis that connects the algorithm's output to the experimental measurements (see Brainard, Kraft, & Longère, 2003; Teller, 1984). To do so, we assumed that when the Bayesian algorithm estimated that two luminance values in different contexts [La(Context x), Lb(Context y)] had the same reflectance (Rz), then these two test luminance values would match in lightness across the context change. This linking hypothesis is based on the general idea that perceived lightness is a perceptual correlate of surface reflectance, but takes into account the fact that reflectance is not explicitly available in the retinal image. The role of the algorithm in the model is to provide a computation that converts proximal luminance to a form that is more plausibly related to perceived lightness.

Given the linking hypothesis above, we computed CTFs for the algorithm that could be compared to the psychophysical CTFs. Indeed, computation of algorithm-based CTFs proceeded in a fashion similar to that used to generate the psychophysical CTFs. The one key difference is that rather than using the matched Munsell papers to establish equivalence across contexts, we used the estimates of surface reflectance returned by the algorithm. Thus the particulars of the computation differed slightly.

First, as described in Methods, we computed algorithm estimates for each of the 216 test–checkerboard luminance combinations viewed by human observers (24 test patches embedded in each of nine checkerboard contexts). Although we computed both illuminant and surface reflectance estimates for all 25 checkerboard locations in each case, the key value that we extracted to compute the CTFs was the estimated surface reflectance at the test location (central test patch). Then, for each context, we fit estimated test patch reflectance as a function of test luminance with a third-order polynomial. This allowed us to interpolate between the discrete estimated reflectance values. The polynomial functional form was chosen for convenience and has no theoretical significance. Let Restimated = fx(Li) represent the interpolated reflectance values, where x represents one of the nine checkerboard contexts and i indexes the 24 test patch values. In the standard context (x = St), we evaluated this function for all Li to obtain a set of reflectance values [R]St that served as the referents for establishing CTFs (much as the Munsell papers did for the psychophysical judgments). To compute a CTFx, we inverted the interpolated function fx to find the value L that yielded each [R]St. Thus, each algorithm-based CTF consists of 24 [LSt, Lx] pairs that were taken as perceptually equivalent.
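The fit-and-invert step above can be sketched as follows (Python; the dense-grid inversion via `np.interp` and the synthetic monotone data are my own simplifications, assuming the fitted polynomial is monotone over the tested luminance range):

```python
import numpy as np

def fit_reflectance(test_lums, est_refl, deg=3):
    """Third-order polynomial fit of estimated test-patch reflectance
    as a function of test luminance (form chosen for convenience,
    as in the text; no theoretical significance)."""
    return np.polyfit(test_lums, est_refl, deg)

def invert_fit(coefs, r_targets, lum_lo, lum_hi, n_grid=10_000):
    """Find the luminance whose fitted reflectance equals each target,
    by evaluating the polynomial on a dense grid (assumes the fit is
    monotone increasing over [lum_lo, lum_hi])."""
    grid = np.linspace(lum_lo, lum_hi, n_grid)
    f = np.polyval(coefs, grid)
    return np.interp(r_targets, f, grid)

def context_transfer(l_st, r_st, l_x, r_x):
    """Pair each standard-context luminance with the test-context
    luminance that yields the same estimated reflectance (one CTF)."""
    c_st = fit_reflectance(l_st, r_st)
    c_x = fit_reflectance(l_x, r_x)
    referents = np.polyval(c_st, l_st)   # [R]St at the test luminances
    return l_st, invert_fit(c_x, referents, l_x.min(), l_x.max())
```

On synthetic data where the test context shifts reflectance down by a constant, the recovered CTF is the corresponding constant luminance shift, which is a quick sanity check of the inversion.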

The five parameters αsurface, βsurface, μillum, σ²illum, and ρillum control the prior probability and hence drive the algorithm estimates. The parameter values we used for the algorithm were chosen to minimize the average error between algorithm-based CTFs and psychophysical CTFs. To find these values, we used a grid search on the algorithm parameters. We computed algorithm estimates for the 216 test–checkerboard pairs described above for thousands of sets of parameter values. Initial parameters were chosen through visual inspection of model predictions for a variety of simulated scenes. From these initial values, we varied each parameter in coarse steps to determine the best region of parameter space and then sampled this space more finely. Since our grid search was not exhaustive, it remains possible that a different set of parameter values could fit the data better.

For each set of parameters, we calculated algorithm-based CTFs via the method described above. Algorithm-based CTFs were constructed from 24 [LSt, Lx] pairs, while the psychophysical CTFs were constructed using the 16 [LSt, Lx] pairs defined by the Munsell chips. To directly compare the two sets of CTFs, we interpolated the algorithm-based CTFs to obtain values for each of the 16 psychophysical LSt values. We chose the final algorithm parameters that minimized the average prediction error in a least-squares sense. We refer to these as the derived priors to emphasize that they were obtained by a fit to the psychophysical data, rather than directly from measurements of naturally occurring illuminants and surfaces.
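The coarse-to-fine parameter fit can be sketched generically (Python; `ctf_error` stands in for the least-squares comparison between algorithm-based and psychophysical CTFs, which is left abstract here):

```python
import itertools
import numpy as np

def grid_search(param_grids, ctf_error):
    """Evaluate the prediction error at every point of a parameter grid
    and return the best parameter set; in a coarse-to-fine scheme one
    would rerun this with a finer grid centred on the winner.

    param_grids: dict mapping parameter name -> list of candidate values
    ctf_error:   function mapping a parameter dict -> scalar error
    """
    names = list(param_grids)
    best, best_err = None, np.inf
    for values in itertools.product(*(param_grids[n] for n in names)):
        params = dict(zip(names, values))
        err = ctf_error(params)
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

With a toy quadratic error surface, the search returns the grid point nearest the true minimum, illustrating both the mechanism and why a non-exhaustive grid can miss better off-grid values.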

Chapter 8: Physiology and Psychophysics

-In 1795, astronomer Nevil Maskelyne and his assistant David Kinnebrook were setting ships' clocks according to when a particular star crossed a hairline in a telescope. -Maskelyne noticed that Kinnebrook's observations were about a half-second slower than his.
-Kinnebrook was warned of his "error" and attempted to correct it but the discrepancy between his observations and Maskelyne's increased to 8/10ths of a second, and Kinnebrook was relieved of his duty.

-Twenty years later, the German astronomer Friedrich Bessel (1784-1846), speculated that the error had not been due to incompetence but to individual differences among observers.
-when Bessel compared his observations with those of his colleagues, he found systematic differences among them, in what amounted to the first reaction-time study
-personal equations were calculated to correct for differences among observers---->For example, if 8/10ths of a second was added to Kinnebrook's reaction time, his observations could be equated with Maskelyne's.

-physiologists' methodologies were adopted, and they provided the link between the questions of philosophy and the future science of psychology.

-Galileo's and Locke's notion of a mismatch between physical events and the perceptions of those events was widespread/ the distinction between primary and secondary qualities

-Newton (1704/1952) had observed that the experience of white light is really a composite of all colors of the spectrum, although the individual colors themselves are not perceived.

2) The second was Hartley's view that nerves were the means by which "vibrations" were conducted from the sense receptors to the brain and from the brain to the muscles.

-As far back as Erasistratus of Alexandria (ca. 300 B.C.) there had been the idea that there are sensory and motor nerves; it was also reinforced by Galen's study of gladiators and soldiers in the second century A.D.
-Bell, however, provided experimental evidence.

-In 1811 the great British physiologist Charles Bell conducted research on the anatomical and functional discreteness of sensory and motor nerves.
-using rabbits, he demonstrated that:
---> sensory nerves enter the posterior (dorsal) roots of the spinal cord and the motor nerves emerge from the anterior (ventral) roots.
-Bell circulated his findings only among his friends, which explains why the prominent French physiologist François Magendie (1783-1855) published similar results 11 years later without being aware of Bell's work.
-after debate over who discovered it first, they settled on the Bell-Magendie law, which became the "law of forward direction" governing the nervous system.
--->Sensory nerves carried impulses forward from the sense receptors to the brain, and motor nerves carried impulses forward from the brain to the muscles and glands.

-nerves contain their own specific energy, but not all sense organs are equally sensitive to the same type of stimulation.
-Rather, each of the types of sense organs is maximally sensitive to a certain type of stimulation= "specific irritability,"/ adequate stimulation.

-->According to Kant: sensory information is transformed by the innate categories of thought before it is experienced consciously
-->For Müller: the nervous system is the intermediary between physical objects and consciousness.

--->The vitalists maintained that life could not be explained by the interactions of physical and chemical processes alone. life was more than a physical process and could not be reduced to such a process. Furthermore, because it was not physical, the "life force" was forever beyond the scope of scientific analysis. Müller was a vitalist.

-->Conversely, Helmholtz, a materialist, saw nothing mysterious about life and assumed that it could be explained in terms of physical and chemical processes. Therefore, there was no reason to exclude the study of life or of anything else from the realm of science. He believed that the same laws apply to living and nonliving things, as well as to mental and nonmental events.

-Helmholtz and several other materialists signed an oath (some say in their own blood):
those who also signed:
- Du Bois-Reymond
-Karl Ludwig
-Ernst Brücke

-What this group accepted when they rejected vitalism were the beliefs that living organisms, including humans, were complex machines (mechanism) and that these machines consist of nothing but material substances (materialism).
-The mechanistic-materialistic philosophy embraced by these individuals profoundly influenced physiology, medicine, and psychology.

-Helmholtz excluded nothing from the realm of science
-measured the rate of nerve conduction
-isolated the nerve fiber leading to a frog's leg muscle and stimulated it at various distances from the muscle to note the time it took for the muscle to respond.
-the muscular response followed more quickly when the motor nerve was stimulated closer to the muscle than farther away. By subtracting one reaction time from the other, he concluded that the nerve impulse travels at a rate of about 90 feet per second (27.4 meters per second).
-for humans, he asked subjects to respond by pushing a button when they felt their leg being stimulated.
-reaction time was slower when the toe was stimulated than when the thigh was stimulated; he concluded, again by subtraction, that the rate of nerve conduction in humans was between 165 and 330 feet per second (50.3-100.6 meters per second)

In 1672 Newton had shown:
-white sunlight passed through a prism can be seen as a band of colored light
-The prism separated the various wavelengths that together were experienced as white.
-Early speculation was that a different wavelength corresponded to each color
-by mixing various wavelengths, Newton discovered that the property of color is not in the wavelengths themselves but in the observer.
-->For example, white is experienced either if all wavelengths of the spectrum are present or if wavelengths corresponding to the colors red and blue-green are combined.

-motivation: how to account for the lack of correspondence between the physical stimuli present and the sensations they cause?

To account for these phenomena, Hering theorized that there are three types of receptors on the retina but that each could respond in two ways*:

1) One type of receptor responds to red-green,
2) one type to yellow-blue,
3) one type to black-white.

-Christine Ladd-Franklin's own theory of color vision improved upon those of Helmholtz and Hering rather than opposing them (an evolutionary point of view)

-she attempted to explain in evolutionary terms the origins of the anatomy of the eye and its visual abilities
-She noted that some animals are color blind and assumed that achromatic vision appeared first in evolution and color vision came later.
-She assumed further that the human eye carries vestiges of its earlier evolutionary development.
-the most highly evolved part of the eye is the fovea, where, at least in daylight, visual acuity and color sensitivity are greatest.
-Moving from the fovea to the periphery of the retina, acuity is reduced and the ability to distinguish colors is lost.
-However, in the periphery of the retina, night vision and movement perception are better than in the fovea.
-peripheral vision (provided by the rods of the retina) was more primitive than foveal vision (provided by the cones of the retina) because night vision and movement detection are crucial for survival.
-But if color vision evolved later than achromatic vision, was it not possible that color vision itself evolved in progressive stages?

-Phrenology became enormously popular and was embraced by some of the leading intellectuals in Europe (such as Bain and Comte).
-One reason for the popularity of phrenology was Gall's considerable reputation.
-Another was that phrenology provided hope for an objective, materialistic analysis of the mind: man himself could be studied scientifically.

-Broca was not the first to suggest that clinical observations be made and then to use autopsy examinations to locate a brain area responsible for a disorder.
-Jean-Baptiste Bouillaud (1796-1881) had done so as early as 1825. Using the clinical method on a large number of cases, Bouillaud reached essentially the same conclusion concerning the localization of a speech area on the cortex that Broca was to reach later using the same technique.
-Why, then, do we credit Broca with providing the first credible evidence for cortical localization and not Bouillaud? Because Bouillaud had been closely associated with phrenology: "The scientific community [was] overly cautious about anything or anyone associated in any way with Gall or phrenology"

-Phineas Gage, railroad construction, explosion, iron tamping rod through his skull-->entered just below his left eye and exited through the top of his head.
-he survived and fully recovered physically, but his personality changed.
-Modern work based on Gage's skull and Harlow's observations linked the damaged areas of the brain with corresponding expected behavioral changes.

-subsequent research confirmed Broca's observation that a portion of the left cortical hemisphere is implicated in speech articulation or production; it was named Broca's area.

-In 1874, just over a decade after Broca's discovery, the German neurologist Carl Wernicke (1848-1905) discovered a cortical area, near Broca's area, responsible for speech comprehension, now named Wernicke's area.

-Broca's localizing of a function on the cortex supported the phrenologists and damaged Flourens's contention that the cortex acted as a unit.
-Broca did not find the speech area to be where the phrenologists had said it would be.

-Broca engaged in craniometry (the measurement of the skull and its characteristics) in order to determine the relationship between brain size and intelligence. He began his research with a strong conviction that there was such a relationship, and (not surprisingly) he found evidence for it. In 1861 Broca summarized his findings:

-the brain is larger in mature adults than in the elderly, in men than in women, in eminent men than in men of mediocre talent, in superior races than in inferior races.

-Broca was aware of several facts that contradicted his theory: there existed an abundance of large-brained criminals, highly intelligent women, and small-brained people of eminence; and Asians, despite their smaller average brain size, were generally more intelligent than ethnic groups with larger brains.

Experiment 2: Effects of set size on performance in a conjunction-search paradigm

With Experiment 1, we first replicated the deficiency window suggested by Hunter et al. (2016) in a visual-search task with a larger set size than used before. However, while expecting no changes in search time with increasing set size, we observed set-size effects that were contrast dependent at the lower end of our contrast spectrum. Our visual-search task results and array properties (Experiment 1) mimicked both parallel- and serial-search behaviors, but not in the traditional sense (i.e., as observed under photopic luminance). We then asked which effects would be observed in search paradigms known to require serial search, induced by including two search criteria. An answer to this question could allow stronger inferences about the prospective involvement of the magnocellular pathway, the parvocellular pathway, or both.

Serial search (one stimulus at a time) is normally observed in conjunction-search tasks, which require searching for a target defined by two or more criteria (Treisman & Gelade, 1980). Increases in set size typically lead to increased reaction times in serial search; search rates (the time allocated to each individual stimulus), however, do not change. The nature of the two search processes (parallel and serial) has been actively used to infer magnocellular and parvocellular pathway activation, respectively.

Due to its fast temporal integration capacity, the magnocellular pathway has been implicated as a critical contributor to parallel search more than to serial search, as parallel search favors attention capture related to a faster temporal response (Kandel & Wurtz, 2000; Stein, 2014; Skottun & Skoyles, 2008), or when the search task is varied using luminant versus isoluminant stimuli. The parvocellular pathway, by contrast, has been shown to contribute more under serial-search conditions than the magnocellular pathway does. Despite the original presentation of this as a functional dichotomy, it has recently been argued that any distinct discrimination between the visual streams occurs at tertiary stages of the rod–cone and magnocellular–parvocellular processing streams, which are not necessarily unimodal. For example, the parvocellular response characteristics allow detection of luminance-contrast changes under varying achromatic conditions, and this contrast detection is not necessarily restricted to magnocellular processing as was previously thought. This parvocellular effect may be a function of its linear contrast response function as luminance increases (Kaplan & Shapley, 1986). However, it is difficult to separate the two pathways in mesopic light because their contrast response functions are then approximately linear.

Of key consideration, however, is that most studies that have assessed visual search were completed in photopic luminance ranges, which innately favor parvocellular engagement. In the photopic luminance range, it may be hard to isolate the magnocellular pathway in a visual-search paradigm, even at high temporal frequencies thought to favor magnocellular processing (Cheng, Eysel, & Vidyasagar, 2004) or under psychophysical conditions designed to isolate gain functions mediated by either pathway's contrast gain properties (Pokorny, 2011).

Considering that, contrary to the typical finding of no set-size effects in feature search, we observed set-size effects in our first experiment, adding a simple search criterion that targets known serial-search patterns can extend our initial assessment of increased complexity beyond a simple discrimination task. Additionally, assessing serial search may help elucidate the magnocellular role in visual search in a low-luminance paradigm that favors the low-contrast response function of the magnocellular pathway relative to that of the parvocellular pathway (Kaplan & Shapley, 1986).

We hypothesized that, independent of the contrast properties, we would observe set-size effects (increased RTs with increased set size) consistent with the visual search literature. Additionally, as observed in the 2 × 2 search array in Experiment 1, we expected a similar regression-analysis pattern yielding a proportionally equal contribution of CW and CM to the prediction of behavioral performance in the 2 × 2 search array, whereas for the larger 4 × 4 array the influence of CM should become relatively stronger.
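The two contrast measures used throughout, CM and CW, follow the textbook definitions of Michelson contrast and the Weber fraction. A minimal sketch (the function and variable names are ours; the paper does not specify whether its CW is the Weber fraction or a ratio variant, so this is a generic illustration):

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast CM = (Lmax - Lmin) / (Lmax + Lmin); ranges 0..1."""
    return (l_max - l_min) / (l_max + l_min)

def weber_contrast(l_stimulus: float, l_background: float) -> float:
    """Weber fraction CW = (Ls - Lb) / Lb; unbounded above for bright targets."""
    return (l_stimulus - l_background) / l_background
```

For example, a 3 cd/m² target on a 1 cd/m² background gives CM = 0.5 and CW = 2.0; the regressions below operate on the base-10 logarithms of such values.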



Participants (n = 9; five male, four female, ages 17–20) were recruited from the local university student population. All participants were compensated with course credits or monetary remuneration. All participants had normal or corrected-to-normal vision. The experiment was conducted in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. All participants agreed to the participation terms and signed a consent form.

Visual-search task

We created the conjunction-search paradigm with an additional simple characteristic change that was deemed to place little additional demand at a higher cognitive level but to be sufficient to produce the complexity needed to elicit results consistent with the conjunction-search literature (i.e., increased set size decreases performance: accuracy and reaction time). As in Experiment 1, participants were given instructions prior to commencement of the study, which doubled as the dark adaptation period, similar to Hunter et al. (2016). We used two different set sizes, 2 × 2 and 4 × 4, for our search arrays, as in the previous experiment. The search incorporated square stimuli of the same physical dimensions and luminance properties as the targets and distractors. Thus, both shape and contrast had to be judged to discriminate the target from distractors. The assignment of the luminance and contrast values was held consistent with Experiment 1; however, the B9 and B13 values were eliminated to use fewer levels of contrast, as the results demonstrated no significant differences between the B5, B9, and B13 values in Experiment 1 (see Fig. 2; see Fig. 4 for a graphical representation of the stimuli and temporal timeline used in the experiment).

Graphic representation of the temporal timeline of the complex conjunction search. Each trial begins with a fixation cross and a prestimulus interval of 800–1,200 ms, followed by the array presentation. Upon key press, the screen clears and moves to the next trial. The upper right-hand corner shows the physical characteristics of the new search stimuli at both set sizes and target presence. Note that the actual arrays were not as discriminable; these arrays are presented for visual clarification of the characteristics of the paradigm

The experiment was split into two blocks per set size of the array. All potential combinations of base luminance (B2, B3, and B5) and luminance contrast (low, medium, and high) were included, balanced (N = 14), and presented in random order. Each block contained 126 trials and was preceded by 10 practice trials. Participants received a 3-minute rest period between blocks. Participants' performance was recorded as a function of RT.

Data analysis

All experimental conditions were analyzed for RT (ms) as in Experiment 1. The only difference between the analyses was that two within-subject levels of the base variable were removed (B9, B13), as it was determined that values above the B5 luminance threshold provided no additional relevant information, and luminance should be kept as low as possible.


Reaction times (RTs)

A 3 base (B2, B3, B5) × 3 contrast (low, medium, high) × 2 set (4, 16) repeated-measures ANOVA demonstrated significant main effects for all factors: base, F(2, 14) = 11.45, p < .005, η² = 0.62; contrast, F(2, 14) = 14.32, p < .005, η² = 0.67; set, F(1, 8) = 70.32, p < .005, η² = 0.91. A Contrast × Base interaction, F(4, 28) = 2.63, p < .05, η² = 0.27, was observed, demonstrating significantly slower RTs in the B2 and B3 conditions relative to B5 only during the low-contrast condition: low B2 > B3, t(10) = 0.48, p = .61; B2 > B5, t(10) = 4.64, p < .01; B3 > B5, t(10) = 3.48, p < .05; all other t-test comparisons were not significant (Fig. 5).

Significant interaction observed for RT during a conjunction search paradigm. Base × Contrast interaction, where the low B2 and low B3 are significantly slower relative to all other conditions, replicating the previously defined deficiency window

Performance-based efficiency functions

Stepwise linear regression analysis, with RT as the dependent variable and the nine pairs of log-transformed contrast coefficients log10CM and log10CW (3 [base values: B2, B3, B5] × 3 [contrasts: low, medium, high]) as regressors, revealed a highly significant linear relationship with nearly equal contributions of both contrast parameters (−272.2 vs. −223.5 for log10CM and log10CW, respectively; r²adj = .857, p ≤ .001; cf. Table 2 for detailed statistics). The equation was then reassessed for each of the two set sizes, which also yielded two significant linear equations (set size = 4: r²adj = .77, p = .005; set size = 16: r²adj = .81, p ≤ .001). Again, as illustrated in Table 2, for both set sizes both contrast parameters contributed nearly equally to performance (set size 4: −.626 vs. −.667 for log10CM and log10CW, respectively; set size 16: −.675 vs. −.672; cf. Table 2 and Fig. 6).
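The fitted plane has the form RT = b0 + b1·log10CM + b2·log10CW. A minimal sketch of recovering such coefficients by ordinary least squares; the grid of contrast values and the 900-ms intercept are our own assumptions for illustration, with the slopes taken from the overall Table 2 fit:

```python
import numpy as np

# Nine hypothetical (log10CM, log10CW) condition pairs spanning the
# ranges reported in the text (CM: -0.8..-0.3, CW: 0.6..1.6).
log_cm = np.array([-0.8, -0.8, -0.8, -0.55, -0.55, -0.55, -0.3, -0.3, -0.3])
log_cw = np.array([0.6, 1.1, 1.6, 0.6, 1.1, 1.6, 0.6, 1.1, 1.6])

# Illustrative, noiseless RTs built from the reported slopes (-272.2, -223.5);
# the 900-ms intercept is hypothetical.
rt = 900.0 - 272.2 * log_cm - 223.5 * log_cw

# Fit RT = b0 + b1*log10CM + b2*log10CW by ordinary least squares.
X = np.column_stack([np.ones_like(log_cm), log_cm, log_cw])
b0, b1, b2 = np.linalg.lstsq(X, rt, rcond=None)[0]
```

With noiseless inputs the fit recovers the generating coefficients exactly; real data would of course carry residual error, summarized by the r²adj values above.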

Surface plot of reaction times (ms) as a function of both the Michelson and Weber contrast variables for the targeted conjunction-search paradigm. a Set size of 4. b Set size of 16. Axes: x = log10CM, y = log10CW, z = RT (surface). (Color figure online)


With Experiment 2, we again replicated the deficiency windows for RT: participants' decline in performance was not sensitive to a decrease in only one of the two luminance properties, but occurred when base values fell below 0.06 cd/m² (B2 and B3) in combination with a low luminance contrast ratio (<1.7). Consistent with our hypothesis and the nature of targeted conjunction-search paradigms, participants' performance was affected by the increased number of search stimuli, culminating in main effects of slower RT with increased set size.

Performance-based efficiency functions revealed approximately equal contribution of each contrast measure, suggesting equal contribution of both magnocellular and parvocellular streams as inferred from Experiment 1. In the next section, we discuss this issue by comparing both experiments.

Cumulative effects of task complexity: Increased number of stimuli versus increased criterion

Experiments 1 and 2 examined incremental increases in complexity, as defined by the increased number of stimuli (feature search relative to the target discrimination observed in Hunter et al., 2016) and by the type of search required (feature search vs. conjunction search), respectively. As we were interested in the direct contrast between the two different search paradigms, we compared the data of both experiments’ RT efficiency functions, focusing on (1) the magnitude of difference and slope trends between the search conditions for each respective set size, and (2) the proportional similarity in contribution of each of the respective contrast coefficients to the previously observed contrast functions (see Tables 1 and 2).

For the latter, we calculated an integral for each individual contrast property within each of the four different search conditions based on the previously reported regression equations (see Table 1: Feature 4, Feature 16; Table 2: Conjunction 4 and Conjunction 16) and plotted their respective values against each other. The integral calculations isolated each contrast property by applying a zero multiplier to the opposing contrast property and then calculating the integral between the minimum and maximum values used for each respective coefficient variable in the experiments (see Equations 2.a and 2.b).

Equation 2.a (Michelson contrast): the integral of the isolated Michelson term, slope × log10CM + c, where the slope and constant c for each respective search condition are located in Tables 1 and 2; min and max log10CM = −0.8 and −0.3, respectively.

Equation 2.b (Weber contrast): the integral of the isolated Weber term, slope × log10CW + c, where the slope and constant c for each respective search condition are located in Tables 1 and 2; min and max log10CW = 0.6 and 1.6, respectively.
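With the opposing term zeroed out, each of Equations 2.a and 2.b reduces to the integral of a linear function slope·x + c over the stated bounds. A sketch of that calculation (the function name is ours; slope and constant values would come from Tables 1 and 2):

```python
# Integral of the isolated linear contrast term slope*x + c over [x_min, x_max],
# as in Equations 2.a/2.b, where the opposing contrast term has a zero multiplier.
def contrast_integral(slope: float, c: float, x_min: float, x_max: float) -> float:
    antiderivative = lambda x: 0.5 * slope * x ** 2 + c * x
    return antiderivative(x_max) - antiderivative(x_min)

# Bounds from the text: log10CM in [-0.8, -0.3]; log10CW in [0.6, 1.6].
```

Plotting the Michelson integral against the Weber integral for each search condition yields the comparison shown in Fig. 7i.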

Figure 7 summarizes the results from Experiments 1 and 2 with respect to the contributions of CM and CW to RT in the performance-efficiency functions. Surface planes for the four conditions (Feature 4 and 16 from Experiment 1 and Conjunction 4 and 16 from Experiment 2; Fig. 7a, b, d, and e, respectively) reveal the proportional contribution of each contrast parameter to RT. Three conditions (Feature 4, Conjunction 4, and Conjunction 16) reveal a similar gradient with increasing RT from bottom left to upper right (i.e., a similar linear increase of RT with either of the contrast parameters). A different pattern can be seen for the Feature 16 condition, where the slope along the x-axis (i.e., for the Michelson contrast) is much steeper than for the Weber fraction (y-axis), indicating a stronger influence of change in CM compared with CW on changes in RT. This impression is supported by the difference plots (Fig. 7c, f–h). The contrast integrals plot (Fig. 7i) also demonstrates the contribution bias toward the Michelson contrast in the Feature 16 search condition, while all other conditions are approximately linear in their plotted values. It can easily be seen that a shift from feature to conjunction search within the same array, or from small to larger set size in conjunction search, does not change the equal contribution of CW and CM to RT, while feature search with the larger set size leads to a much stronger weighting of the Michelson contrast.

Colored plots in a–b and d–e show the juxtaposition of feature-search and conjunction-search reaction-time (ms) surface plots as a function of both the log10CM and log10CW variables (from Experiments 1 and 2, Figs. 3 and 6). Grayscale plots in c and f–h show difference plots between different set sizes for feature (c) and conjunction (f) search and between different search paradigms for small (4, g) and larger (16, h) set size. The colors from red to blue in the single surface plots represent slowest to fastest RT, respectively, while in the difference plots the colors from black to white represent the smallest to largest absolute differences in RT for each respective contrast. i Integral change for each respective luminance contrast value based on Equations 2a and 2b for the Michelson (x-axis) and Weber contrast (y-axis) values, respectively. (Color figure online)


For the HMD psychophysical experiment, the HMD needs to be characterized to predict the luminance of the stimulus image. The Oculus Rift Development Kit 2 (DK2) was used to generate the stimuli. The panel resolution was 1920 × 1080, the resolution for each eye was 960 × 1080, and the viewing angle was approximately 100° for each eye. In this section, we describe the characteristics of the HMD used in the experiment and present the proposed characterization model, which was used to calculate the luminance of the stimulus images.

2.1 HMD measurements

It is difficult to measure the colors displayed on an HMD accurately using conventional measurement methods for flat-panel displays. This is because human eyes observe virtual images through an optical structure consisting of a near-eye display (NED) and virtual optics.11 Previous studies on HMD measurement methods have considered the eye box11-13 and the volume behind the lens.13 However, commercially available NED measurement systems have only been developed recently, and these systems are expensive. As an alternative, in this study, a CS-2000 spectroradiometer placed at a distance of approximately 1.5 m from the HMD was used. This setup was capable of generating stable measurements. The measured values can differ from the luminance measured using an NED measurement system because this setup does not account for the nonuniformity of HMDs, which show high luminance at the center and low luminance at the periphery (lens shading). Although the exact measured values can differ from those of an NED measurement system, our measured data can show the relative luminance differences between the test stimuli.

To characterize the HMD, 86 patches, including RGBCMYW and random colors, were measured using the CS-2000 spectroradiometer in a dark room. The RGBCMYW colors represent the primary, secondary, and neutral colors at eight different levels of luminance. The random colors represent colors selected as random (R, G, B) combinations. Thus, a total of 56 RGBCMYW colors and 30 random colors were utilized. The measuring axis of the spectroradiometer was aligned with the center of the patch, with a measuring angle of 0.1°. The size of the patch on the HMD was set to 9% of the full-screen area at the center of a black background, that is, 30% of the screen's width and height.

The HMD is equipped with an organic light-emitting diode (OLED) panel, which uses a luminance control technique based on average pixel level (APL). The effect of APL was analyzed by comparing color patches with sizes of 9% and 100% of the full screen size, with the background of the 9% patch set to black. The luminance difference for white was only 2.9 cd/m² (3.1%), and there was no visually noticeable change in brightness. The color difference between the 9% and 100% sizes was 1.3 for white and 0.1, 0.3, and 0.7 for red, green, and blue, respectively. Thus, the APL was not considered in this characterization. To identify the color difference between the left and right lenses, the colors on both lenses were measured using the 86 patches; the average color difference between the two lenses was 1.28. For the characterization, the color on the right lens was used as the reference.

As shown in Figure 1A, the HMD has a wide color gamut, comparable to the P3 color space, and a maximum luminance of approximately 94 cd/m². The correlated color temperature of the HMD white point is 7174 K, which is higher than that of the D65 white point. Table 1 shows the CIEXYZ values and the chromaticity coordinates (x, y) for the maxima of the red, green, blue, and white colors. As shown in Table 1, the luminance of the sum of the RGB channels is 7% higher than that of the white patch, which implies unsatisfactory additivity. Figure 1B shows the tone-curve characteristics of the white color and the sum of the RGB channels as a function of the digital RGB values (dRGB on the x-axis). The optimized gamma values are 2.43, 2.25, 2.33, and 2.33 for the red, green, blue, and white channels, respectively.
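The optimized gammas above suggest a simple per-channel power-law tone curve; a sketch of that model, where the power-law form itself is our assumption (the full characterization model of the paper may include additional terms) and L_max = 94 cd/m² is the approximate white luminance reported:

```python
# Simple per-channel gamma model of the HMD tone curve:
#   L(d) = L_max * (d / 255) ** gamma
# Gamma values are those reported for the DK2 channels; the power-law form
# is an assumption consistent with those optimized gammas.
GAMMA = {"red": 2.43, "green": 2.25, "blue": 2.33, "white": 2.33}

def channel_luminance(d: int, l_max: float, gamma: float) -> float:
    """Predicted luminance (cd/m^2) for a digital drive value d in 0..255."""
    return l_max * (d / 255.0) ** gamma

white_peak = channel_luminance(255, 94.0, GAMMA["white"])  # full drive gives l_max
```

Because gamma exceeds 1, the mid-gray drive level (128) maps to well under half of the peak luminance, matching the concave tone curve in Figure 1B.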


Pseudoisochromatic stimuli are simple patterns created in the laboratory to simulate objects observed in natural settings. They are composed of patches of variable color and luminance forming a target embedded in a field, with target and field differing from each other only in color; the spatial and luminance noise ensure that color discrimination is essential for detection and identification of the target.

Few investigations have focused on the influence of stimulus parameters of pseudoisochromatic patterns on color visual performance, such as the range of luminance noise, spatial noise, and number of spatial patches (Regan et al., 1994; Souza et al., 2014). Luminance noise has an important influence on subjects' ability to discriminate the target of pseudoisochromatic patterns, and some studies have reported in detail how the luminance noise of pseudoisochromatic stimuli was modulated in their experimental paradigms (Regan et al., 1994; Goulart et al., 2013; Souza et al., 2014). In the work of Regan et al. (1994), the luminance of any given patch varied from trial to trial and was randomly assigned to one of six equally spaced and equally probable levels in a range starting at 7.6 cd/m². In the case of Goulart et al. (2013), the stimulus arrangement, occupying the entire screen, was composed of circles of different sizes at six levels of luminance that varied randomly between 7 and 15 cd/m²; finally, for Souza et al. (2014), the minimum and maximum luminance values of the noise were 8 and 18 cd/m², respectively, with a Weber contrast of the noise of about 53–55%. In all cases, a single value of mean luminance was used. The thresholds at 22 and 25 cd/m² (or at least at 25 cd/m²) obtained in this study are lower than the thresholds in the Regan et al. (1994), Goulart et al. (2013), and Souza et al. (2014) studies, whose highest mean luminances were 17, 15, and 18 cd/m², respectively. Since the Weber contrast in the three named studies was 0.53–0.55, i.e., lower than the 80% of the CCP here, this is in line with the present main finding that the lower the contrast, the lower the thresholds. The thresholds reported in these former studies are expected to be lower than in the CCP, but comparable to the CDP at a mean luminance of 16 cd/m² (contrast of 0.61) or 19 cd/m² (contrast of 0.54).

This was the first study to focus on how the manner in which luminance modulation was applied to the luminance noise influences the discrimination of pseudoisochromatic stimuli. Two different forms of luminance noise modulation were studied: either keeping the absolute difference between the maximum and minimum luminance constant and allowing the Weber contrast to change (CDP), or keeping the Weber contrast constant and allowing the absolute difference between the maximum and minimum luminance of the noise to change (CCP).
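The two protocols can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the noise contrast is assumed to be (Lmax − Lmin)/Lmax, and the constant difference of 14 cd/m² for the CDP is inferred from the contrasts of 0.61 and 0.54 reported at mean luminances of 16 and 19 cd/m²:

```python
def cdp_noise_bounds(mean_lum: float, delta: float = 14.0):
    """CDP: constant absolute difference; the noise contrast varies with mean."""
    l_min = mean_lum - delta / 2.0
    l_max = mean_lum + delta / 2.0
    return l_min, l_max, (l_max - l_min) / l_max

def ccp_noise_bounds(mean_lum: float, contrast: float = 0.8):
    """CCP: constant noise contrast; the absolute difference varies with mean."""
    # Solve (l_max - l_min) / l_max = contrast with (l_min + l_max) / 2 = mean.
    l_max = 2.0 * mean_lum / (2.0 - contrast)
    l_min = l_max * (1.0 - contrast)
    return l_min, l_max, l_max - l_min
```

Under these assumptions, cdp_noise_bounds(16.0) gives noise luminances of 9 and 23 cd/m² (contrast ≈ 0.61), while ccp_noise_bounds widens the luminance span as the mean increases so the contrast stays fixed.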

The results indicated that subjects' responses to pseudoisochromatic stimuli depended not only on the chromatic information present in the pattern but also on the protocol used to create the luminance noise. Switkes et al. (1988) investigated the effects of luminance masking on color detection: masking at low luminance contrast reduced the color contrast detection threshold, whereas masking at high luminance contrast increased it. They indicated that the reduction in color contrast detection only occurred when the luminance masking was about 32 times its own threshold. They suggested that a mechanism producing these effects would be a direct, attenuated input from the luminance system onto the chromatic system, with the attenuation reflecting the lower contrast sensitivity of the color detection mechanism (P pathway) compared with the mechanism that processes luminance contrast.

It was previously observed that color discrimination becomes poorer at low mean luminance levels than at high levels (MacAdam, 1942; Brown, 1951; Wyszecki and Fielder, 1971). These previous studies used a larger range of mean luminance values than the current work. In the present study, we found for the CDP a decrease in color discrimination as a function of mean luminance within a very small luminance range: thresholds were significantly larger at the two lowest mean luminances of the noise, 10 cd/m² and 13 cd/m², compared with the highest one, 25 cd/m². Nevertheless, for the CCP noise protocol we found no significant difference between the values obtained within the mean luminance range that we studied.

We used a range of mean luminances inside the photopic range, with maximum and minimum values differing by slightly more than one log unit. Although there is no definite indication in the literature of the luminance boundary between the mesopic and photopic ranges, probably only the stimulus with a mean luminance of 10 cd/m² lies at the upper border of the mesopic range. As we found differences between conditions mixing mesopic and photopic luminances (lowest mean luminance conditions) and purely photopic conditions (highest mean luminance condition), we could hypothesize that the results were influenced by a contribution of rods to color perception (Zele and Cao, 2015). Rods influence color perception by decreasing the saturation of spectral lights, and they improve discrimination at long wavelengths or impair the ordering of hues along the tritan axis (Stabell and Stabell, 1977; Buck et al., 1998; Knight et al., 1998). However, our results probably cannot be explained by a rod influence on color perception, because the effect occurred only for one of the tested protocols (CDP), we measured color discrimination at the threshold level, we found no specific improvements or impairments of color discrimination, and we used only foveal vision.

When the difference between the maximum and minimum luminance in the luminance noise was kept constant (CDP) across the range of mean luminances used, subjects' performance (chromatic discrimination and RTs) improved with mean luminance, following the Weber contrast. In this case, as the Weber contrast of the noise decreases, subjects' visual performance increases, reflecting the fact that the Weber contrast of the noise is a relevant parameter. In the CDP case, the visual response also depended on the chromaticity of the stimulus: when the stimulus was modulated along the (L−M) chromatic axis, the response was independent of mean luminance, but not when stimuli were modulated along the [S−(L+M)] chromatic axis. Meanwhile, when the Weber contrast of the luminance noise was kept constant (CCP protocol), subjects' visual performance remained constant throughout the range of mean luminances tested, with stimuli modulated along either the (L−M) or [S−(L+M)] chromatic axes. The differences between the RT data for the two luminance noise protocols (CDP and CCP) were significantly lower at the lowest mean luminance of the noise, 13 cd/m², compared with the highest one, 25 cd/m². It seems that as the mean luminance of the noise decreases, the dependence on the luminance noise protocol becomes non-significant.

The range of Weber luminance contrasts of the noise used in this work was relatively high, between about 40 and 85%, well inside the dynamic range of visual pathways that combine low luminance contrast sensitivity with high color contrast sensitivity, such as the P or K pathways. As suggested by Souza et al. (2014), the P pathway could be an adequate candidate to integrate luminance contrast and color contrast information in the perception of pseudoisochromatic stimuli such as those used in the current study, since P cells are very sensitive to red–green contrast and only weakly sensitive to luminance contrast (Kaplan and Shapley, 1986; Lee et al., 1989a,b, 1990, 2011), as are K cells, which decode blue–yellow information and can also contribute to luminance perception (Ripamonti et al., 2009).

Independently of the mean luminance of the noise and the noise protocol used, there were differences in gain between the (L−M)- and S-opponent mechanisms: gain was greater for stimuli modulated along the +L−M (0°) and −L+M (180°) chromatic axes than for stimuli modulated along the [S−(L+M)] chromatic mechanism. At the same time, the gain was greater for −S (270°) stimuli than for +S (90°) stimuli. This result is not new and is in agreement with previous reports by Parry et al. (2004) and O'Donell et al. (2010). Sankeralli and Mullen (2001) found that the two opposing submechanisms +L−M (0°) and −L+M (180°) possess a close degree of symmetry in the weighting of their cone inputs. In comparison, RTs generated in response to −S stimuli also tended to be shorter than those for +S stimuli at equal multiples above the detection threshold. Moreover, these results confirm previous findings that the tritan system is more sluggish than the L/(L+M) system (e.g., Smithson and Mollon, 2004; Bompas and Sumner, 2008).

On the other hand, the chromatic discrimination threshold and RT represent different behavioral measures: both can be obtained in the same (detection to discrimination) task, but the former refers to the discriminability of the stimuli, whereas the latter refers to the chronology of discrimination. The two functions reflect changes in luminance (achromatic) contrast in a similar way but are not identical (see, e.g., Tiippana et al., 2001). Although the discrimination threshold and RT are different performance measures, the results presented here show that spatial luminance noise affects both in similar ways: RTs and color discrimination thresholds were dependent on the Weber luminance contrast between the maximum and minimum luminance of the noise.

It is clear from the above that, in order to compare results across studies, it is important to specify how the luminance noise was generated; if it has been created according to a protocol like the CDP, the condition of the stimulus presented should also be indicated.

Finally, we think that other aspects of luminance noise in color vision perception can be examined in future investigations, such as how the presence of luminance noise in a pseudoisochromatic stimulus influences color discrimination or target detection. Other aspects of luminance noise in pseudoisochromatic stimuli that potentially influence chromatic discrimination, and hence segregation of the target from the field, can also be investigated in the future, such as a greater range of mean luminances and/or luminance contrasts.

Effects of absolute luminance and luminance contrast on visual discrimination in low mesopic environments

Recent research has revealed considerable decline in visual perception under low luminance conditions. However, systematic studies on how visual performance is affected by absolute luminance and luminance contrast under low mesopic conditions (<0.5 cd/m²) are lacking. We examined performance in a simple visual discrimination task under low mesopic luminance conditions in three experiments in which we systematically varied base luminance and luminance contrast between stimulus and background. We further manipulated the eccentricity of the stimuli because of known rod and cone gradients along the retina. We identified a "deficiency window" for performance as measured by d′ when luminance was below 0.06 cd/m² and luminance contrast, measured as the luminance ratio between stimulus and background, was below 1.7. We further calculated performance-based luminance and contrast efficiency functions for reaction times (RTs). These power functions demonstrate the contrast asymptote needed to decrease RTs and how such a decrease can be achieved with various combinations of absolute luminance and luminance contrast manipulations. Increased eccentricity resulted in slower RTs, indicative of longer scan distances. Our data provide initial insights into performance-based efficiency functions in low mesopic environments, which are currently lacking, and into the physical mechanisms utilized for visual perception in these extreme environments.
