In light sensitivity circles, a commonly held view is that the directionality of LED light is completely novel, causing glare and harm.
Their mechanistic explanations for how that happens violate the consensus framework for optics (and sometimes plain logic as well).
On the other hand, many of us do have the perception that LEDs are indeed different. Glare has increased dramatically since their introduction as general light sources, and a minority of people have aversive reactions to them.
Research results and academic statements within the consensus framework have not yet accounted for the unique problems caused by LED sources. This may be for two reasons:
We could get a clear explanation, but we're not looking in the right places.
My best bet is luminance — a crucial concept, yet it is rarely studied, measured, or regulated in lighting applications. The few available studies on the relationship between glare and luminance remain inconclusive.
Other known light properties might hold the key as well (see my earlier blog post on The Factors of Glare). Here, I'll focus on luminance as an area in great need of systematic and honest research.
We cannot get a clear explanation within the standard framework — the framework itself is missing something.
This is the claim of many light sensitive people, even though their explanations may be insufficient to support it. But what if there is an alternative framework in which some version of these explanations does hold up?
LEDs emit light in one hemisphere according to a cosine (Lambertian) distribution, where intensity is highest along the surface normal and falls off as we move away from it, proportionally to the cosine of that angle.
While the distribution itself is a simple and old concept (it also describes, e.g., light diffusely reflected from a wall), it is unique among light sources: most other sources have a more omnidirectional (~ isotropic) global light intensity distribution.
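As a minimal sketch of the difference (illustrative numbers, not a measurement of any real device):

```python
import math

def lambertian_intensity(i0: float, theta_deg: float) -> float:
    """Radiant intensity of a Lambertian (cosine) emitter at an angle
    theta from the surface normal: I(theta) = I0 * cos(theta)."""
    theta = math.radians(theta_deg)
    return i0 * max(math.cos(theta), 0.0)  # no emission behind the surface

def isotropic_intensity(i0: float, theta_deg: float) -> float:
    """An idealized isotropic source radiates equally in all directions."""
    return i0

for angle in (0, 30, 60, 90):
    print(f"{angle:2d} deg  Lambertian: {lambertian_intensity(100.0, angle):6.1f}"
          f"  isotropic: {isotropic_intensity(100.0, angle):6.1f}")
```

At 60 degrees off-axis the Lambertian emitter already delivers only half of its peak intensity, while the isotropic one is unchanged in every direction.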
Gaertner, A. A. (2002). LED measurement issues. Institute for National Measurement Standards, National Research Council of Canada, Ottawa, Canada, 1-15.
Let's assume that in the above comparison, our observer stands on the right side of the images — where intensity is highest from the LED (and equal to other directions with the isotropic source).
Illuminance — the amount of light reaching the observer — will be the same in both cases; therefore, it cannot account for any difference in glare.
Sure, this also means that LEDs are more efficient at delivering a given amount of light in directional applications. Abusing this property can lead to overlighting (and thus excess glare as well). However, illuminance is the most measured and controlled property in lighting and certainly not a sufficient explanation for why LEDs appear more glaring.
While the eyes are very adaptable when it comes to light levels (illuminance at the eye), they cannot resolve high contrasts (large differences in luminance) within the visual field. Correspondingly, we are poor judges of absolute light levels, yet extremely sensitive to minor luminance differences (perceived as brightness).
The above graph does not tell us how the two sources would compare regarding luminance: that depends on their size and the distribution of luminance across their apparent surface from the observer's perspective.
Putting spectral considerations aside (as this post only deals with spatial questions), the luminance of a light source depends on:
how much light is emitted (flux),
in which directions,
from how large of a luminous surface, and
how that light emission is distributed across the luminous surface. If the distribution is completely homogeneous, then maximum luminance is minimized (and equal to the average). If there is a large variance, then maximum luminance can greatly differ from what the average would show.
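A toy example of the last point, with made-up luminance values: two sources can have identical average luminance while their maxima differ wildly.

```python
# Hypothetical per-patch luminance maps (cd/m^2) for two sources of equal
# size and total output: one homogeneous, one with a bright hot spot.
homogeneous = [20_000] * 9
hot_spot = [5_000] * 8 + [140_000]  # same average, concentrated peak

for name, patches in (("homogeneous", homogeneous), ("hot spot", hot_spot)):
    avg = sum(patches) / len(patches)
    print(f"{name:11s} average = {avg:7.0f} cd/m^2, maximum = {max(patches):7d} cd/m^2")
```

An average-based measurement would report both sources as identical, while the second one carries a peak seven times higher.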
Below is a table indicating usual luminance ranges for common light sources.
Yes, LEDs are on the brighter side, but this property alone should not make them an outlier.
Here's the catch though: light intensity distribution affects luminance on the fixture level.
In many applications, illumination from the source is projected onto some optical element where it creates a new luminous surface with its own luminance!
Generally, this is what matters in real life, what we see, e.g., when we look at the headlights of a car.
Expanding on our earlier graph, if we place an isotropic vs. a Lambertian source behind a lens, the latter has higher maximum luminance across the effective projected luminous lens area (the surface the observer sees).
Illustration: How the source distribution (isotropic vs. cosine) creates luminance distribution differences on the effective projected luminous lens area. This is a simplified and exaggerated example for illustration purposes; the difference is not large (illuminance from the isotropic source scales with the cube of the cosine of the angle, while that from the Lambertian source scales with its fourth power). Though it may contribute, it likely cannot account for all the reported issues with LEDs.
This is a direct consequence of the cosine distribution. Again, without the other optical elements involved, we cannot directly see that distribution of an LED: it only becomes visible in this case because it first falls on a surface (the lens in front of it), from where it projects into the viewer's eyes. So here, the luminance distribution (projected onto the observer's retina) will be a bell shaped pattern.
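The cube vs. fourth-power difference mentioned in the caption can be sketched numerically, assuming an idealized flat lens lit by a point-like source behind it:

```python
import math

ANGLES = list(range(0, 61, 10))  # degrees off-axis across the lens

def lens_profile(power: int) -> list[float]:
    """Relative illuminance across a flat surface from a point-like source:
    cos^3(theta) for an isotropic source (inverse square law + obliquity),
    cos^4(theta) for a Lambertian one (its emission adds one more cosine)."""
    return [math.cos(math.radians(a)) ** power for a in ANGLES]

iso = lens_profile(3)  # isotropic source behind the lens
lam = lens_profile(4)  # Lambertian (LED) source behind the lens
for a, i, l in zip(ANGLES, iso, lam):
    print(f"{a:2d} deg  isotropic: {i:.3f}  Lambertian: {l:.3f}")
```

Normalized to the same on-axis value, the Lambertian profile is always the more peaked of the two: by 60 degrees off-axis it has fallen to half of the isotropic profile.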
If we simply look at a bare LED, we'll see a very different distribution! (An important point that I'll get back to.)
A bell curve distribution is a known trigger of glare perception. This occurs because eye scatter causes any strong light source to appear Gaussian on the retina, overpowering nearby objects of lower luminance. Since vision mainly deals with contrast and not absolute light levels, any Gaussian-like distribution is therefore a reliable cue of a source that may be dangerously bright and should be avoided — hence eliciting an aversion response (discomfort glare).
So, the cosine distribution from an LED can indirectly result in more glare through higher maximum luminance on the fixture level and a bell shaped (Gaussian-like) luminance distribution across the effective projected luminous lens area. More studies are needed to determine the real-world significance of this effect, though existing research confirms the principle is measurable.
The magnitude of this effect, even combined with the glare illusion effect, is quite unlikely to explain the whole issue, but who knows? In two decades of LED hype, no lighting lab wanted to assign three months of a PhD student's time to run a simple study and test it. Go figure.
Illustration: A Gaussian luminance pattern
The claim by those on the light sensitive side is that direct viewing of an LED, somehow through the cosine distribution, is causing glare. This has been visualized as a modeled cross section of the luminance distribution in front of the light source — which, similarly to the above illustration, will be a bell shaped pattern.
The issue is: there has never been any explanation proposed for how this could be perceived at all! We never see light distributions in empty space, only the luminance of surfaces. And the luminance pattern of an LED is not Gaussian-like: in general, it is a more or less homogeneous rectangle.
Tyukhova, Y., & Waters, C. (2014). An assessment of high dynamic range luminance measurements with LED lighting. Leukos, 10(2), 87-99.
That is what gets imaged onto the retina, not some modeled luminance distribution in empty space between the source and the eye. And if the eye focused in front of the light source, it would still not see the luminance distribution at that location in space: the retina then simply receives a blurred version of the homogeneous square of the LED. That blur further reduces maximum luminance (on the retina), instead of the hypothesized perception of the high maximum luminance in the center of the Lambertian — and invisible... — light field.
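The peak-lowering effect of defocus can be sketched with a toy one-dimensional convolution (all values arbitrary; a crude stand-in for retinal blur):

```python
def blur_1d(profile: list[float], kernel: list[int]) -> list[float]:
    """Convolve a 1D luminance cross-section with a normalized blur kernel."""
    k = [v / sum(kernel) for v in kernel]
    half = len(k) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(profile):
                acc += w * profile[idx]
        out.append(acc)
    return out

# A small homogeneous bright patch (the bare LED die) on a dark background:
die = [0.0] * 12 + [1.0] * 3 + [0.0] * 12
blurred = blur_1d(die, [1, 2, 3, 4, 3, 2, 1])  # arbitrary defocus kernel
print(max(die), round(max(blurred), 3))  # prints: 1.0 0.625
```

The total flux is unchanged, but it is spread over a larger area, so the retinal peak drops; blur cannot conjure up a luminance maximum that is not there on the source's surface.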
Again, we are back to the basic issue that's hard to wrap one's mind (and eyes) around: we don't see light, we only see illumination.
Despite many years of debates around this topic, I have yet to see any attempt at explaining why it would make sense, from perception's point of view, to model a luminance distribution in space away from the source.
If there is one, that would be groundbreaking. In the last section of this post, I'll list some wild ideas about this...
So, if luminance could be the key to understanding and solving the glare problem with LEDs, why have two decades not been enough to do so?
Yes, it is harder to measure than most other light properties, but that should not be a huge hindrance: better-equipped labs can measure it easily, and manufacturers already have such data for most lamps (they just usually don't share it, since they are not required to).
Luminance is rarely part of regulatory or guideline requirements. Therefore, it is rarely studied. Since there's little data on its relationship to glare, common glare models do not incorporate it. And if the best models and revered review papers don't deal with luminance as a glare factor, then why would it become part of any regulation?
Veto, P. (2024). Comment on: Predicting discomfort from glare with pedestrian-scale lighting: A comparison of candidate models using four independent datasets, by Abboushi, Fotios and Miller. Lighting Research & Technology, 56(3), 247-249.
Underneath the related political and financial motives, there is a technical reason for this vicious circle: a "small source" threshold that is commonly assumed to negate any need for luminance measurements or regulation.
Here's how it works: a small source is one where the retinal image size only depends on the resolution of the eye, not the actual size of the source. In other words, it is so small that its size doesn't matter any longer, because the eye scatters light enough so that its retinal image will be some larger size anyway.
In laser safety studies, this size is considered to be around 1.5 mrad (angular subtense). A similar threshold of 0.3 degrees (visual angle) is often assumed for perception — based on a minuscule literature that (i) does not demonstrate such a threshold for a single source, (ii) excludes studies with divergent conclusions, and (iii) contains no study that has examined this question directly and parametrically.
Even if the above threshold is perfectly sufficient for laser safety applications, perception is a completely different animal — so why would we just copy that threshold for vision? The eye is not a passive camera with a given "resolution" and the perceived image hardly ever reflects the point spread function of the eye. It is a dynamic process where constant eye movements multiply the effective resolution and where all kinds of corrections help us have a much clearer image than what the optics of the eye would passively allow.
As a consequence, the actual "resolution" of the eye is about two orders of magnitude finer than the above limit!
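To put these angular sizes side by side (the source size, viewing distance, and the ~0.05 mrad hyperacuity figure are illustrative assumptions, not measured values):

```python
import math

def angular_subtense_mrad(source_size_m: float, distance_m: float) -> float:
    """Full angle (in milliradians) subtended by a source at a distance."""
    return 2 * math.atan(source_size_m / (2 * distance_m)) * 1000

# Assumed, illustrative numbers: a 1 mm LED die viewed from 5 m.
alpha = angular_subtense_mrad(0.001, 5.0)

laser_small_source_mrad = 1.5                         # laser safety convention
perception_threshold_mrad = math.radians(0.3) * 1000  # 0.3 deg is ~5.24 mrad
hyperacuity_mrad = 0.05                               # assumed ~10 arcsec limit

print(f"LED die subtends {alpha:.2f} mrad")
print(f"0.3 deg 'small source' threshold = {perception_threshold_mrad:.2f} mrad")
# The die counts as "small" under both thresholds, yet it still subtends
# roughly four times the assumed hyperacuity limit.
```

In other words, a source that both conventions would average away as a featureless point is still well above what a hyperacute visual system can, in principle, resolve.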
So why would we average all luminance data across a size that is a hundred times larger than what we could theoretically still see?
Because consensus. We are back to the circular logic.
It's pretty simple, really. A large enough parametric study, presenting a single source at a time, could show if and where a realistic small source threshold applies for discomfort glare, across sources of different sizes and maximum apparent luminance values.
Aside from such lab-based measurements, more luminance data from real-world applications would also help us see more clearly. This is getting easier as digital imaging methods become ever cheaper and more widely available.
With this knowledge, we'd know much better if luminance explains the glare issues with LEDs. If not, it has to be some other property.
So what if luminance cannot explain the excess glare? Let's assume that we test two light sources that are identical in spectrum, luminance, and size, but one has an isotropic and the other a cosine spatial distribution. If we found that the latter caused more glare (vindicating the argument from light sensitivity circles), how could that happen mechanistically?
We think about our vision as fundamentally two-dimensional: the retina is a surface that receives a projection of the visual field. Then, to form a three-dimensional image on the higher perceptual level, cues of depth are added from binocular disparity, longitudinal chromatic aberration, blur, relative motion, size constancy, etc.
A light field camera works by measuring both intensity and direction, enabling 3D reconstruction.
Binocular vision operates somewhat similarly, although with limited sensitivity. If this were the reason behind the glare issues with LEDs, then monocular viewing should eliminate them. Since that does not happen (at least not completely), we can rule this out as a simple explanation. It is also unlikely that a single eye could effectively work as a light field camera (although neural tissue in front of the receptors may act in such ways, to a limited degree).
Theoretically, our retina is also capable of sensing magnetic fields, but we know nothing about the practical relevance of this ability, let alone its relevance to the LED issue at hand.
What else could it be?
We can see light polarization and it affects visual comfort. Can we see spatial structure in light in some further ways? If yes, how are wavefronts from LED light structured differently so that such perception is possible?
One idea is based on the notion that "point sources" evoke effects that are exclusively linked to certain fundamental geometric shapes.
LEDs are frequently referred to as point sources. This is usually done without any reference to what definition is used and I am always puzzled seeing this phrase, especially in lighting research journals.
Another context where the term tends to show up is the description of how a cosine shape forms in space for a magnetic field.
Illustration: Magnetic field
If we squint, the radial component of a magnetic dipole field shows similarities to LED light fields, as both involve a cosine-dependent angular distribution. But their spatial decay is different: light intensity falls off with the square of the distance, while a dipole's field falls off with the cube. Everything else differs as well when viewed from the current consensus framework, including the origin and nature of these two kinds of fields.
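To make the disanalogy concrete, here is a toy comparison of the two radial falloffs (angular cosine factors and all physical constants set aside; arbitrary units):

```python
# Light intensity decays with the inverse square of the distance r, while a
# magnetic dipole's field decays with the inverse cube (constants omitted).
def light_falloff(r: float) -> float:
    return 1.0 / r**2

def dipole_falloff(r: float) -> float:
    return 1.0 / r**3

for r in (1.0, 2.0, 10.0):
    print(f"r = {r:4.1f}  light: {light_falloff(r):.4f}  dipole: {dipole_falloff(r):.4f}")
# Doubling the distance quarters the light intensity but cuts the dipole
# field to an eighth; the analogy breaks down already at this basic level.
```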
Basic geometries are amusing and the way one turns into another feels fundamental across different domains of physics and life. They also hold the danger of false analogy fallacies though: without causal/mechanistic understanding, they can lead astray.
Is this analogy relevant in lighting? And can we ever see spatial organization within a light field or are we affected by it in some other ways?
Extreme examples are obvious: a source with high coherence creates visible interference patterns. We can see speckle patterns in small and powerful general light sources (like tail lights or even headlights of modern vehicles), but we do not assign much significance to this — even though this was not the case until a decade or two ago. The speckles are also not represented in any luminance measurement — even though the maximum luminance of a speckle is much higher than the average of the source and this also increases perceived brightness.
No. We can only perceive second-order consequences of light intensity distributions. Once these form, they fit within the concept of illumination and are already well understood. We have simply not yet done the right experiments, using these standard measures, to understand why contemporary lighting has resulted in an increase in complaints about glare and sensitivities.
Yes. We don't yet know how, but once discovered, this would change everything and potentially explain mysterious adverse reactions. It would also vindicate those who, in a search for answers to their own struggles, propose that LEDs are fundamentally different from other light sources.
I would not be surprised if a yet unknown dimension of light was discovered in the near future. One that likely links to properties of space in weird ways. This might also rhyme with the above discrepancy between the experiences of light sensitive people and our standard framework for lighting.