Sceneform != Filament. Sceneform was an AR-oriented framework that used Filament as its renderer. Filament itself is very much alive and under active development.
Like I said, Sceneform used Filament (not the other way around). I know of a few Google products that use Filament but, having left the company, I'm not sure what I'm at liberty to talk about, unfortunately. Btw, I'm one of the authors of Filament :)
You are absolutely right, this diagram is misleading and I've been meaning to replace it with one of my own since forever, but it has fallen to the bottom of a long list of things to do (including many, many things I would like to describe in that document, like our approach to transmission/absorption/refraction, our post-processing pipeline, etc.).
Oh, no worries. Lists of doables are infinite, and life is very not. It's merely something I reflexively note out of long habit.
Fwiw, one remediation which appeals to me, when using flawed content, is adding a "bogus" tag. As in "Figure N Mumble (source WP). Somewhat flawed." Or sometimes "Bogus <attribute or issue>". So the reader maybe gets a heads-up that there's a known issue - a "first, do no harm" thing. Modulo esthetic constraints, and I've no idea if it actually helps. And it might be phrased more accessibly. I don't know of any associated education research.
Big picture, societal-level impacts of commonly flawed content seem unlikely to improve without being addressed systemically, so they don't seem a priority focus when pursuing local excellence. For example, students are told the Sun is yellow in kindergarten, and repeatedly thereafter, with only a few later getting an "oops, nope, our bad" in grad-school discussions of common misconceptions in astronomy education... and careful avoidance of yellow Suns in, say, one weather app seems unlikely to move that needle much.
That subsurface scattering model is not what you would use for skin, etc. It's a fairly simple approximation similar to what Unreal and Frostbite have used (use?) in the past to cheaply approximate somewhat translucent materials. It's mostly still a TODO because it's not that interesting.
The engine itself only has two external dependencies: STL (internal use only, not part of the APIs) and robin-map. The (optional) Vulkan backend adds a third one: vkmemalloc.
The host tools (material compiler, etc.) do have more dependencies indeed.
There are several spectral renderers out there, such as Weta Digital's Manuka. I don't know if they bother with parts of the EM spectrum that are outside of the visible range though. I imagine UVs can be important to model in some situations.
Handling non-visible spectrum isn't much of an issue, after all the wavelength used when path tracing can be whatever (some have used path tracing for sound). Though getting realistic data for non-visible parts may prove tricky depending on the material.
IIRC the issue is that if you can ignore fluorescence, then reflection is simply an element-wise multiplication of the incoming light at the wavelengths under consideration[1] with the reflection coefficient of the material at those wavelengths. With fluorescence, that turns into a matrix multiplication, with obvious speed implications.
If only a single wavelength is considered at a time, then the wavelength must change upon reflection, otherwise there's no way for fluorescence to occur. That can also have performance implications; for example, the conversion coefficients to/from regular color spaces need to be recalculated.
At least that's my understanding having worked on a physically-based renderer which did do spectral rendering but not fluorescence.
[1]: using for example binned wavelengths or stratified wavelength clustering.
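To make the speed implication concrete, here's a toy numpy sketch of the difference (the bin count, reflectance values, and re-radiation coefficients are all made up for illustration): without fluorescence, reflection over n wavelength bins is an element-wise product, O(n); with fluorescence, energy absorbed in one bin can be re-emitted in another, so reflection becomes a full re-radiation matrix (a Donaldson-style matrix), O(n²) per interaction.

```python
import numpy as np

# Incoming radiance in 4 illustrative wavelength bins (short -> long).
incoming = np.array([0.8, 0.6, 0.4, 0.2])

# Without fluorescence: per-bin reflectance, element-wise multiply, O(n).
reflectance = np.array([0.9, 0.5, 0.3, 0.1])
reflected = incoming * reflectance

# With fluorescence: entry [i, j] is the fraction of light absorbed in
# bin j that leaves in bin i. Off-diagonal terms below the diagonal
# model re-emission at longer wavelengths (Stokes shift), so the whole
# thing is a matrix-vector product, O(n^2).
donaldson = np.array([
    [0.9, 0.0, 0.0, 0.0],
    [0.1, 0.5, 0.0, 0.0],  # some bin-0 energy re-emitted in bin 1
    [0.0, 0.2, 0.3, 0.0],
    [0.0, 0.0, 0.1, 0.1],
])
reflected_fluor = donaldson @ incoming
```

Note the non-fluorescent case is just the special case where the matrix is diagonal, which is exactly why it collapses back to an element-wise multiply.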
When is Wenzel going to release Mitsuba 2? The current version of Mitsuba is in bug-fix-only mode. :( I loved Mitsuba's Python bindings; they made it super easy to programmatically do cool renders.