Brilliant summary - mirrors my thoughts and experience quite closely.
Validation/testing has always been a challenge, especially since dashboards are by definition quite “full stack” implementations: testing just the front end or back end is not sufficient, and testing both in isolation is often challenging due to the huge variation in possible input data.
Mocking data is also hard because dashboards often lean heavily on database-side calculations/filtering.
All of this has led me to take quite a full-fat approach to testing dashboards: using a real DB populated with test data and testing the complete application stack (driven by something like Playwright or Cypress), alongside more granular unit tests where a mocked data layer may be used.
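To make the “real DB populated with test data” part concrete, here’s a minimal sketch in Python using an in-memory SQLite database. The schema, data, and function names are all hypothetical, just to illustrate testing the data layer against known inputs:

```python
import sqlite3

# Hypothetical summary query of the kind a dashboard widget might run;
# the schema and names are illustrative, not from any real project.
SUMMARY_SQL = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY region
"""

def fetch_region_totals(conn: sqlite3.Connection) -> list[tuple[str, float]]:
    """The data-layer function the dashboard would call."""
    return conn.execute(SUMMARY_SQL).fetchall()

def make_test_db() -> sqlite3.Connection:
    """A real (if tiny) database populated with known test data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("east", 10.0), ("east", 5.0), ("west", 7.5)],
    )
    return conn

# Exercising the database-side aggregation against fixed inputs means a
# failure points at the data layer rather than the charts on top of it.
conn = make_test_db()
assert fetch_region_totals(conn) == [("east", 15.0), ("west", 7.5)]
```

The same fixture database can then back the Playwright/Cypress end-to-end run, so the full-stack tests and the granular ones agree on what the data should be.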
I’m also looking at introducing visual regression tests the next time I work on this kind of thing. The visual aspects of dashboards can easily drift over time even if the data is correct. You’re often theming charting libraries, for example, and the theme’s fidelity can drift slightly if you update the library without checking every detail of the visual appearance/layout each time. Or you may not even notice the “visual drift”…
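At its core, a visual regression test compares a stored baseline screenshot against a fresh render and fails if they diverge beyond a tolerance. Real tools (Playwright’s screenshot assertions, Percy, etc.) do this with smarter perceptual diffs; this is just a bare-bones sketch of the idea, with stand-in byte buffers in place of actual screenshots:

```python
def pixel_diff_ratio(baseline: bytes, current: bytes) -> float:
    """Fraction of bytes that differ between two same-sized raw images."""
    if len(baseline) != len(current):
        raise ValueError("screenshots differ in size")
    differing = sum(a != b for a, b in zip(baseline, current))
    return differing / len(baseline)

# Tolerate tiny anti-aliasing differences but flag real drift.
THRESHOLD = 0.01

baseline = bytes([0, 0, 0, 255] * 100)   # stand-in for a stored baseline screenshot
current  = bytes([0, 0, 0, 255] * 100)   # stand-in for a freshly rendered one
assert pixel_diff_ratio(baseline, current) <= THRESHOLD
```

The threshold is the interesting design choice: too tight and font-rendering noise causes flaky failures, too loose and the subtle theme drift described above slips through.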
> Validation/testing has always been a challenge, especially since dashboards are by definition quite “full stack” implementations: testing just the front end or back end is not sufficient, and testing both in isolation is often challenging due to the huge variation in possible input data.
Constantly evolving, but I've always tried hard to keep calculations away from the display tools. So I put lots of things in SQL stored procedures, or in Python, or more broadly in tooling that lets me recreate the summary data without the front end. My nightmare is having to check a PowerBI calc that is itself based on an underlying SQL calc. Which one is wrong? Now spend twice as long figuring it out!
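The payoff of keeping the calculation in SQL is that you can check it head-to-head against an independent recomputation, with no BI tool in the loop. A rough sketch (SQLite as a stand-in here, since it has no stored procedures; the table and data are invented for illustration):

```python
import sqlite3

# Raw rows plus the SQL aggregation a stored procedure or view might
# encapsulate. Because the calc lives in SQL, we can validate it against
# an independent Python recomputation of the same rows.
ROWS = [("2024-01", 100.0), ("2024-01", 50.0), ("2024-02", 80.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (month TEXT, amount REAL)")
conn.executemany("INSERT INTO revenue VALUES (?, ?)", ROWS)

sql_result = dict(conn.execute(
    "SELECT month, SUM(amount) FROM revenue GROUP BY month"
))

# Independent recomputation of the same summary in plain Python.
py_result: dict[str, float] = {}
for month, amount in ROWS:
    py_result[month] = py_result.get(month, 0.0) + amount

assert sql_result == py_result  # one source of truth for the calculation
```

If the two disagree, you know the bug is in one well-defined place, rather than spread across a SQL calc and a display-tool calc layered on top of it.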
> The visual aspects of dashboards can easily drift over time even if the data is correct. You’re often theming charting libraries, for example, and the theme’s fidelity can drift slightly if you update the library without checking every detail of the visual appearance/layout each time. Or you may not even notice the “visual drift”…
Love it, very smart. Why I prefer tables for many things too - one less thing to maintain and check.
PowerBI is a WHOLE other kettle of fish - I haven't spent long enough with it to figure out how you'd build a test suite around it - but it sounds tricky!!