Accuracy and Ethical Reporting
Driscoll Reflection Piece
What?
This discussion task centred on the ethical responsibilities of a data professional when analysis outcomes conflict with a client’s interests. Using the Whizzz cereal case, I explored whether presenting selectively favourable analyses, while technically leaving the data untouched, can still be considered ethical. My initial post focused on the idea that analytic choice itself carries moral weight, particularly when multiple valid paths exist and only the most flattering results are emphasised.
I argued that selective reporting does not simply reflect different “interpretations” of the data but actively shapes the conclusions others draw from it. Drawing on work by Gelman and Loken (2013) and Ioannidis (2005), I framed this as a problem of narrative construction rather than data manipulation. The core position I took was that transparency is not achieved merely by avoiding fabrication, but by making analytic decisions, uncertainty, and limitations visible, especially when findings suggest potential harm.
So what?
Writing this post forced me to confront how closely the Whizzz scenario mirrors my own professional practice in the aviation assessment department. I routinely analyse exam outcomes for thousands of students across multiple curricula, cohorts, and attempts. The underlying data are fixed, but the way I summarise them is not. Choices such as whether to report a single aggregated pass rate, separate first attempts from re-sits, compare against previous semesters, or contextualise results after curriculum changes can materially alter how a faculty’s performance is perceived.
What struck me most was realising that these choices are ethically loaded, even when they are methodologically defensible. Presenting only an overall pass rate may mask systemic issues in first-attempt performance, while focusing narrowly on first attempts may unfairly ignore genuine improvement over time. As with the Whizzz case, the risk lies in allowing institutional or reputational pressures to determine which story is told. This reflection sharpened my awareness that “neutral reporting” is not a default state; it is something that must be actively designed and justified.
Now what?
Going forward, this activity has reinforced the importance of being explicit about analytic intent and scope in my reporting. When producing exam analysis, I am more conscious of framing primary indicators clearly, distinguishing exploratory breakdowns, and documenting why particular views of the data are being used. Where results are likely to reflect poorly on a department, I see greater value in contextualising rather than smoothing them away, for example by pairing outcomes with cohort size, curriculum changes, or historical trends.
More broadly, this reflection has strengthened my confidence in resisting purely cosmetic reporting. Just as Abi retains agency even when anticipating selective use by the manufacturer, I also retain responsibility for how far I am willing to let my analyses be simplified or repackaged. This task has helped me move from thinking about ethics as compliance with rules to seeing it as an ongoing practice of judgement, transparency, and professional integrity in everyday analytical work.
References
- American Statistical Association (2022) Ethical Guidelines for Statistical Practice.
- Association for Computing Machinery (2018) ACM Code of Ethics and Professional Conduct.
- Gelman, A. and Loken, E. (2013) ‘The garden of forking paths’.
- Ioannidis, J.P.A. (2005) ‘Why most published research findings are false’, PLoS Medicine, 2(8), e124.