Evaluating and reviewing (reflexive) TA research: A guide for editors and reviewers
After reading far too many manuscripts that either mash up different versions of TA, or say they followed ‘Braun & Clarke’ and then do something completely at odds with what we’ve recommended, we developed detailed guidelines intended for editors and reviewers who receive manuscripts that use ‘thematic analysis’. These guidelines expand and clarify the points we initially made in our 15-point checklist for good quality (reflexive) TA, and they are useful beyond the editing and reviewing context.
The guidelines were published in our paper on quality in TA in the journal Qualitative Research in Psychology. We encourage you to share them with editors, reviewers, and others who might find them useful.
A tool for evaluating TA manuscripts for publication: Twenty questions to guide assessment of TA research quality
Adequate choice and explanation of methods and methodology
1. Do the authors explain why they are using thematic analysis (TA), even if only briefly?
2. Do the authors clearly specify and justify which type of TA they are using?
3. Is the use and justification of the specific type of TA consistent with the research questions or aims?
4. Is there a good ‘fit’ between the theoretical and conceptual underpinnings of the research and the specific type of TA (i.e. is there conceptual coherence)?
5. Is there a good ‘fit’ between the methods of data collection and the specific type of TA?
6. Is the specified type of TA consistently enacted throughout the paper?
7. Is there evidence of problematic assumptions about, and practices around, TA? These commonly include:
    - Treating TA as a single, homogeneous entity with one set of widely agreed-on procedures.
    - Combining philosophically and procedurally incompatible approaches to TA without any acknowledgement or explanation.
    - Confusing summaries of data topics with thematic patterns of shared meaning, underpinned by a core concept.
    - Assuming grounded theory concepts and procedures (e.g. saturation, constant comparative analysis, line-by-line coding) apply to TA without any explanation or justification.
    - Assuming TA is essentialist or realist, or atheoretical.
    - Assuming TA is only a data reduction or descriptive approach and therefore must be supplemented with other methods and procedures to achieve other ends.
8. Are any supplementary procedures or methods justified and necessary, or could the same results have been achieved simply by using TA more effectively?
9. Are the theoretical underpinnings of the use of TA clearly specified (e.g. ontological and epistemological assumptions, guiding theoretical framework(s)), even when using TA inductively (inductive TA does not equate to analysis in a theoretical vacuum)?
10. Do the researchers strive to ‘own their perspectives’ (even if only very briefly), their personal and social standpoint and positioning? (This is especially important when the researchers are engaged in social justice-oriented research and when representing the ‘voices’ of marginal and vulnerable groups, and groups to which the researcher does not belong.)
11. Are the analytic procedures used clearly outlined, and described in terms of what the authors actually did, rather than as generic procedures?
12. Is there evidence of conceptual and procedural confusion? For example, reflexive TA (Braun & Clarke, 2006) is the claimed approach, but procedures at odds with it are outlined, such as the use of a codebook or coding frame, multiple independent coders and consensus coding, or inter-rater reliability measures; or themes are conceptualised as analytic inputs rather than outputs, so that the analysis progresses from theme identification to coding (rather than from coding to theme development).
13. Do the authors demonstrate full and coherent understanding of their claimed approach to TA?
A well-developed and justified analysis
14. Is it clear what and where the themes are in the report? Would the manuscript benefit from some kind of overview of the analysis: a listing of themes, a narrative overview, a table of themes, or a thematic map?
15. Are reported themes topic summaries, rather than ‘fully realised themes’ – patterns of shared meaning underpinned by a central organising concept?
    - Have the data collection questions been used as themes?
    - If so, are topic summaries appropriate to the purpose of the research?
    - If the authors are using reflexive TA, is this modification in the conceptualisation of themes explained and justified?
    - Would the manuscript benefit from further analysis being undertaken, with the reporting of fully realised themes?
    - Or, if the authors are claiming to use reflexive TA, would the manuscript benefit from claiming to use a different type of TA (e.g. coding reliability or codebook TA)?
16. Is non-thematic contextualising information presented as a theme (e.g. the first theme is a topic summary providing contextualising information, while the rest of the reported themes are fully realised themes)? If so, would the manuscript benefit from this being presented as non-thematic contextualising information?
17. In applied research, do the reported themes have the potential to give rise to actionable outcomes?
18. Are there conceptual clashes and confusion in the paper? (e.g. claiming a social constructionist approach while also expressing concern for positivist notions of coding reliability, or claiming a constructionist approach while treating participants’ language as a transparent reflection of their experiences and behaviours)
19. Is there evidence of weak or unconvincing analysis, such as:
    - Too many or too few themes?
    - Too many theme levels?
    - Confusion between codes and themes?
    - Mismatch between data extracts and analytic claims?
    - Too few or too many data extracts?
    - Overlap between themes?
20. Do the authors make problematic statements about the lack of generalisability of their results, and/or implicitly conceptualise generalisability as statistical-probabilistic generalisability (see Smith, 2018)?
References
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.
Smith, B. (2018). Generalizability in qualitative research: Misunderstandings, opportunities and recommendations for the sport and exercise sciences. Qualitative Research in Sport, Exercise and Health, 10(1), 137-149.