concerns an issue closely tied to gaining and maintaining situation awareness (Conceição et al., 2017). The examiner then uses the starboard radar display to point out the incorrect position, followed by the preferred position of the student's vessel (lines 14‐19). The student responds to this with a deep sigh and comments that he "messed up" and is "tryin' to salvage the situation" (lines 20‐21), thereby agreeing with the examiner's problem definition, "not thinking ahead", in line 13 and the suggested correct alternative in lines 16‐19. The examiner closes the intervention (line 22) and, as he leaves the room, the student gasps, revealing his frustration with the current situation (line 23). The student is, however, able to correct the situation, continues the crossing according to protocol, and passes the exam.
However, it is worth noting that the students in both episodes passed the examination. Of the eight students who were corrected on their performance after the examiner identified some kind of trouble with their TSS crossing during the competence test, six passed the examination and two failed.
5 CONCLUSION AND DISCUSSION
In this study we have explored authentic instances of examiner‐student interactions during simulator‐based competence tests. While previous research has argued that the instructors' continuous and ongoing process of monitoring, assessing, and correcting students during training is essential for fostering students into the maritime work practice, findings from the current study show that this kind of instructive work also takes place during individual certifications in the simulator environment. The examiners' interventions during competence tests in the simulator found in the data corpus were organized as brief corrections with clear directives for improvement. In that sense, they differ from instructions during training, which are oriented towards developing students' professional reasoning (e.g. Sellberg & Lundin, 2017; Sellberg & Lundin, 2018). However, the fact that students need and receive instructional support during competence tests in the simulator suggests that students are still developing their professional competence at this point in training. Hence, there are reasons to question the practice of assessing students who are only halfway through their education for the purpose of awarding professional certificates.
Although this analysis is based on a small sample of video-recorded data, the data corpus offers a complex and interesting starting point for analyzing the existing assessment practices in MET. In this data, preliminary findings show that corrections during competence tests are regularly made, but not all students are provided with this kind of support. In this regard, findings in the empirical data raise critical and important questions about what it means to produce as "fair and objective" an assessment as possible (Flin et al., 2003, p. 109). While the examiners work systematically as instructors to support students' learning throughout the course, the overall goal of the competence test is to conduct consistent, unbiased, and transparent assessments (Øvergård et al., 2017). The development of assessment tools that support examiners' work is one possible part of the solution, as proposed by Øvergård et al. (2017). Another part of the solution is to develop the examiners' knowledge of how to conduct valid and reliable assessments of performance in the simulator. In regard to this challenge, there are reasons to be careful before putting different assessment models for rating non‐technical skills to use in MET (cf. Conceição et al., 2017). As pointed out in the background, results from aviation reveal a number of problems when using these models as grounds for making assessments (e.g. Mavin & Roth, 2014). Hence, there is a need for future studies that analyze the current assessment practices to identify areas of improvement, and develop a practice where simulator‐based assessments of competence ensure the validity and reliability of MET certificates.
ACKNOWLEDGEMENTS
This research is funded by FORTE (Swedish Research Council for Health, Working Life and Welfare), project no. 2018‐01198.
REFERENCES
Conceição, V. P., Basso, J. C., Lopes, C. F., & Dahlman, J. (2017). Development of a behavioural marker system for rating cadet's non‐technical skills. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation. doi:10.12716/1001.11.02.07
Emad, G., & Roth, W. M. (2008). Contradictions in the practices of training for and assessment of competency: A case study from the maritime domain. Education + Training. doi:10.1108/00400910810874026
Flin, R., O'Connor, P., & Crichton, M. (2008). Safety at the sharp end: A guide to non‐technical skills. Aldershot, England: Ashgate.
Flin, R., Martin, L., Goeters, K‐M., Hörmann, H‐J., Amalberti, R., Valot, C., & Nijhuis, H. (2003). Development of the NOTECHS (non‐technical skills) system for assessing pilots' CRM skills. Human Factors and Aerospace Safety, 3(2), 97‐119.
Gekara, V. O., Bloor, M., & Sampson, H. (2011). Computer‐based assessment in safety‐critical industries: The case of shipping. Journal of Vocational Education & Training. doi:10.1080/13636820.2010.536850
Ghosh, S., Bowles, M., Ranmuthugala, D., & Brooks, B. (2014). Reviewing seafarer assessment methods to determine the need for authentic assessment. Australian Journal of Maritime & Ocean Affairs. doi:10.1080/18366503.2014.888133
Heath, C., Hindmarsh, J., & Luff, P. (2010). Video in qualitative research: Analysing social interaction in everyday life. London: SAGE Publications Ltd.
Hontvedt, M. (2015). Professional vision in simulated environments: Examining professional maritime pilots' performance of work tasks in a full‐mission ship simulator. Learning, Culture and Social Interaction. doi:10.1016/j.lcsi.2015.07.003
Mavin, T., & Roth, W‐M. (2014). A holistic view of cockpit performance: An analysis of the assessment discourse of flight examiners. International Journal of Aviation Psychology. doi:10.1080/10508414.2014.918434
Roth, W‐M. (2015). Flight examiners' methods of ascertaining pilot proficiency. The International Journal of Aviation Psychology. doi:10.1080/10508414.2015.1162642