technical skills and contribute to safe and efficient
task performance. From another perspective, in the
context of vocational education and training (VET)
there is a focus on the development of transversal
competences (Ceitil 2016) or transferable generic
competences (Deist & Winterton 2005) that share
many of the characteristics of NTS. In the Maritime
Education and Training (MET) domain, these
concepts represent an important complementary
approach to the NTS framework. Notwithstanding
the consolidation of Bridge Resource Management
(BRM) courses in MET programs, focused on the
development of human behavioral and non-technical
skills, the effectiveness of such skills and training for
safe maritime navigation still needs better validation
(Barnett et al. 2006, p.9; Oltedal & Lützhöft 2018, p.86;
Salas et al. 2006, p.410; O’Connor 2011, p.372). Even so,
the association of NTS with safe and efficient
performance is widely discussed in the human-factors
literature (Grech et al. 2008; Oltedal & Lützhöft 2018;
Hetherington et al. 2006). On the other hand, the
reduction of navigation risks does not rest on bridge
team performance alone, since other organizational
issues must also be tackled (Manuel 2011, p.34;
Hetherington et al. 2006).
It is also relevant to note that technical and
non-technical skills are inextricably intertwined and
cannot be meaningfully separated (Flin et al. 2008;
Barnett et al. 2006, p.5). Fjeld, Tvedt and Oltedal (2018)
reviewed how NTS have been applied in the
ship-bridge domain. After analyzing nineteen studies, they
identified five NTS: situation awareness (SA),
decision-making (DM), workload management,
communication, and leadership. However, they
suggest that bridge officers’ NTS are not sufficiently
explored, calling for a detailed taxonomy and a better
understanding of the interconnections between
cognitive and interpersonal skills.
1.2 Behavioral markers
How can we verify that a given individual has the
required skills? Considering that competencies are, in
the first instance, behaviors (Ceitil 2016), classifying a
competency requires a set of indicators, or behavioral
markers. These indicators are observable
non-technical behaviors, of teams or individuals, that
contribute to superior or inferior performance within
a given working domain (Flin & Martin 2001;
Klampfer et al. 2001, p.10). Klampfer et al. (2001)
suggested essential characteristics of good markers:
only behaviors operationalized through observable
indicators should be the target of evaluation, and
markers should have a causal relationship to the
performance outcome, be described in
domain-specific language, use simple phraseology,
and describe clear concepts. Ceitil (2016) also raises
the question of standardizing evaluation to ensure its
objectivity, implying that each competence should
have more than one verification element or indicator.
Formal assessment using behavioral rating systems
started with assessing the effectiveness of Crew
Resource Management (CRM) training for flight deck
crews, and by the end of the 1990s such systems had
spread across several domains (Flin et al. 2008).
Apart from the prototype behavioral marker system
for naval officers’ NTS designed by O’Connor and
Long (2011), and the behavioral markers for naval
cadets’ simulator training proposed by Conceição et
al. (2017), few developments firmly employ a marking
scheme within the BRM framework (Fjeld et al. 2018;
Conceição et al. 2017).
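To make the marker criteria above concrete, the sketch below illustrates one possible structure for a marker-scheme entry: a competence element carrying several observable indicators and a simple rating. This is a minimal illustration only; the class name, indicator wording, and the toy rating rule are invented for this sketch and are not taken from any published BRM marking scheme, where ratings rest on trained-rater judgement rather than arithmetic.

from dataclasses import dataclass

@dataclass
class MarkerElement:
    # One element of a hypothetical behavioral-marker scheme, reflecting
    # the criteria above: observable indicators, domain-specific language,
    # and more than one indicator per competence element.
    competence: str
    element: str
    good_indicators: list   # behaviors associated with superior performance
    poor_indicators: list   # behaviors associated with inferior performance

# Illustrative entry (wording invented for this example).
sa_monitoring = MarkerElement(
    competence="Situation awareness",
    element="Monitoring traffic and environment",
    good_indicators=[
        "Cross-checks the radar/ARPA picture against visual lookout",
        "Reports relevant contacts to the bridge team",
    ],
    poor_indicators=[
        "Fixates on a single information source",
        "Does not communicate changes in the traffic situation",
    ],
)

def rate(observed_good: int, observed_poor: int) -> int:
    # Toy 1-5 scale: observed good indicators raise the score,
    # observed poor indicators lower it; clamped to the scale bounds.
    return max(1, min(5, 3 + observed_good - observed_poor))

print(rate(observed_good=2, observed_poor=1))  # -> 4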
1.3 Training in simulators
Barnett et al. (2002) consider simulation a tool for
solving problems associated with risk and crisis
management, as well as for optimizing the navigation
team’s resources. Elashkar (2016) claims that 58% of
the skills associated with resource management on a
ship’s bridge could be improved through simulation
and simulator training. However, several issues need
to be addressed, such as the extent of skill transfer
from the training environment to the working
domain, the effective assessment of NTS, their
association with safe performance, and the design of
the simulator training program (Ward, Hancock, &
Williams in Ericsson et al. 2006; Pekcan et al. 2005).
Simulators are designed to reproduce parts of a real
situation, allowing their users to practice and
demonstrate skills in a controlled environment while
ensuring integration into the physical context of the
task (Hontvedt 2015, p.6).
Studies indicate that an individual’s performance in a
simulation context is a viable predictor of the same
individual’s performance in a real context (Mjelde et
al. 2016). However, Sellberg (2016) adds that, despite
the recognized capabilities of simulators in the
learning process, the organization and conduct of the
training process is more important than the
capabilities of the simulator itself. Developing and
establishing adequate training models that enable and
optimize the use of simulators is therefore
fundamental to effective training (Sellberg 2016).
From an educational perspective, using a
simulator entails teaching technical skills, developing
coordination and teamwork, and evaluating
individual and team performances (Hontvedt 2015,
p.5). Therefore, the simulator should be properly
adapted to the educational context, i.e. the level of
realism of the simulator must be weighed according
to the training objectives, since being too close to
reality can prevent the identification and/or
evaluation of a specific component. According to
Sellberg (2017), a
higher degree of realism requires more structured
training, enabling a close connection between training
goals and the particularities of the individuals'
performance during the sessions.
Implementing a set of clear and coherent evaluation
criteria that allow the quantification of a subject’s
performance, covering the whole range of solutions
that can be adopted to solve a problem, is a serious
challenge. In this sense, Sampson et al. (2011) alert to
the problem that maritime navigation instructors
have little knowledge of, and reveal great uncertainty
about, the assessment of skills in simulated
environments. Salas et al. (2002) had already
discussed the misperception that subject matter
experts alone should drive the design of training,
suggesting that they should instead work in
collaboration with teaching/learning experts.
Elashkar (2016) proposes that evaluation in
simulators should comprise the following elements: