VerifAI: Framework for Functional Verification of AI-based Systems in the Maritime Domain

T. Stach¹, P. Koch¹, M. Constapel¹, M. Portier² & H. Schmid²
¹ Fraunhofer Centre for Maritime Logistics and Services CML, Hamburg, Germany
² Federal Maritime and Hydrographic Agency of Germany, Hamburg, Germany

International Journal on Marine Navigation and Safety of Sea Transportation, Volume 18, Number 3, September 2024. DOI: 10.12716/1001.18.03.1. http://www.transnav.eu

ABSTRACT: With the continuous emergence and steady development of new technologies, the way for Maritime Autonomous Surface Ships (MASS) is being paved. However, this manifold of available and imminent technologies challenges regulatory bodies and auditing authorities. Technologies which make use of Artificial Intelligence (AI), in particular Machine Learning (ML), play a special role. On the one hand, they are not covered by current regulations or audit processes; on the other hand, they may represent black boxes whose behaviours are not readily explainable and thus impede audit processes even further. In an upcoming study titled VerifAI, the authors focus on this gap within European and German regulatory bodies and auditing authorities. The technological scope lies on MASS-related products which rely on partially or fully AI-based systems. In the present article, the authors summarize the outlined study. They review the current regulatory status concerning audit processes and the market situation concerning available and imminent (partially) AI-based systems of MASS-related products. To close the gap, a conceptual, integrated framework consisting of a Safety Guideline for the manufacturers and a Verification Guideline for the auditing authorities is presented. The framework aims to give regulatory bodies and auditing authorities an overview of the steps necessary for robust verification of safe products without hindering innovation or requiring in-depth knowledge about the (black box-like) systems. The results are condensed into recommendations for actions, listing the most important results and proposing entry points as well as future research in the field of verifying (partially) AI-based MASS-related products.
1 INTRODUCTION
The maritime sector is characterized by an increasing degree of digitisation and automation. Automation of onboard systems plays an increasingly relevant role in the provision of safe operation, as marked by the growing development of autonomous navigation solutions [1]. These solutions can be beneficial for analysing situations proactively and reacting quickly, leading to an increase in the overall safety of navigation operations. Furthermore, digitisation and automation play an essential role in the development of MASS. The underlying systems are realised through different techniques, ranging from simple rule-based approaches up to more complex ML-based techniques.
In this context, AI-based systems (hereinafter
called systems) are noteworthy due to their promising
capabilities. However, to support the
conceptualization, development and implementation
of these systems and further enable their verification,
processes in the maritime regulatory bodies and
auditing authorities have to be adapted. The challenge in understanding these systems lies in their technical structure, due to which some systems and their
behaviours can be considered black boxes, hence neither transparent nor explainable [2], [3].
The present article summarizes the study VerifAI
which has been carried out by the German Federal
Maritime and Hydrographic Agency and the Fraunhofer Centre for Maritime Logistics and Services CML, i.e. the
authors of the present article. In this study the authors
present the current regulatory status in Europe and,
more specifically, Germany. Available and imminent
AI-based MASS-related products (hereinafter called
products) are investigated as part of a market study. It
is outlined how current audit processes do not cover
these systems, and how the introduction of feasible,
scalable and robust audit processes faces a number of
challenges:
- Generalization of the operational domain of the systems,
- Data quality management in the development and data procurement during audit processes,
- Increasing variety of complex and novel system architectures.
To mitigate this gap, the authors developed a
conceptual and integrated process framework which
consists of a Safety Guideline for the manufacturer
and a Verification Guideline for the auditing
authority. The framework follows a model-agnostic
approach to cover the wide variety of available and
imminent AI-based systems. The focus of the
framework lies in answering the question of “whether” and not “how” a system is functioning.
The present article is structured as follows: In
Chapter 2 related work is presented and compared to
the present article. Subsequently, in Chapter 3 the
current regulatory status and market situation are
summarized and eventually the gap between these
two is demonstrated. A proposal on how to close this
gap is presented in Chapter 4. The proposed
framework can be seen as the novel and main
contribution of this article. Subsequently, in Chapter 5
recommendations for actions for regulatory bodies
and auditing authorities are derived. The article closes
with the conclusion (cf. Chapter 6) and future work
(cf. Chapter 7).
2 RELATED WORK
Progress in the field of MASS audit, more precisely
testing and verification, can be divided into two parts:
firstly, identifying relevant regulatory processes to
audit marine equipment and, secondly, looking at
state-of-the-art techniques on making the behaviour of
such systems auditable.
Research in the regulatory field was constrained to
the European Union (EU) as the scope of the
investigation is a framework with the aim to be
compatible with the existing audit framework in
Europe. The current state of regulations on marine
equipment is mainly defined in the Marine Equipment Directive (MED) [4], outlining relevant
standards applicable to pre-defined types of
equipment. The AI-based systems under consideration are not referred to. Therefore, it is not possible to evaluate the
applicability of current regulations for the audit of
MASS. Future standardisations could be derived from
imminent regulations such as the Artificial
Intelligence Act (AI Act) of the EU [5]. However,
neither a timeline nor a precise scope can be clarified as of now.
Technical research in the field of audit of MASS is
also limited due to the novelty of the underlying
products or systems. Early developments of a
framework can be seen in the work of Rokseth et al.
who describe a methodological approach to assess the
overall safety of an autonomous system in the
maritime domain [6]. This approach can be primarily
applied for the risk assessment of a system but gives
no indication of regulatory conformity or methods to
be considered.
The approach of Ringbom [7] associates regulatory methods with the levels of autonomy as outlined by the
International Maritime Organization (IMO) in [8] and
illustrates the main challenges of the missing
formalization. This perception coincides with the
challenges identified by the authors of the present
article. The focus of the present article is on level 1 and level 4 systems according to the IMO; remotely operated systems are explicitly not evaluated. Finally, Ringbom
seeks to clarify some of the key features and
terminology related to automation in shipping as well
as to illustrate how the different concepts are
interconnected. A proposed framework for
distinguishing the key elements involved in the
regulation of autonomous ships is outlined. The
regulatory challenge is assessed through an
examination of specific legal hurdles and past practice
of the IMO in regulating automation in shipping, with
a particular focus on bridge operations. The work concludes that a
solid regulatory framework for autonomous shipping
operations should be able to deal with variations and
should not be limited to a specified level of manning
or autonomy.
3 STATUS QUO
Both the current regulatory status and the market situation indicate that AI-based systems are not considered yet. More precisely, current regulatory processes do not cover black box-
like systems whose behaviour is not explicitly
explainable.
A prominent example for a black box model is a
neural network. A neural network consists of layers of
nodes where each node represents some form of
function and is highly interconnected with other
nodes. Such a model’s behaviour is neither readily transparent nor explainable, and its audit is not covered by current regulatory processes.
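To make this opacity tangible, the following minimal sketch shows a two-layer feed-forward network in Python; the layer sizes and random stand-in weights are purely illustrative and not taken from any audited product. Every parameter is fully inspectable, yet the reason for a particular output remains opaque.

```python
# Minimal sketch of a feed-forward neural network illustrating the black box
# character: all weights can be inspected, but the mapping from input to
# output is not readily explainable. Sizes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(seed=42)

# Two fully connected layers with random stand-in weights (not trained).
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def forward(x: np.ndarray) -> np.ndarray:
    """Propagate an input vector through both layers."""
    hidden = np.tanh(W1 @ x + b1)    # non-linear hidden activation
    return np.tanh(W2 @ hidden + b2)  # two-dimensional output

# Every parameter is visible, but the reason for a specific output is not.
print(forward(np.array([0.5, -1.0, 0.3, 0.8])))
```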
This gap between the regulatory status and the
market situation is outlined in the following
subchapters.
3.1 Regulatory status
According to the International Convention for the
Safety of Life at Sea (SOLAS) [9], the market
introduction of novel equipment for a ship requires
testing and verification of its manufacturing process,
functionality and operation on board the ship. In
particular, when equipment is approved with the aim
of autonomising processes on seagoing ships,
comprehensive testing is necessary to ensure the
operational safety.
The verification of the safety of marine equipment
in the EU is carried out in accordance with the MED
by means of a conformity assessment procedure by
notified bodies. Notified bodies are institutions
accredited by national authorities which are
mandated to carry out verification procedures. The
process of ensuring the conformity of a product to be
placed on the European market takes place in terms of
its design, construction and performance. The EU
outlines the conformity assessment process with its
possible testing modules and options under the
Marine Equipment Directive [4].
The European AI Act, which is currently being
drafted, will have a significant influence on the
development of AI-based systems. However,
according to Article 2, only Article 84 (evaluation and
review) will apply to safety-critical AI-based systems
that fall within the scope of the MED. In accordance
with Article 78 of the AI Act, the MED shall be amended in order to meet these requirements [5].
3.2 Market situation
As already shown in the review of the regulatory
status, currently, there is no suitable procedure for the
audit of AI-based systems. Therefore, the safety of
these systems is in the hands of their manufacturers. Obviously, manufacturers do not disclose sensitive information about their systems, as this could lead to a competitive disadvantage.
Figure 1. Published patents accessible with the combined search terms “ship” and “autonomous” via Google Patents [10].
This lack of comprehensibility of the systems’ proper functioning is becoming ever more serious as the diversity, technical maturity and number of AI-based systems advance. The
increasing number of autonomous systems brought to
market can be shown by the growth of international
patent applications in the MASS sector. Figure 1
depicts this growth in patent applications published
annually from 2000 to 2022 for the combined search terms “ship” and “autonomous”. The clearly visible
trend may be an indication that the number of AI-
enabled products entering the market each year will
continue to increase. With reference to the lack of
regulatory procedures or auditing processes identified
in Chapter 3.1, there is a need to establish appropriate
testing and certification processes.
To gain an understanding of the product or system
types which are not covered by existing audit
processes, a market study was carried out within the
VerifAI study with a focus on available systems or
those close to market readiness. In total 18 systems
were identified and subsequently categorized
according to their field of application and sensors
used as data sources. The resulting tabular overview
can be found in the upcoming full-text study. The
results from the market study show that frequently
used sensors like Radar [11] and Automatic
Identification System (AIS) [12] do follow existing
information exchange standards. By contrast, every identified system relied on camera systems covering the visible red, green, blue (RGB) range, despite the absence of applicable standards. In particular, the use of camera systems, be
it RGB, infrared or other ranges, and subsequent AI-
based processing poses hurdles due to a lack of
standards in the maritime context. Consequently,
audit processes for such AI-based systems cannot be
standardised and the audit takes additional effort
compared to systems with standardised information
exchanges.
4 CONCEPTUAL FRAMEWORK
In this chapter, our framework is presented, which aims at closing the gap between the illustrated regulatory status and market situation. The proposed approach consists of two guidelines:
- a Safety Guideline for the manufacturer,
- a Verification Guideline for the auditing authority.
Figure 2. Concept of the Safety Guideline based on three
process stages: Formalisation, Regulations and Data & Model.
The integration of both guidelines ensures that the AI-based system meets audit-facilitating requirements throughout its life cycle. Thus, the two goals of the Safety Guideline (cf. Figure 2) are: 1) ensuring the auditability of the (black box-like) AI-based system and 2) developing a sufficiently safe system with a chance of successful verification. It is
recommended that the manufacturer takes the Safety
Guideline into account early in the life cycle, i.e. in the
concept phase.
The Verification Guideline (cf. Figure 3) is directed
at the auditing authority. It follows two goals: 1) the audit of proper functioning in terms of information technology and safety and 2) a robust certification process. The manufacturer prepares the audit by providing the following content to the auditing authority:
- the AI-based system, modularised into (AI-based) components,
- a functional description of each component,
- an input and output description of each component.
This content is provided by the manufacturer as part of the audit. Both technical descriptions are based on the concepts of Input-Process-Output (IPO) patterns and a well-defined Operational Design Domain (ODD) (cf. Chapters 4.1 and 4.2).
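As an illustration, such a component description could be delivered in machine-readable form roughly as sketched below; all field names and values are hypothetical assumptions, not a schema prescribed by the study.

```python
# Minimal sketch of a machine-readable component description combining an
# IPO pattern with an ODD, as a manufacturer could deliver it for the audit.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ValueRange:
    name: str       # e.g. "bearing"
    unit: str       # e.g. "degrees"
    minimum: float
    maximum: float

@dataclass
class ComponentDescription:
    component: str                      # e.g. "object_classifier"
    function: str                       # plain-text functional description
    inputs: list[ValueRange] = field(default_factory=list)
    outputs: list[ValueRange] = field(default_factory=list)
    odd_boundaries: dict[str, str] = field(default_factory=dict)

description = ComponentDescription(
    component="object_classifier",
    function="Classifies radar tracks into vessel / non-vessel.",
    inputs=[ValueRange("bearing", "degrees", 0.0, 360.0)],
    outputs=[ValueRange("vessel_probability", "dimensionless", 0.0, 1.0)],
    odd_boundaries={"sea_state": "0-6", "visibility": ">= 1 nm"},
)
```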
4.1 Concept of a Verification Guideline
The first process stage Preliminary Audit consists of
four process steps. Initially, it must be verified that the present system conforms to the criteria of being an AI-based system. This implies that, on the one hand, there is a clear definition of AI in the audit context and, on the other hand, that at least one component of the system conforms to this definition. Next, the auditing authority has to make sure that the present system is sufficiently modularised into components for the audit. Based on IPO patterns, this is the case when the descriptions of the system components’ behaviours cover all inputs and associate them with their corresponding outputs. As a result, it is clear to the
auditing authority which system components must be
audited and how each of them functions according to
the IPO pattern. For this purpose, concepts as
suggested by Burmeister et al. can be adduced [13].
Subsequently, the ODD of each system component is
checked for completeness. It is considered complete
when the ODD of each component clearly defines its
boundaries, the range of input and output values and
which input values have been applied during the
development. Even though aimed at the automotive industry, a conceivable framework for the formalization of ODDs is presented by Gyllenhammar
et al. [14]. This framework is extended by Rødseth et
al. specifically towards autonomous ship systems [15].
As a result, for each component, it must be clear
which output is expected for which input. In the last
process step of the Preliminary Audit stage, the
provided audit metrics and success criteria are
checked. The provided audit metrics must enable the
auditing authority to measure the functioning of each
component based on how the output values meet the
expectations given for test input values.
Corresponding success criteria enable the evaluation of success by indicating how closely the output values meet the actual expectations. A component is
functioning properly when it complies with the
success criteria. As mentioned before the proposed
framework follows a model-agnostic approach.
Applied to the Preliminary Audit this means, that the
auditing authority must be enabled to answer
whether” a present system is functioning properly
and not howit internally does.
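A minimal sketch of such an ODD completeness check follows, assuming an illustrative dictionary representation of a component’s ODD; the required keys are our assumption, not a normative list.

```python
# Minimal sketch of the ODD completeness check in the Preliminary Audit: an
# ODD is treated as complete when boundaries, input/output ranges and the
# development input ranges are all declared. Keys are illustrative only.
REQUIRED_KEYS = {"boundaries", "input_ranges", "output_ranges",
                 "development_inputs"}

def odd_is_complete(odd: dict) -> bool:
    """True when every required ODD element is declared and non-empty."""
    return all(odd.get(key) for key in REQUIRED_KEYS)

odd = {
    "boundaries": {"sea_state": "0-6"},
    "input_ranges": {"bearing": (0.0, 360.0)},
    "output_ranges": {"vessel_probability": (0.0, 1.0)},
    "development_inputs": {"bearing": (0.0, 360.0)},
}
assert odd_is_complete(odd)
```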
Figure 3. Concept of the Verification Guideline based on
three process stages: Preliminary Audit, Main Audit and Re-
Audit.
When the audit framework has been checked successfully in the Preliminary Audit, the Main Audit
follows (cf. Figure 3). In the first process step, the
auditing authority has to make sure that the AI-based
system complies with applicable regulations, for example acts, such as the forthcoming AI Act [5], or
norms, such as the National Marine Electronics
Association (NMEA) 0183 standard [16]. In the
subsequent process step, the auditing authority
procures test data for the audit. Based on the technical description of the input values delivered by the manufacturer (cf. Chapter 4), the auditing authority should be able to procure appropriate data. Data procurement can be based on
the acquisition or recording of real data, augmenting
existing data or generating synthetic data. It is
important to ensure that no test data is used which has already been used by the manufacturer during the development, i.e. training, of the model. Otherwise, the results will be distorted in favour of the AI-based system. A conceivable solution to this problem may be situational synthetic data generation [17], [18], [19]. Such approaches are also being progressively developed for image data, as known to the broad public through DALL-E 2 [20] and Stable Diffusion [21].
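One conceivable way for the auditing authority to enforce this separation, assuming the manufacturer discloses cryptographic fingerprints of the training samples instead of the raw data, is sketched below; note that hashing only detects exact duplicates, not near-duplicates.

```python
# Minimal sketch of verifying that procured test samples are disjoint from
# the manufacturer's training data, assuming disclosed SHA-256 fingerprints
# of the training samples. File layout and names are hypothetical.
import hashlib
from pathlib import Path

def fingerprint(sample: bytes) -> str:
    """Stable fingerprint of a raw data sample."""
    return hashlib.sha256(sample).hexdigest()

def disjoint(test_dir: Path, training_fingerprints: set[str]) -> bool:
    """True when no test sample was already seen during training."""
    return all(
        fingerprint(p.read_bytes()) not in training_fingerprints
        for p in sorted(test_dir.glob("*.png"))
    )
```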
When an appropriate dataset is available, the actual audit process follows. Now,
data can be applied to the AI-based system
components. Based on the technical description of the functioning of the system components (cf. Chapter 4), the auditing authority can measure and evaluate the
system behaviour by applying corresponding audit
metrics and success criteria. When this audit is
passed, the last process step of the Main Audit stage
follows. As the concluding step, the auditing authority must define criteria which determine the necessity of a re-audit. Typical criteria can be intrinsically motivated,
e.g. because of a software update of a system
component, or extrinsically motivated, e.g. due to
environmental changes in the ODD of a system
component.
Once the Main Audit stage is passed, the testing and verification of the present AI-based system are finished. However, as defined in the
necessity criteria, a re-audit can be prompted on
event- or time interval-basis. If so, the scope of this re-
audit must be defined. The re-audit can be narrowed
to specific components of a system, reducing the need
to re-evaluate the whole system and only re-certify
changed components.
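The re-audit triggers could, for example, be evaluated automatically; the following sketch assumes hypothetical version and ODD fingerprint fields and is not part of the proposed guideline itself.

```python
# Minimal sketch of evaluating the re-audit necessity criteria: intrinsic
# triggers (e.g. a software update of a component) and extrinsic triggers
# (e.g. a change in a component's ODD). Field names are illustrative.
def reaudit_required(component: dict) -> bool:
    intrinsic = component["version"] != component["certified_version"]
    extrinsic = component["odd_hash"] != component["certified_odd_hash"]
    return intrinsic or extrinsic

component = {
    "version": "1.3.0", "certified_version": "1.2.1",
    "odd_hash": "a1b2", "certified_odd_hash": "a1b2",
}
print(reaudit_required(component))  # True: software update triggers re-audit
```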
4.2 Concept of a Safety Guideline
The first process stage Formalisation consists of three
process steps. In the beginning, the manufacturer
must make sure that the present AI-based system is
sufficiently modularised. This prerequisite is
explained in Chapter 4.1. The manufacturer can lay
the foundation of the modularisation early in the life
cycle of the system. The advantage is that the audit can be performed module-wise; thus, a subsequent improvement does not necessarily affect the entire system but only specific modules. Then, each
module must be given a dedicated ODD. Methods for formalizing an ODD by means of the operational envelope are mentioned in [15]; techniques described in [14] can also be utilized to create a uniform domain description. In the final step of the process stage
Formalisation, the manufacturer specifies audit metrics
and success criteria by which the functioning of each
module is measured and evaluated in the audit. The
methodology of a metric for a module strongly
depends on the purpose of the given module.
Examples, such as the confusion matrix in case of
classification problems, are given in [22]. The success
criteria on how well the modules perform are
expressed in terms of the given metrics. By applying metrics and criteria which are commonly used or required by standards, the manufacturer facilitates the comparability of the present system.
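For a classification module, such an audit metric and success criterion could look as follows; the confusion matrix is the metric mentioned above, while the accuracy threshold of 0.95 is an illustrative assumption rather than a required value.

```python
# Minimal sketch of an audit metric (confusion matrix) and a success
# criterion (minimum accuracy) for a classification module.
import numpy as np

def confusion_matrix(y_true: np.ndarray, y_pred: np.ndarray, n: int) -> np.ndarray:
    """Rows: true class, columns: predicted class."""
    cm = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def passes(cm: np.ndarray, min_accuracy: float = 0.95) -> bool:
    """Success criterion: overall accuracy derived from the metric."""
    return np.trace(cm) / cm.sum() >= min_accuracy

cm = confusion_matrix(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0]), n=2)
print(passes(cm))  # False: 3 of 4 correct -> 0.75 accuracy
```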
In the subsequent process stage Regulations the
manufacturer is advised to examine whether its AI-based system is affected by prevailing or imminent
regulations. These can be standards such as the
NMEA 0183 standard [16], acts like the imminent
AI Act [5] or other regulatory decisions.
In the last process stage Data & Model the
manufacturer provides a description of the used data
and the functioning of the present system. This shall benefit a well-performing system and thus a positive verification. Data quality is the first aspect since it plays a significant role, reflecting a model’s potential experience and knowledge through the range of scenarios on which it is developed and tested. The variety of manifestations of poor data quality in general, and in ML-based systems specifically, as well as its assessment, is described in [23]. Subsequently,
the applied dataset must be described module-wise
by the manufacturer. This is an obligatory step in the
preparation for the audit. Based on the dataset
description, the auditing authority should be able to
procure suitable test data. Therefore, the dataset
description must, on the one hand, describe the comprised data, e.g. its values and value ranges, and
on the other hand, the statistics of the dataset, e.g. the
distribution of certain values applying descriptive
statistics [24]. In case of publicly available or
acquirable datasets, the manufacturer must report this
additionally so that the auditing authority is aware of
which datasets not to use. Finally, the last process step, IPO pattern, another obligatory step in preparation for the audit, takes place. The
manufacturer needs to describe the expected
functioning of each module from the AI-based system.
To do so the manufacturer shall make use of
descriptions based on IPO patterns. More precisely,
the functioning of each module is described by
specifying output values to their input values.
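A module-wise dataset description combining value ranges with descriptive statistics could, for instance, be produced as sketched below; the column names and values are hypothetical.

```python
# Minimal sketch of a module-wise dataset description combining value ranges
# with descriptive statistics, as the auditing authority would need it to
# procure comparable test data. Columns and values are illustrative only.
import pandas as pd

dataset = pd.DataFrame({
    "bearing_deg": [12.0, 87.5, 201.3, 330.8],
    "range_nm": [0.4, 2.1, 5.7, 9.9],
    "label": ["vessel", "vessel", "buoy", "vessel"],
})

# Value ranges and distribution of the numeric columns ...
print(dataset.describe())
# ... and the distribution of the categorical values.
print(dataset["label"].value_counts(normalize=True))
```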
In the subsequent chapter, recommendations for
actions are derived from the proposed framework.
These recommendations aim at facilitating the
implementation of the process steps proposed in this
framework.
5 RECOMMENDATIONS FOR ACTIONS
The following recommendations for actions were derived during the development of the framework for functional verification, as outlined in the full-text study VerifAI. These recommendations serve as a basis for the collaboration between regulatory bodies, for standardising the audit procedures for MASS and for identifying future work.
5.1 Location of testing and verification processes in a separate Module K
In consideration of the existing publications of the
European Commission as well as the procedures in
the MED, it is recommended to introduce a separate
module into the existing testing and verification
processes. The testing and verification processes for AI-based systems presented in this article should be located in this separate audit module. This allows the
proposed processes (cf. Chapter 4) to be integrated
without having to adapt existing modules.
5.2 Standardisation of the exchange of information in
AI-based systems
It is recommended to promote the standardisation of
information exchange and data sources for the testing
and verification of AI-based systems. Standards can
significantly simplify and scale the testing and
verification processes. Exemplary standard
procedures can be seen in [25].
5.3 Introduction of a model-agnostic testing process
In order to be able to test the manifold of AI-based
systems in a uniform manner and to ensure the future
viability of the testing process, the establishment of a
model-agnostic testing process is suggested (cf.
Chapter 4.1). The focus of the test should be to determine “whether” and not “how” an AI-based system functions properly. This approach enables the
feasibility, scalability and comparability of the audit
processes.
5.4 Formalization of the operational design domains of AI-
based systems
To enable uniform testing, it is advised to rely on a
standardised formalization of both the description of
the operation domain as well as of the measurement
and evaluation of functional performance (cf. [14],
[15], [13]).
5.5 Development of an automated data processing
infrastructure
The technical realisation of the testing processes
should be based on an automatable data processing
infrastructure to ensure scalability and
reproducibility. For the data procurement process, it
is fundamental to rely on standardised operation
domain descriptions of present systems (cf. [14], [15],
[13]). Notably, the use of synthetic or augmented data
is a promising way to independently obtain the
necessary test data at any time without building up
long-term data dumps (cf. [17], [18], [19], [20], [21]). A
crucial advantage in using synthetic (or augmented)
test data is the generation of novel test data which
was not used by the manufacturer before.
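A minimal sketch of such reproducible, on-demand test-data generation follows, assuming a seeded augmentation of a recorded frame; shapes and noise level are illustrative.

```python
# Minimal sketch of deterministic test-data augmentation inside an automated
# pipeline: a seeded generator yields reproducible variants of a base image,
# so test data can be regenerated on demand instead of being stored in
# long-term data dumps. Shapes and noise levels are illustrative only.
import numpy as np

def augment(image: np.ndarray, seed: int) -> np.ndarray:
    """Reproducible augmentation: horizontal flip plus Gaussian sensor noise."""
    rng = np.random.default_rng(seed)
    flipped = image[:, ::-1]
    noisy = flipped + rng.normal(scale=0.05, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

base = np.zeros((64, 64))        # stand-in for a recorded camera frame
variant = augment(base, seed=7)  # same seed -> identical test sample
```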
6 CONCLUSION
Current regulatory procedures are inadequate for assessing maritime AI-based systems, and therefore MASS, as shown in Chapter 3. New processes have to allow systems with a wide variety of architectures to be tested, verified and brought to market in a safe manner. It is therefore necessary to introduce concepts which can be implemented in parallel to existing procedures and measures without impeding innovation or safety. The authors, therefore, propose
the introduction of a new Module in the framework of
the MED labelled Module K consisting of guidelines
for the manufacturer of an AI-based system and the
regulating body responsible for verifying, testing and
approving such a system. The guidelines include
steps which should be performed to address concerns
arising from bringing these systems on the market
whilst keeping the amount of required in-depth
knowledge about their internal functions to a
minimum, essentially allowing for a black box testing
procedure. The proposed methods are a basic outline
of how such a methodology could be implemented to
allow the verification of MASS. These methods can
serve as a guideline to specify future research and
narrow down the fields which must be investigated
further.
7 FUTURE WORK
Despite the given possibilities for modelling complex
dynamics and correlations with the help of large
amounts of data, the application of AI with ML
methods, especially through deep learning, is
problematic. The quality and reliability of the
decision-making processes and consequent results of
given models are directly dependent on the selection
of the algorithms and quality of datasets.
Furthermore, the range of available datasets for
testing the models is severely limited, making it
difficult to generalise and solve a problem using ML
methods. One approach to addressing this issue is to
establish methods and processes in the development
phase of safety-critical applications to maintain safety
and robustness after deployment. Processes and
methods from other areas, e.g. for computer vision
applications, could be adapted by transferring
findings to the maritime domain. Another important
aspect is how to define and justify methods, processes
and requirements for datasets and their procurement,
since they are crucial for the development of robust
systems based on AI, more specifically ML.
ACKNOWLEDGEMENT
The study which is summarized in this paper was carried
out within the experts network “Wissen - Können - Handeln” of the Federal Ministry of Digital Affairs and Transport (BMDV), Germany, and funded by the BMDV under grant number 0800Z12-1114/002/1061.
REFERENCES
[1] S. K. Brooks and N. Greenberg, “Mental Health and
Psychological Wellbeing of Maritime Personnel: A
Systematic Review,” BMC Psychology, vol. 10, no. 1, pp.
1–26, 2022.
[2] C. Berghoff, B. Biggio, E. Brummel, V. Danos, T. Doms,
H. Ehrich, T. Gantevoort, B. Hammer, J. Iden, S. Jacob,
H. Khlaaf, L. Komrowski, R. Kröwing, J. H. Metzen, M.
Neu, F. Petsch, M. Poretschkin, W. Samek, H. Schäbe, A.
V. Twickel, M. Vechev, T. Wiegand, W. Samek, and M.
Fliehe, “Towards Auditable AI Systems,” Whitepaper,
2021.
[3] W. Samek and K.-R. Müller, “Towards Explainable Artificial Intelligence,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, vol. 11700 LNCS, pp. 5–22.
[4] Europäisches Parlament und Rat der Europäischen
Union, “Richtlinie 2014/90/EU des europäischen
Parlaments und des Rates vom 23. Juli 2014 über
Schiffsausrüstung und zur Aufhebung der Richtlinie
96/98/EG des Rates (2014/90/EU),” pp. 146–185, 2014.
[5] Europäische Kommission, “Vorschlag für eine Verordnung des Europäischen Parlaments und des Rates zur Festlegung harmonisierter Vorschriften für künstliche Intelligenz
(Gesetz über künstliche Intelligenz) und zur Änderung
bestimmter Rechtsakte der Union,” 2021.
[6] B. Rokseth, O. I. Haugen, and I. B. Utne, “Safety
Verification for Autonomous Ships,” MATEC Web of
Conferences, vol. 273, 2019.
[7] H. Ringbom, “Regulating Autonomous Ships – Concepts, Challenges and Precedents,” Ocean Development & International Law, vol. 50, no. 2-3, pp. 141–169, 2019.
[8] IMO, “Maritime safety committee (MSC 105),”
https://www.imo.org/en/MediaCentre/MeetingSummari
es/Pages/MSC-105th-session.aspx, 2022.
[9] International Maritime Organization, “International
Convention for the Safety of Life at Sea,” 1974.
[10] Google Patents on (’Autonomous’ AND ’Ship’), https://patents.google.com/.
[11] IMO, “Resolution MSC.192(79), Adoption of the
Revised Performance Standards for Radar Equipment,”
International Maritime Organization, Tech. Rep., 2004.
[12] ——, “Resolution A.1106(29), Revised Guidelines for
the Onboard Operational Use of Shipborne Automatic
Identification Systems (AIS),” International Maritime
Organization, Tech. Rep., 2015.
[13] H.-C. Burmeister, M. Constapel, C. Ugé, and C. Jahn,
“From Sensors to MASS: Digital Representation of the
Perceived Environment Enabling Ship Navigation,” ser.
IOP Conference Series: Materials Science and
Engineering, vol. 929. IOP Publishing, 2020.
[14] M. Gyllenhammar, R. Johansson, F. Warg, D. Chen, H.-M. Heyn, M. Sanfridson, J. Söderberg, A. Thorsén, and S. Ursing, “Towards an Operational Design Domain that Supports the Safety Argumentation of an Automated Driving System,” 10th European Congress on Embedded Real Time Systems, pp. 1–10, 2020.
[15] Ø. J. Rødseth, L. A. L. Wennersberg, and H. Nordahl,
“Towards Approval of Autonomous Ship Systems by
Their Operational Envelope,” Journal of Marine Science
and Technology, vol. 27, no. 1, pp. 67–76, 2022.
[16] DIN, “DIN EN 61162-1:2011-09 Navigations- und
Funkkommunikationsgeräte und -systeme für die
Seeschifffahrt - Digitale Schnittstellen - Teil 1: Ein
Datensender und mehrere Datenempfänger,” DIN
Deutsches Institut für Normung e. V., Tech. Rep., 2011.
[17] M. Korakakis, P. Mylonas, and E. Spyrou, “A Short
Survey on Modern Virtual Environments That Utilize AI
and Synthetic Data,” ser. MCIS 2018 Proceedings, 2018,
p. 34.
[18] S. I. Nikolenko, Synthetic Data for Deep Learning.
Springer International Publishing, 2021, vol. 174.
[19] A. Tsirikoglou, J. Kronander, M. Wrenninge, and J.
Unger, “Procedural Modeling and Physically Based
Rendering for Synthetic Data Generation in Automotive
Applications,” 2017.
[20] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M.
Chen, “Hierarchical text-conditional image generation
with clip latents,” ArXiv, vol. abs/2204.06125, 2022.
[21] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B.
Ommer, “High-resolution image synthesis with latent
diffusion models,” in Proceedings of the IEEE
Conference on Computer Vision and Pattern
Recognition (CVPR), 2022.
[22] M. Z. Naser and A. H. Alavi, “Error Metrics and
Performance Fitness Indicators for Artificial Intelligence
and Machine Learning in Engineering and Sciences,” Architecture, Structures and Construction, 2021.
[23] V. N. Gudivada, J. Ding, and A. Apon, “Data Quality
Considerations for Big Data and Machine Learning:
Going Beyond Data Cleaning and Transformations,”
International Journal on Advances in Software, vol. 10.1,
pp. 1–20, 2017.
[24] A. Navlani, A. Fandango, and I. Idris, Python Data
Analysis: Perform Data Collection, Data Processing,
Wrangling, Visualization, and Model Building Using
Python, third edition ed. Birmingham: Packt Publishing,
2021.
[25] DIN, “DIN EN ISO/IEC 23053 Framework for Artificial
Intelligence (AI) Systems Using Machine Learning
(ML),” DIN Deutsches Institut für Normung e. V., Tech.
Rep., 2023.