Internet Research Task Force (IRTF)                             I. Kunze
Request for Comments: 9817                                     K. Wehrle
Category: Informational                                      RWTH Aachen
ISSN: 2070-1721                                               D. Trossen
                                                            DaPaDOT Tech
                                                          M-J. Montpetit
                                                               SLICES-RI
                                                               X. de Foy
                                        InterDigital Communications, LLC
                                                              D. Griffin
                                                                  M. Rio
                                                                     UCL
                                                               July 2025


                   Use Cases for In-Network Computing

Abstract

skipping to change at line 99

   11. Informative References
   Acknowledgements
   Authors' Addresses

1. Introduction

The Internet was designed as a best-effort packet network, forwarding
packets from source to destination with limited guarantees regarding
their timely and successful reception. Data manipulation,
computation, and more complex protocol functionalities are generally
provided by the end hosts, while network nodes are commonly kept
simple and only offer a "store and forward" packet facility. This
simplicity of purpose of the network has proven suitable for a wide
variety of applications and has facilitated the rapid growth of the
Internet. However, introducing middleboxes with specialized
functionality for enhancing performance has often led to problems due
to their inflexibility.

However, with the rise of new services, some of which are described
in this document, there is a growing number of application domains
that require more than best-effort forwarding, including strict

skipping to change at line 159

is another objective of the use case descriptions. To achieve this,
the following taxonomy is proposed to describe each of the use cases:

Description: A high-level presentation of the purpose of the use
   case and a short explanation of the use case behavior.

Characterization: An explanation of the services that are being
   utilized and realized as well as the semantics of interactions in
   the use case.

Existing solutions: A description of current methods that may
   realize the use case (if they exist), though not claiming to
   exhaustively review the landscape of solutions.

Opportunities: An outline of how COIN capabilities may support or
   improve on the use case in terms of performance and other metrics.

Research questions: Essential questions that are suitable for
   guiding research to achieve the identified opportunities. The
   research questions also capture immediate capabilities for any
   COIN solution addressing the particular use case whose development

skipping to change at line 185

   for any COIN solution addressing the particular use case; we limit
   these capabilities to those directly affecting COIN, recognizing
   that any use case will realistically require many additional
   capabilities for its realization. We omit this dedicated section
   if relevant capabilities are already sufficiently covered by the
   corresponding research questions.

This document discusses these six aspects along a number of
individual use cases to demonstrate the diversity of COIN
applications. It is intended as a basis for further analyses and
discussions within the wider research community. This document has
received review comments at different stages of its development from
experts within and outside of COINRG, as detailed in the
Acknowledgements section. This document represents the consensus of
COINRG.

2. Terminology

This document uses the terminology defined below.

Programmable Network Devices (PNDs): network devices, such as
   network interface cards and switches, which are programmable
   (e.g., using P4 [P4] or other languages).

COIN execution environment: a class of target environments for
   function execution, for example, an execution environment based on
   the Java Virtual Machine (JVM) that can run functions represented
   in JVM byte code.

COIN system: the PNDs (and end systems) and their execution
   environments, together with the communication resources
   interconnecting them, operated by a single provider or through
   interactions between multiple providers that jointly offer COIN
   capabilities.

COIN capability: a feature enabled through the joint processing of
   computation and communication resources in the network.

COIN program: a monolithic functionality that is provided according
   to the specification for said program and which may be requested
   by a user. A composite service can be built by orchestrating a
   combination of monolithic COIN programs.

COIN program instance: one running instance of a program.

COIN experience: a new user experience brought about through the
   utilization of COIN capabilities.
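
The relationships among these terms can be read as a simple data
model. The following sketch is purely illustrative and not part of
the terminology; all class and field names are assumptions made for
this sketch only.

   # Illustrative sketch only: a minimal data model mirroring the
   # terminology above.  Names are assumptions, not normative terms.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class CoinProgram:
       """A monolithic functionality requested via a service id."""
       service_id: str

   @dataclass
   class CoinProgramInstance:
       """One running instance of a COIN program."""
       program: CoinProgram
       node: str  # identifier of the hosting PND or end system

   @dataclass
   class CoinExecutionEnvironment:
       """A class of target environments for function execution."""
       kind: str  # e.g., "JVM"
       instances: List[CoinProgramInstance] = field(default_factory=list)

   @dataclass
   class CoinSystem:
       """PNDs and end systems, their environments, and their links."""
       environments: List[CoinExecutionEnvironment] = \
           field(default_factory=list)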

3. Providing New COIN Experiences

3.1. Mobile Application Offloading

3.1.1. Description

This scenario can be exemplified in an immersive gaming application,
where a single user plays a game using a Virtual Reality (VR)
headset. The headset hosts several COIN programs. For instance, the
display COIN program renders frames to the user, while other programs
are realized for VR content processing and to incorporate input data
received from sensors (e.g., in body-worn devices including the VR
headset).

Once this application is partitioned into its constituent COIN
programs and deployed throughout a COIN system utilizing a COIN
execution environment, only the display COIN program may be left in
the headset. Meanwhile, the CPU-intensive real-time VR content
processing COIN program can be offloaded to a nearby resource-rich
home PC or a Programmable Network Device (PND) in the operator's
access network for better execution (i.e., faster and possibly
higher-resolution generation).

3.1.2. Characterization

Partitioning a mobile application into several constituent COIN
programs allows the application to be treated as a collection of COIN
programs that can be flexibly composed and executed in a distributed
manner. In our example above, most capabilities of a mobile
application fall into one of three groups: receiving, processing, and
displaying.
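
As a purely illustrative sketch (not part of the use case), the
receive-process-display partition could be expressed as three
independently placeable functions, where the processing step is the
natural candidate for offloading. The function names and data
formats below are assumptions for illustration only.

   # Illustrative sketch: the mobile application as three COIN
   # programs that can be composed locally or dispatched remotely.
   # All names and formats are assumptions for illustration only.
   from typing import Callable

   def receive(sensor_input: bytes) -> dict:
       """R: incorporate input data received from sensors."""
       return {"pose": sensor_input[:6], "buttons": sensor_input[6:]}

   def process(state: dict) -> bytes:
       """P: compute-intensive VR content processing (offloadable)."""
       return b"frame-for-pose-" + bytes(state["pose"])

   def display(frame: bytes) -> None:
       """D: render frames to the user; stays on the headset."""
       print(f"displaying {len(frame)} bytes")

   def run_pipeline(sensor_input: bytes,
                    process_fn: Callable[[dict], bytes] = process) -> None:
       # process_fn may be the local function or a stub forwarding the
       # call to a remote COIN program instance (e.g., on a home PC).
       display(process_fn(receive(sensor_input)))

   run_pipeline(b"\x01\x02\x03\x04\x05\x06\x00")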

Any device may realize one or more of the COIN programs of a mobile
application and expose them to the COIN system and its constituent
COIN execution environments. When the COIN program sequence is
executed on a single device, the outcome is what is commonly seen
with applications running on mobile devices.

However, the execution of a COIN program may be moved to other (e.g.,
more suitable) devices, including PNDs, which have exposed the
corresponding COIN program as individual COIN program instances to
the COIN system by means of a service identifier. The result is
equivalent to mobile function offloading, in that it offers a
possible reduction of power consumption (e.g., offloading CPU-
intensive process functions to a remote server) or an improved end-
user experience (e.g., moving display functions to a nearby smart TV)
by selecting more suitably placed COIN program instances in the
overall COIN system.

We can already see a trend toward supporting such functionality that
relies on dedicated cloud hardware (e.g., gaming platforms rendering
content externally). We envision, however, that such functionality
will become more pervasive as specific facilities, such as
entertainment parks or even hotels, deploy the needed edge computing
capabilities to enable localized gaming as well as non-gaming
scenarios.

Figure 1 shows one realization of the above scenario, where a "DPR
app" is running on a mobile device (containing the partitioned COIN
programs Display (D), Process (P), and Receive (R)) over a
programmable switching network, e.g., a Software-Defined Network
(SDN) here. The packaged applications are made available through a
localized "playstore server". The mobile application installation is
realized as a service deployment process, combining the local app
installation with a distributed deployment (and orchestration) of one
or more COIN programs on the most suitable end systems or PNDs (here,
a "processing server").

                 +----------+  Processing Server
    Mobile       | +------+ |
   +---------+   | |  P   | |
   |   App   |   | +------+ |
   | +-----+ |   | +------+ |
   | |D|P|R| |   | |  SR  | |
   | +-----+ |   | +------+ |      Internet
   | +-----+ |   +----------+     /

              skipping to change at line 321

   |+-------+|        /+--+
   ||  SR   ||   +---------+
   |+-------+|   |Playstore|
   +---------+   | Server  |
       TV        +---------+

        Figure 1: Application Function Offloading Example

Such localized deployment could, for instance, be provided by a
visiting site, such as a hotel or a theme park. Once the processing
COIN program is terminated on the mobile device, the "service routing
(SR)" elements in the network route (service) requests instead to the
(previously deployed) processing COIN program running on the
processing server over an existing SDN network. Here, capabilities
and other constraints for selecting the appropriate COIN program, in
case more than one has been deployed, may be provided both in the
advertisement of the COIN program and in the service request itself.
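
The selection logic below is not defined by this document; it is a
minimal, hypothetical sketch of how an SR element might pick among
advertised COIN program instances using constraints carried in the
advertisements and in the service request. Field names and the
selection policy are assumptions for illustration.

   # Illustrative sketch only: constraint-based selection among
   # advertised COIN program instances at an SR element.
   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class Advertisement:
       service_id: str    # identifies the COIN program, e.g., "process"
       instance_id: str   # identifies one COIN program instance
       latency_ms: float  # advertised or measured latency to the instance
       has_gpu: bool      # example capability flag

   def select_instance(ads: List[Advertisement], service_id: str,
                       max_latency_ms: float,
                       need_gpu: bool = False) -> Optional[Advertisement]:
       """Return the lowest-latency instance meeting the constraints."""
       candidates = [a for a in ads
                     if a.service_id == service_id
                     and a.latency_ms <= max_latency_ms
                     and (a.has_gpu or not need_gpu)]
       return min(candidates, key=lambda a: a.latency_ms, default=None)

   ads = [Advertisement("process", "home-pc", 8.0, True),
          Advertisement("process", "access-pnd", 2.5, False),
          Advertisement("process", "cloud", 35.0, True)]

   # A request tolerating at most 10 ms and needing a GPU picks the
   # home PC; without the GPU constraint it would pick the access PND.
   print(select_instance(ads, "process", max_latency_ms=10.0,
                         need_gpu=True).instance_id)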

As an extension to the above scenarios, we can also envision that
content from one processing COIN program may be distributed to more
than one display COIN program (e.g., for multi- and many-viewing
scenarios). Here, an offloaded processing program may collate input
from several users in the (virtual) environment to generate a
possibly three-dimensional render that is then distributed via a
service-level multicast capability towards more than one display COIN
program.

3.1.3. Existing Solutions

The ETSI Multi-access Edge Computing (MEC) [ETSI] suite of
technologies provides solutions for mobile function offloading by
allowing mobile applications to select resources in edge devices to
execute functions instead of executing them directly on the mobile
device. For this, ETSI MEC utilizes a set of interfaces for the
selection of suitable edge resources, connecting to so-called MEC
application servers, while also allowing for sending data for
function execution to the application server.

However, the technologies do not utilize microservices
[Microservices]; they mainly rely on virtualization approaches such
as containers or virtual machines, thus requiring a heavier
processing and memory footprint in a COIN execution environment and
the executing intermediaries. Also, the ETSI work does not allow for
the dynamic selection and redirection of COIN program calls to
varying edge resources; it only allows such redirection to a single
MEC application server.

Also, the selection of the edge resource (the app server) is
relatively static, relying on DNS-based endpoint selection, which
does not cater to the requirements of the example provided above,
where the latency for redirecting to another device lies within a few
milliseconds for aligning with the frame rate of the display
microservice.

Lastly, MEC application servers are usually considered resources
provided by the network operator through its MEC infrastructure,

skipping to change at line 386

case, some of which have been realized in an Android-based
realization of the microservices as a single application, which is
capable of dynamically redirecting traffic to other microservice
instances in the network. This capability, together with the
underlying path-based forwarding capability (using SDN), was
demonstrated publicly (e.g., at the Mobile World Congress 2018 and
2019).

3.1.4. Opportunities

* The packaging of COIN programs into existing mobile application
  packages may enable the migration from current (mobile) device-
  centric execution of those mobile applications toward a possible
  distributed execution of the constituent COIN programs that are
  part of the overall mobile application.

* The orchestration for deploying COIN program instances in specific
  end systems and PNDs alike may open up the possibility for
  localized infrastructure owners, such as hotels or venue owners,
  to offer their compute capabilities to their visitors for improved
  or even site-specific experiences.

* The execution of (current mobile) app-level COIN programs may
  speed up the execution of said COIN programs by relocating the
  execution to more suitable devices, including PNDs that may be
  better located in relation to other COIN programs and thus improve
  performance, such as latency.

* The support for service-level routing of requests (such as service
  routing in [APPCENTRES]) may provide higher flexibility when
  switching from one COIN program instance to another (e.g., due to
  changing constraints for selecting the new COIN program instance).
  Here, PNDs may support service routing solutions by acting as
  routing overlay nodes to implement the necessary additional lookup
  functionality and also possibly support the handling of affinity
  relations (i.e., the forwarding of one packet to the destination
  of a previous one due to a higher-level service relation, as
  discussed in [SarNet2021]; see the sketch after this list).

* The ability to identify service-level COIN elements will allow for
  routing service requests to those COIN elements, including PNDs,
  therefore possibly allowing for new COIN functionality to be
  included in the mobile application.

* The support for constraint-based selection of a specific COIN
  program instance over others (e.g., constraint-based routing in
  [APPCENTRES], showcased for PNDs in [SarNet2021]) may allow for a
  more flexible and app-specific selection of COIN program
  instances, thereby better meeting app-specific and end-user
  requirements.
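
As a hypothetical illustration of the affinity handling mentioned
above (cf. [SarNet2021]; all names below are assumptions, not a
specified mechanism), an SR element could pin later requests of the
same higher-level service relation to the instance chosen for the
first request:

   # Illustrative sketch only: affinity-aware forwarding at an SR
   # element.  Requests sharing an affinity key (e.g., a session or
   # service-relation identifier) reuse the instance selected for
   # the first request of that key.
   from typing import Callable, Dict

   affinity_table: Dict[str, str] = {}  # affinity key -> pinned instance

   def forward(affinity_key: str,
               select_instance: Callable[[], str]) -> str:
       """Return the instance to use, pinning the first selection."""
       if affinity_key not in affinity_table:
           affinity_table[affinity_key] = select_instance()
       return affinity_table[affinity_key]

   # The first request of the relation triggers a selection; later
   # requests of the same relation stick to the pinned instance.
   print(forward("user1/display", lambda: "processing-server-A"))
   print(forward("user1/display", lambda: "processing-server-B"))  # still A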

3.1.5. Research Questions

* RQ 3.1.1: How to combine service-level orchestration frameworks,
  such as TOSCA orchestration templates [TOSCA], with app-level
  (e.g., mobile application) packaging methods, ultimately providing
  the means for packaging microservices for deployments in
  distributed networked computing environments?

* RQ 3.1.2: How to reduce latencies involved in COIN program
  interactions where COIN program instance locations may change
  quickly? Can service-level requests be routed directly through
  in-band signaling methods instead of relying on out-of-band
  discovery, such as through the DNS?

* RQ 3.1.3: How to signal constraints used for routing requests
  towards COIN program instances in a scalable manner (i.e., for
  dynamically choosing the best possible service sequence of one or
  more COIN programs for a given application experience through
  chaining COIN program executions)?

* RQ 3.1.4: How to identify COIN programs and program instances so
  as to allow routing (service) requests to specific instances of a
  given service?

* RQ 3.1.5: How to identify a specific choice of a COIN program
  instance over others, thus allowing the execution of a service of
  a specific COIN program to be pinned to a specific resource (i.e.,
  a COIN program instance in the distributed environment)?

* RQ 3.1.6: How to provide affinity of service requests towards COIN
  program instances (i.e., longer-term transactions with ephemeral
  state established at a specific COIN program instance)?

* RQ 3.1.7: How to provide constraint-based routing decisions that
  can be realized at packet forwarding speed (e.g., using techniques
  explored in [SarNet2021] at the forwarding plane or using
  approaches like [Multi2020] for extended routing protocols)?

* RQ 3.1.8: What COIN capabilities may support the execution of COIN
  programs and their instances?

* RQ 3.1.9: How to ensure real-time synchronization and consistency
  of distributed application states among COIN program instances, in
  particular, when frequently changing the choice for a particular
  COIN program in terms of executing a service instance?

3.2. Extended Reality and Immersive Media

3.2.1. Description

Extended Reality (XR) encompasses VR, Augmented Reality (AR) and
Mixed Reality (MR). It provides the basis for the metaverse and is
the driver of a number of advances in interactive technologies.
While initially associated with gaming and immersive entertainment,
applications now include remote diagnosis, maintenance, telemedicine,

skipping to change at line 586

the purpose of this document, it is important to note that the use of
COIN for XR does not imply a specific protocol but targets an
architecture enabling the deployment of the services. In this
context, similar considerations as for Section 3.1 apply.

3.2.3. Existing Solutions

The XR field has profited from extensive research in the past years
in gaming, machine learning, network telemetry, high resolution
imaging, smart cities, and the Internet of Things (IoT).
Information-Centric Networking (ICN) (and related) approaches that
combine publish-subscribe and distributed storage are also well
suited for the multisource-multidestination applications of XR. New
AR and VR headsets and glasses have continued to evolve towards
autonomy with local computation capabilities, increasingly performing
much of the processing that is needed to render and augment the local
images. Mechanisms aimed at enhancing the computational and storage
capacities of mobile devices could also improve XR capabilities as
they include discovering available servers within the environment and
using them opportunistically to enhance the performance of
interactive applications and distributed file systems.

While there is still no specific COIN research in AR and VR, network
support is important for offloading some of the computations related
to movement, multiuser interactions, and networked applications,
notably in gaming but also in health [NetworkedVR]. This new
approach to networked AR and VR is exemplified in [eCAR] by

skipping to change at line 731

performers receive live feedback from the audience, which may also be
conveyed to other audience members.

There are two main aspects:

i.  to emulate as closely as possible the experience of live
    performances where the performers, audience, director, and
    technicians are co-located in the same physical space, such as a
    theater; and

ii. to enhance conventional physical performances with features such
    as personalization of the experience according to the preferences
    or needs of the performers, directors, and audience members.

Examples of personalization include:

* Viewpoint selection, such as choosing a specific seat in the
  theater or, more advanced, positioning the audience member's
  viewpoint outside of the conventional seating (i.e., amongst,
  above, or behind the performers, but within some limits that may
  be imposed by the performers or the director for artistic
  reasons);

* Augmentation of the performance with subtitles, audio description,
  actor tagging, language translation, advertisements and product
  placement, and other enhancements and filters to make the
  performance accessible to audience members who are disabled (e.g.,
  the removal of flashing images for audience members who have
  epilepsy or alternative color schemes for those who have color
  blindness).

3.3.2. Characterization

There are several chained functional entities that are candidates for
being deployed as COIN programs:

* Performer aggregation and editing functions

* Distribution and encoding functions

* Personalization functions

  - to select which of the existing streams should be forwarded to
    the audience member, remote performer, or member of the
    production team

skipping to change at line 787

    audience head position) when this processing has been offloaded
    from the viewer's end system to the COIN function due to limited
    processing power in the end system or due to limited network
    bandwidth to receive all of the individual streams to be
    processed.

* Audience feedback sensor processing functions

* Audience feedback aggregation functions

These are candidates for deployment as COIN programs in PNDs rather
than being located in end systems (at the performers' site, the
audience members' premises, or in a central cloud location) for
several reasons:

* Personalization of the performance according to viewer preferences
  and requirements is infeasible to do in a centralized manner at
  the performer premises: the computational resources and network
  bandwidth would need to scale with the number of personalized
  streams.

skipping to change at line 822

processing capabilities at centralized data centers.

3.3.3. Existing Solutions

Note: Existing solutions for some aspects of this use case are
covered in Section 3.1, Section 3.2, and Section 5.1.

3.3.4. Opportunities

* Executing media processing and personalization functions on-path
  as COIN programs in PNDs can avoid detours/path stretch to central
  servers, thus reducing latency and bandwidth consumption. For
  example, the overall delay for performance capture, aggregation,
  distribution, personalization, consumption, capture of audience
  response, feedback processing, aggregation, and rendering should
  be achieved within an upper bound of latency (the tolerable amount
  is to be defined, but on the order of hundreds of milliseconds to
  mimic performers perceiving audience feedback, such as laughter or
  other emotional responses in a theater setting).

* Processing of media streams allows COIN programs, PNDs, and the
  wider COIN system/environment to be contextually aware of flows
  and their requirements, which can be used for determining network
  treatment of the flows (e.g., path selection, prioritization,
  multiflow coordination, synchronization, and resilience).

3.3.5. Research Questions

* RQ 3.3.1: In which PNDs should COIN programs for aggregation,
  encoding, and personalization functions be located? Close to the
  performers or close to the viewers?

* RQ 3.3.2: How far from the direct network path from performer to
  viewer should COIN programs be located, considering the latency
  implications of path-stretch and the availability of processing
  capacity at PNDs? How should tolerances be defined by users?

* RQ 3.3.3: Should users decide which PNDs should be used for
  executing COIN programs for their flows, or should they express
  requirements and constraints that will direct decisions by the
  orchestrator/manager of a COIN system? In case of the latter, how
  can users specify requirements on network and processing metrics
  (such as latency and throughput bounds)?

* RQ 3.3.4: How to achieve synchronization across multiple streams
  to allow for merging, audio-video interpolation, and other cross-
  stream processing functions that require time synchronization for
  the integrity of the output? How can this be achieved considering
  that synchronization may be required between flows that are:

skipping to change at line 880

  This RQ raises issues associated with synchronization across
  multiple media streams and substreams [RFC7272] as well as time
  synchronization between PNDs/routers on multiple paths [RFC8039].

* RQ 3.3.5: Where will COIN programs be executed? In the data plane
  of PNDs, in other on-router computational capabilities within
  PNDs, or in adjacent computational nodes?

* RQ 3.3.6: Are computationally intensive tasks, such as video
  stitching or media recognition and annotation (cf. Section 3.2),
  considered as suitable candidate COIN programs or should they be
  implemented in end systems?

* RQ 3.3.7: If the execution of COIN programs is offloaded to
  computational nodes outside of PNDs (e.g., for processing by
  GPUs), should this still be considered as COIN? Where is the
  boundary between COIN capabilities and explicit routing of flows
  to end systems?

3.3.6. Additional Desirable Capabilities

In addition to the capabilities driven by the research questions
above, there are a number of other features that solutions in this
space might benefit from. In particular, if users are indeed
empowered to specify requirements on network and processing metrics,
one important capability of COIN systems will be to respect these
user-specified requirements and constraints when routing flows and
selecting PNDs for executing COIN programs. Similarly, solutions
should be able to synchronize flow treatment and processing across
multiple related flows, which may be on disjoint paths, to provide
similar performance to different entities.

4. Supporting New COIN Systems

4.1. In-Network Control / Time-Sensitive Applications

4.1.1. Description

The control of physical processes and components of industrial
production lines is essential for the growing automation of
production and ideally allows for a consistent quality level.
Commonly, the control has been exercised by control software running
on Programmable Logic Controllers (PLCs) located directly next to the
controlled process or component. This approach is best suited for
settings with a simple model that is focused on a single or a few
controlled components.

Modern production lines and shop floors are characterized by an
increasing number of involved devices and sensors, a growing level of
dependency between the different components, and more complex control
models. Centralized control is desirable to manage the large amount
of available information, which often has to be preprocessed or
aggregated with other information before it can be used. As a
result, computations are increasingly spatially decoupled and moved
away from the controlled objects, thus inducing additional latency.
Instead, moving compute functionality onto COIN execution

skipping to change at line 967

latencies are essential, there is an even greater need for stable and
deterministic levels of latency, because controllers can generally
cope with different levels of latency if they are designed for them,
but they are significantly challenged by dynamically changing or
unstable latencies. The unpredictable latency of the Internet
exemplifies this problem if, for example, off-premise cloud platforms
are included.

4.1.3. Existing Solutions

Control functionality is commonly executed on PLCs close to the
machinery. These PLCs typically require vendor-specific
implementations and are often hard to upgrade and update, which makes
such control processes inflexible and difficult to manage. Moving
computations to more freely programmable devices thus has the
potential to significantly improve flexibility. In this context,
directly moving control functionality to (central) cloud environments
is generally possible, yet only feasible if latency constraints are
lenient.

Early approaches such as [RÜTH] and [VESTIN] have already shown the

skipping to change at line 1019

* RQ 4.1.3: How to find suitable tradeoffs regarding simplicity of
  the control function ("accuracy of the control") and
  implementation complexity ("implementability")?

* RQ 4.1.4: How to (dynamically) distribute simplified versions of
  the global (control) function among COIN execution environments?

* RQ 4.1.5: How to (dynamically) compose or recompose the
  distributed control functions?

* RQ 4.1.6: Can there be different control levels? For example,
  "quite inaccurate & very low latency" for PNDs deep in the
  network; "more accurate & higher latency" for more powerful COIN
  execution environments that are farther away; and "very accurate &
  very high latency" for cloud environments that are far away. (A
  sketch of such a tiered split follows this list.)

* RQ 4.1.7: Who decides which control instance is executed and which
  information can be used for this decision?

* RQ 4.1.8: How do the different control instances interact and how
  can we define their hierarchy?
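
RQ 4.1.6 can be illustrated with a small, purely hypothetical sketch
of such a tiered split; the tier names, control rules, and latency
budgets below are assumptions for illustration, not measured or
specified values.

   # Illustrative sketch only: dispatching a control decision to one
   # of several control levels (cf. RQ 4.1.6).
   from dataclasses import dataclass
   from typing import Callable, List

   @dataclass
   class ControlTier:
       name: str
       max_latency_ms: float              # latency this tier can meet
       control: Callable[[float], float]  # sensor reading -> actuation

   def coarse_pnd_control(reading: float) -> float:
       # Very low latency, quite inaccurate: simple threshold rule.
       return 1.0 if reading > 80.0 else 0.0

   def edge_control(reading: float) -> float:
       # Higher latency, more accurate: proportional rule.
       return max(0.0, min(1.0, 0.05 * (reading - 60.0)))

   def cloud_control(reading: float) -> float:
       # Very high latency, very accurate: stand-in for a full
       # model-based control loop.
       return max(0.0, min(1.0, 0.05 * (reading - 60.0) + 0.01))

   TIERS: List[ControlTier] = [
       ControlTier("PND deep in the network", 1.0, coarse_pnd_control),
       ControlTier("edge COIN environment", 10.0, edge_control),
       ControlTier("cloud", 100.0, cloud_control),
   ]

   def actuate(reading: float, latency_budget_ms: float) -> float:
       """Pick the most accurate tier whose latency fits the budget."""
       feasible = [t for t in TIERS
                   if t.max_latency_ms <= latency_budget_ms]
       chosen = feasible[-1] if feasible else TIERS[0]
       return chosen.control(reading)

   print(actuate(reading=70.0, latency_budget_ms=2.0))   # PND tier: 0.0
   print(actuate(reading=70.0, latency_budget_ms=50.0))  # edge tier: 0.5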

4.1.6. Additional Desirable Capabilities

In addition to the capabilities driven by the research questions

skipping to change at line 1107

or sampling frequency is often larger than required. Consequently,
it is likely that more data is transmitted than is needed or desired,
prompting the deployment of filtering techniques. For example, COIN
programs deployed in the on-premise network could filter out
redundant or undesired data before it leaves the premises using
simple traffic filters, thus reducing the required (upload)
bandwidths. The available sensor data could be scaled down using
standard statistical sampling, packet-based sub-sampling (i.e., only
forwarding every n-th packet), or using filtering as long as the
sensor value is in an uninteresting range while forwarding with a
higher resolution once the sensor value range becomes interesting
(cf. [KUNZE-SIGNAL] and [TIRPITZ-REDUCIO]). While the former
variants are oblivious to the semantics of the sensor data, the
latter variant requires an understanding of the current sensor
levels. In any case, it is important that end hosts are informed
about the filtering so that they can distinguish between data loss
and data filtered out on purpose.
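
A minimal sketch of the two reduction variants described above
(semantics-oblivious sub-sampling and semantics-aware range-based
filtering) is given below; the thresholds, the sampling rate, and the
per-packet representation are assumptions chosen for illustration.

   # Illustrative sketch only: the two filtering variants described
   # above.  In a real deployment, the end hosts would additionally
   # need to be informed that such filtering is applied.
   from typing import Iterable, List, Tuple

   Packet = Tuple[int, float]  # (sequence number, sensor value)

   def subsample(packets: Iterable[Packet], n: int) -> List[Packet]:
       """Semantics-oblivious: forward only every n-th packet."""
       return [p for i, p in enumerate(packets) if i % n == 0]

   def range_filter(packets: Iterable[Packet], low: float, high: float,
                    n: int) -> List[Packet]:
       """Semantics-aware: sub-sample while the value stays in the
       uninteresting range [low, high]; forward at full resolution
       once it leaves that range."""
       out: List[Packet] = []
       for i, (seq, value) in enumerate(packets):
           interesting = value < low or value > high
           if interesting or i % n == 0:
               out.append((seq, value))
       return out

   readings = [(i, 25.0 if 10 <= i < 15 else 20.0) for i in range(30)]
   print(len(subsample(readings, 5)))                 # 6 forwarded
   print(len(range_filter(readings, 15.0, 22.0, 5)))  # 10 forwarded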

In practice, the collected data is further processed using various
forms of computation. Some of them are very complex or need the
complete sensor data during the computation, but there are also
simpler operations that can already be done on subsets of the overall
dataset or earlier on the communication path as soon as all data is
available. One example is finding the maximum of all sensor values,
which can either be done iteratively at each intermediate hop or at
the first hop where all data is available. Using expert knowledge
about the exact computation steps and the concrete transmission path

skipping to change at line 1165

context of general stream processing systems.
* RQ 4.2.1: How can the overall data processing pipeline be divided * RQ 4.2.1: How can the overall data processing pipeline be divided
into individual processing steps that could then be deployed as into individual processing steps that could then be deployed as
COIN functionality? COIN functionality?
* RQ 4.2.2: How to design COIN programs for (semantic) packet * RQ 4.2.2: How to design COIN programs for (semantic) packet
filtering and which filtering criteria make sense? filtering and which filtering criteria make sense?
* RQ 4.2.3: Which kinds of COIN programs can be leveraged for * RQ 4.2.3: Which kinds of COIN programs can be leveraged for
(pre)processing steps and what complexity can they have? preprocessing and processing steps and what complexity can they
have?
* RQ 4.2.4: How to distribute and coordinate COIN programs? * RQ 4.2.4: How to distribute and coordinate COIN programs?
* RQ 4.2.5: How to dynamically reconfigure and recompose COIN * RQ 4.2.5: How to dynamically reconfigure and recompose COIN
programs? programs?
* RQ 4.2.6: How to incorporate the (pre)processing and filtering * RQ 4.2.6: How to incorporate the preprocessing, processing, and
steps into the overall system? filtering steps into the overall system?
* RQ 4.2.7: How can changes to the data by COIN programs be signaled * RQ 4.2.7: How can changes to the data by COIN programs be signaled
to the end hosts? to the end hosts?
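To make RQ 4.2.3 and the maximum computation described earlier in this use case more concrete, the following hypothetical sketch (the hop model and all names are assumptions) shows how each hop could fold the data available to it into a partial result, so that only partial maxima travel further upstream.

   def hop_aggregate(local_readings, upstream_partials):
       """One COIN hop: reduce everything available here to one value."""
       return max(list(local_readings) + list(upstream_partials))

   # Leaf hops aggregate their directly attached sensors ...
   partial_a = hop_aggregate([3, 9, 4], [])
   partial_b = hop_aggregate([7, 2], [])

   # ... and the first hop where all data is available combines the
   # partial results before forwarding a single value to the end host.
   assert hop_aggregate([], [partial_a, partial_b]) == 9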
4.2.6. Additional Desirable Capabilities 4.2.6. Additional Desirable Capabilities
In addition to the capabilities driven by the research questions In addition to the capabilities driven by the research questions
above, there are a number of other features that such large-volume above, there are a number of other features that such large-volume
applications could benefit from. In particular, conforming to applications could benefit from. In particular, conforming to
standard application-level syntax and semantics likely simplifies standard application-level syntax and semantics likely simplifies
skipping to change at line 1195 skipping to change at line 1198
the performance of any approach developed based on the above research the performance of any approach developed based on the above research
questions. questions.
4.3. Industrial Safety 4.3. Industrial Safety
4.3.1. Description 4.3.1. Description
Despite increasing automation in production processes, human Despite increasing automation in production processes, human
workers are still often necessary. Consequently, safety measures workers are still often necessary. Consequently, safety measures
have a high priority to ensure that no human life is endangered. In have a high priority to ensure that no human life is endangered. In
traditional factories, the regions of contact between humans and conventional factories, the regions of contact between humans and
machines are well defined and interactions are simple. Simple safety machines are well defined and interactions are simple. Simple safety
measures like emergency switches at the working positions are enough measures like emergency switches at the working positions are enough
to provide a good level of safety. to provide a good level of safety.
Modern factories are characterized by increasingly dynamic and Modern factories are characterized by increasingly dynamic and
complex environments with new interaction scenarios between humans complex environments with new interaction scenarios between humans
and robots. Robots can directly assist humans, perform tasks and robots. Robots can directly assist humans, perform tasks
autonomously, or even freely move around on the shop floor. Hence, autonomously, or even freely move around on the shop floor. Hence,
the intersection between the human working area and the robots grows, the intersection between the human working area and the robots grows,
and it is harder for human workers to fully observe the complete and it is harder for human workers to fully observe the complete
skipping to change at line 1289 skipping to change at line 1292
Delivery of content to end users often relies on Content Delivery Delivery of content to end users often relies on Content Delivery
Networks (CDNs). CDNs store said content closer to end users for Networks (CDNs). CDNs store said content closer to end users for
latency-reduced delivery as well as to reduce load on origin servers. latency-reduced delivery as well as to reduce load on origin servers.
For this, they often utilize DNS-based indirection to serve the For this, they often utilize DNS-based indirection to serve the
request on behalf of the origin server. Both of these objectives are request on behalf of the origin server. Both of these objectives are
within scope to be addressed by COIN methods and solutions. within scope to be addressed by COIN methods and solutions.
5.1.2. Characterization 5.1.2. Characterization
From the perspective of this draft, a CDN can be interpreted as a From the perspective of this draft, a CDN can be interpreted as a
(network service level) set of (COIN) programs. These programs (network service level) set of COIN programs. These programs
implement a distributed logic for first distributing content from the implement a distributed logic for first distributing content from the
origin server to the CDN ingress and then further to the CDN origin server to the CDN ingress and then further to the CDN
replication points, which ultimately serve the user-facing content replication points, which ultimately serve the user-facing content
requests. requests.
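As a purely illustrative sketch of the user-facing part of such a program, the following fragment picks a replication point for a content request based on simple telemetry. The telemetry fields, the scoring heuristic, and all names are assumptions anticipating the opportunities in Section 5.1.4, not part of any deployed CDN.

   replicas = {
       "edge-1": {"rtt_ms": 10, "load": 0.9, "has_content": True},
       "edge-2": {"rtt_ms": 15, "load": 0.2, "has_content": True},
       "origin": {"rtt_ms": 60, "load": 0.1, "has_content": True},
   }

   def select_replica(replicas):
       """Prefer low latency, but penalize heavily loaded instances."""
       usable = {n: t for n, t in replicas.items() if t["has_content"]}
       return min(usable, key=lambda n: usable[n]["rtt_ms"]
                                        * (1 + usable[n]["load"]))

   print(select_replica(replicas))  # "edge-2": nearby edge-1 is overloaded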
5.1.3. Existing Solutions 5.1.3. Existing Solutions
CDN technologies have been well described and deployed in the CDN technologies have been well described and deployed in the
existing Internet. Core technologies like Global Server Load existing Internet. Core technologies like Global Server Load
Balancing (GSLB) [GSLB] and Anycast server solutions are used to deal Balancing (GSLB) [GSLB] and Anycast server solutions are used to deal
skipping to change at line 1320 skipping to change at line 1323
Studies such as those in [FCDN] have shown that content distribution Studies such as those in [FCDN] have shown that content distribution
at the level of named content, utilizing efficient (e.g., Layer 2 at the level of named content, utilizing efficient (e.g., Layer 2
(L2)) multicast for replication towards edge CDN nodes, can (L2)) multicast for replication towards edge CDN nodes, can
significantly increase the overall network and server efficiency. It significantly increase the overall network and server efficiency. It
also reduces indirection latency for content retrieval as well as also reduces indirection latency for content retrieval as well as
required edge storage capacity by benefiting from the increased required edge storage capacity by benefiting from the increased
network efficiency to renew edge content more quickly against network efficiency to renew edge content more quickly against
changing demand. Works such as those in [SILKROAD] utilize changing demand. Works such as those in [SILKROAD] utilize
Application-Specific Integrated Circuits (ASICs) to replace server- Application-Specific Integrated Circuits (ASICs) to replace server-
based load balancing with significant cost reductions, thus based load balancing with significant cost reductions, thus
showcasing the potential for in-network CN operations. showcasing the potential for in-network operations.
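A toy model of such in-network replication towards edge nodes, driven by the current membership of edge nodes in a content group, is sketched below; the table layout and all names are illustrative assumptions only.

   groups = {"content-42": {"edge-1", "edge-3"}}   # group -> current members

   def join(group, node):
       groups.setdefault(group, set()).add(node)

   def replicate(group, payload):
       """Return one (destination, payload) copy per current member."""
       return [(node, payload) for node in sorted(groups.get(group, ()))]

   join("content-42", "edge-7")
   print(replicate("content-42", b"chunk-0001"))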
5.1.4. Opportunities 5.1.4. Opportunities
* Supporting service-level routing of requests (such as service * Supporting service-level routing of requests (such as service
routing in [APPCENTRES]) to specific (COIN) program instances may routing in [APPCENTRES]) to specific COIN program instances may
improve on end-user experience in retrieving faster (and possibly improve on end-user experience in retrieving faster (and possibly
better quality) content. better quality) content.
* COIN instances may also be utilized to integrate service-related * COIN instances may also be utilized to integrate service-related
telemetry information to support the selection of the final telemetry information to support the selection of the final
service instance destination from a pool of possible choices. service instance destination from a pool of possible choices.
* Supporting the selection of a service destination from a set of * Supporting the selection of a service destination from a set of
possible (e.g., virtualized, distributed) choices, e.g., through possible choices (virtualized and distributed) in COIN program
constraint-based routing decisions (see [APPCENTRES]) in (COIN) instances (e.g., through constraint-based routing decisions as
program instances to improve the overall end-user experience by seen in [APPCENTRES]) to improve the overall end-user experience
selecting a "more suitable" service destination over another, by selecting a "more suitable" service destination over another
e.g., avoiding/reducing overload situations in specific service (e.g., avoiding/reducing overload situations in specific service
destinations. destinations).
* Supporting L2 capabilities for multicast (compute interconnection * Supporting L2 capabilities for multicast (compute interconnection
and collective communication in [APPCENTRES]), e.g., through in- and collective communication as seen in [APPCENTRES]) may reduce
network/switch-based replication decisions (and their the network utilization and therefore increase the overall system
optimizations) based on dynamic group membership information, may efficiency. For example, this support may be through in-network,
reduce the network utilization and therefore increase the overall switch-based replication decisions (and their optimizations) based
system efficiency. on dynamic group membership information.
5.1.5. Research Questions 5.1.5. Research Questions
In addition to the research questions in Section 3.1.5: In addition to the research questions in Section 3.1.5:
* RQ 5.1.1: How to utilize L2 multicast to improve on CDN designs? * RQ 5.1.1: How to utilize L2 multicast to improve on CDN designs?
How to utilize COIN capabilities in those designs, such as through How to utilize COIN capabilities in those designs, such as through
on-path optimizations for fanouts? on-path optimizations for fanouts?
* RQ 5.1.2: What forwarding methods may support the required * RQ 5.1.2: What forwarding methods may support the required
multicast capabilities (see [FCDN]) and how could programmable multicast capabilities (see [FCDN]) and how could programmable
COIN forwarding elements support those methods (e.g., extending COIN forwarding elements support those methods (e.g., extending
current SDN capabilities)? current SDN capabilities)?
* RQ 5.1.3: What are the constraints, reflecting both compute and * RQ 5.1.3: What are the constraints, reflecting both compute and
network capabilities, that may support joint optimization of network capabilities, that may support joint optimization of
routing and computing? How could intermediary (COIN) program routing and computing? How could intermediary COIN program
instances support, for example, the aggregation of those instances support, for example, the aggregation of those
constraints to reduce overall telemetry network traffic? constraints to reduce overall telemetry network traffic?
* RQ 5.1.4: Could traffic steering be performed on the data path and * RQ 5.1.4: Could traffic steering be performed on the data path and
per service request (e.g., through (COIN) program instances that per service request (e.g., through COIN program instances that
perform novel routing request lookup methods)? If so, what would perform novel routing request lookup methods)? If so, what would
be the performance improvements? be the performance improvements?
* RQ 5.1.5: How could storage be traded off against frequent, * RQ 5.1.5: How could storage be traded off against frequent,
multicast-based replication (see [FCDN])? Could intermediary/in- multicast-based replication (see [FCDN])? Could intermediary/in-
network (COIN) elements support the storage beyond current network COIN elements support the storage beyond current endpoint-
endpoint-based methods? based methods?
* RQ 5.1.6: What scalability limits exist for L2 multicast * RQ 5.1.6: What scalability limits exist for L2 multicast
capabilities? How to overcome them, e.g., through (COIN) program capabilities? How to overcome them, e.g., through COIN program
instances serving as stateful subtree aggregators to reduce the instances serving as stateful subtree aggregators to reduce the
needed identifier space (e.g., for bit-based forwarding)? needed identifier space (e.g., for bit-based forwarding)?
5.2. Compute-Fabric-as-a-Service (CFaaS) 5.2. Compute-Fabric-as-a-Service (CFaaS)
5.2.1. Description 5.2.1. Description
We interpret connected compute resources as operating at a suitable We interpret connected compute resources as operating at a suitable
layer, such as Ethernet or InfiniBand, but also at Layer 3 (L3), to layer, such as Ethernet or InfiniBand, but also at Layer 3 (L3), to
allow for the exchange of suitable invocation methods, such as those allow for the exchange of suitable invocation methods, such as those
exposed through verb-based or socket-based APIs. The specific exposed through verb-based or socket-based APIs. The specific
invocations here are subject to the applications running over a invocations here are subject to the applications running over a
selected pool of such connected compute resources. selected pool of such connected compute resources.
Providing such a pool of connected compute resources (e.g., in Providing such a pool of connected compute resources (e.g., in
regional or edge data centers, base stations, and even end-user regional or edge data centers, base stations, and even end-user
devices) opens up the opportunity for infrastructure providers to devices) opens up the opportunity for infrastructure providers to
offer CFaaS-like offerings to application providers, leaving the offer CFaaS-like offerings to application providers, leaving the
choice of the appropriate invocation method to the app and service choice of the appropriate invocation method to the app and service
provider. Through this, the compute resources can be utilized to provider. Through this, the compute resources can be utilized to
execute the desired (COIN) programs of which the application is execute the desired COIN programs of which the application is
composed, while utilizing the interconnection between those compute composed, while utilizing the interconnection between those compute
resources to do so in a distributed manner. resources to do so in a distributed manner.
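The following hypothetical sketch illustrates how a tenant could express the demand side of such a fabric and how a provider could match it against its supply side. The field names, the interface, and the selection rule are assumptions; this document does not define such an API.

   from dataclasses import dataclass

   @dataclass
   class FabricRequest:               # demand side, stated by the tenant
       tenant: str
       vcpus: int
       max_latency_ms: int
       invocation: str                # e.g., "socket" or "verbs"

   supply = [                         # supply side, offered by the provider
       {"id": "edge-dc-1", "free_vcpus": 16, "latency_ms": 5,
        "apis": {"socket"}},
       {"id": "region-dc", "free_vcpus": 96, "latency_ms": 25,
        "apis": {"socket", "verbs"}},
   ]

   def select_resources(req):
       """Keep only resources that satisfy the tenant's constraints."""
       return [r for r in supply
               if r["free_vcpus"] >= req.vcpus
               and r["latency_ms"] <= req.max_latency_ms
               and req.invocation in r["apis"]]

   req = FabricRequest("tenant-a", vcpus=8, max_latency_ms=10,
                       invocation="socket")
   print([r["id"] for r in select_resources(req)])   # ['edge-dc-1']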
5.2.2. Characterization 5.2.2. Characterization
We foresee those CFaaS offerings to be tenant-specific, with a tenant We foresee those CFaaS offerings to be tenant-specific, with a tenant
here defined as the provider of at least one application. For this, here defined as the provider of at least one application. For this,
we foresee an interaction between the CFaaS provider and tenant to we foresee an interaction between the CFaaS provider and tenant to
dynamically select the appropriate resources to define the demand dynamically select the appropriate resources to define the demand
side of the fabric. Conversely, we also foresee the supply side of side of the fabric. Conversely, we also foresee the supply side of
skipping to change at line 1433 skipping to change at line 1436
5.2.3. Existing Solutions 5.2.3. Existing Solutions
There exist a number of technologies to build non-local (wide area) There exist a number of technologies to build non-local (wide area)
L2 as well as L3 networks, which in turn allow for connecting L2 as well as L3 networks, which in turn allow for connecting
compute resources for a distributed computational task. For compute resources for a distributed computational task. For
instance, 5G-LAN [SA2-5GLAN] specifies a cellular L2 bearer for instance, 5G-LAN [SA2-5GLAN] specifies a cellular L2 bearer for
interconnecting L2 resources within a single cellular operator. The interconnecting L2 resources within a single cellular operator. The
work in [ICN-5GLAN] outlines using a path-based forwarding solution work in [ICN-5GLAN] outlines using a path-based forwarding solution
over 5G-LAN as well as SDN-based LAN connectivity together with an over 5G-LAN as well as SDN-based LAN connectivity together with an
Information-Centric Network (ICN) based naming of IP and HTTP-level ICN-based naming of IP and HTTP-level resources. This is done in
resources. This is done in order to achieve computational order to achieve computational interconnections, including scenarios
interconnections, including scenarios such as those outlined in such as those outlined in Section 3.1. L2 network virtualization
Section 3.1. L2 network virtualization (see [L2Virt]) is one of the (see [L2Virt]) is one of the methods used for realizing so-called
methods used for realizing so-called "cloud-native" applications for "cloud-based" applications for applications developed with "physical"
applications developed with "physical" networks in mind, thus forming networks in mind, thus forming an interconnected compute and storage
an interconnected compute and storage fabric. fabric.
5.2.4. Opportunities 5.2.4. Opportunities
* Supporting service-level routing of compute resource requests * Supporting service-level routing of compute resource requests
(such as service routing in [APPCENTRES]) may allow for utilizing (such as service routing in [APPCENTRES]) may allow for utilizing
the wealth of compute resources in the overall CFaaS fabric for the wealth of compute resources in the overall CFaaS fabric for
execution of distributed applications, where the distributed execution of distributed applications, where the distributed
constituents of those applications are realized as (COIN) programs constituents of those applications are realized as COIN programs
and executed within a COIN system as (COIN) program instances. and executed within a COIN system as COIN program instances.
* Supporting the constraint-based selection of a specific (COIN) * Supporting the constraint-based selection of a specific COIN
program instance over others (such as constraint-based routing in program instance over others (such as constraint-based routing in
[APPCENTRES]) will allow for optimizing both the CFaaS provider [APPCENTRES]) will allow for optimizing both the CFaaS provider
constraints as well as tenant-specific constraints. constraints as well as tenant-specific constraints.
* Supporting L2 and L3 capabilities for multicast (such as compute * Supporting L2 and L3 capabilities for multicast (such as compute
interconnection and collective communication in [APPCENTRES]) will interconnection and collective communication in [APPCENTRES]) will
allow for decreasing not only network utilization but also possible allow for decreasing not only network utilization but also possible
compute utilization (due to avoiding unicast replication at those compute utilization (due to avoiding unicast replication at those
compute endpoints), thereby decreasing total cost of ownership for compute endpoints), thereby decreasing total cost of ownership for
the CFaaS offering. the CFaaS offering.
* Supporting the enforcement of trust relationships and isolation * Supporting intermediary COIN program instances to allow for the
policies through intermediary (COIN) program instances, e.g., enforcement of trust relationships and isolation policies (e.g.,
enforcing specific traffic shares or strict isolation of traffic enforcing specific traffic shares or strict isolation of traffic
through differentiated queueing. through differentiated queueing).
5.2.5. Research Questions 5.2.5. Research Questions
In addition to the research questions in Section 3.1.5: In addition to the research questions in Section 3.1.5:
* RQ 5.2.1: How to convey tenant-specific requirements for the * RQ 5.2.1: How to convey tenant-specific requirements for the
creation of the CFaaS fabric? creation of the CFaaS fabric?
* RQ 5.2.2: How to dynamically integrate resources into the compute * RQ 5.2.2: How to dynamically integrate resources into the compute
fabric being utilized for the app execution (those resources fabric being utilized for the app execution (those resources
include, but are not limited to, end-user provided resources), include, but are not limited to, end-user provided resources),
particularly when driven by tenant-level requirements and changing particularly when driven by tenant-level requirements and changing
service-specific constraints? How can those resources be exposed service-specific constraints? How can those resources be exposed
through possible (COIN) execution environments? through possible COIN execution environments?
* RQ 5.2.3: How to utilize COIN capabilities to aid the availability * RQ 5.2.3: How to utilize COIN capabilities to aid the availability
and accountability of resources, i.e., what may be (COIN) programs and accountability of resources, i.e., what may be COIN programs
for a CFaaS environment that in turn would utilize the distributed for a CFaaS environment that in turn would utilize the distributed
execution capability of a COIN system? execution capability of a COIN system?
* RQ 5.2.4: How to utilize COIN capabilities to enforce traffic and * RQ 5.2.4: How to utilize COIN capabilities to enforce traffic and
isolation policies for establishing trust between tenant and CFaaS isolation policies for establishing trust between tenant and CFaaS
provider in an assured operation? provider in an assured operation?
* RQ 5.2.5: How to optimize the interconnection of compute * RQ 5.2.5: How to optimize the interconnection of compute
resources, including those dynamically added and removed during resources, including those dynamically added and removed during
the provisioning of the tenant-specific compute fabric? the provisioning of the tenant-specific compute fabric?
skipping to change at line 1596 skipping to change at line 1599
network programming of individual virtual switches. To our network programming of individual virtual switches. To our
knowledge, no complete solution has been developed for deploying knowledge, no complete solution has been developed for deploying
virtual COIN programs over mobile or data center networks. virtual COIN programs over mobile or data center networks.
5.3.4. Opportunities 5.3.4. Opportunities
Virtual network programming by tenants could bring benefits such as: Virtual network programming by tenants could bring benefits such as:
* A unified programming model, which can facilitate porting COIN * A unified programming model, which can facilitate porting COIN
programs between data centers, 5G networks, and other fixed and programs between data centers, 5G networks, and other fixed and
wireless networks, as well as sharing controller, code and wireless networks, as well as sharing controllers, code, and
expertise. expertise.
* Increasing the level of customization available to customers/ * Increasing the level of customization available to customers/
tenants of mobile networks or data centers compared to typical tenants of mobile networks or data centers compared to typical
configuration capabilities. For example, 5G network evolution configuration capabilities. For example, 5G network evolution
points to an ever-increasing specialization and customization of points to an ever-increasing specialization and customization of
private mobile networks, which could be handled by tenants using a private mobile networks, which could be handled by tenants using a
programming model similar to P4. programming model similar to P4.
* Using network programs to influence underlying network services * Using network programs to influence underlying network services
skipping to change at line 1669 skipping to change at line 1672
6.1.1. Description 6.1.1. Description
There is a growing range of use cases demanding the realization of AI There is a growing range of use cases demanding the realization of AI
training capabilities among distributed endpoints. One such use case training capabilities among distributed endpoints. One such use case
is to distribute large-scale model training across more than one data is to distribute large-scale model training across more than one data
center (e.g., when facing energy issues at a single site or when center (e.g., when facing energy issues at a single site or when
simply reaching the limits of training capabilities at one site, thus simply reaching the limits of training capabilities at one site, thus
wanting to complement training with the capabilities of another or wanting to complement training with the capabilities of another or
possibly many sites). From a COIN perspective, those capabilities possibly many sites). From a COIN perspective, those capabilities
may be realized as (COIN) programs and executed throughout a COIN may be realized as COIN programs and executed throughout a COIN
system, including in PNDs. system, including in PNDs.
6.1.2. Characterization 6.1.2. Characterization
Some solutions may desire the localization of reasoning logic (e.g., Some solutions may desire the localization of reasoning logic (e.g.,
for deriving attributes that better preserve privacy of the utilized for deriving attributes that better preserve privacy of the utilized
raw input data). Quickly establishing (COIN) program instances in raw input data). Quickly establishing COIN program instances in
nearby compute resources, including PNDs, may even satisfy such nearby compute resources, including PNDs, may even satisfy such
localization demands on the fly (e.g., when a particular use is being localization demands on the fly (e.g., when a particular use is being
realized, then terminated after a given time). realized, then terminated after a given time).
Individual training "sites" may not be a data center, but may instead Individual training "sites" may not be a data center, but may instead
consist of powerful yet stand-alone devices that federate computing consist of powerful yet stand-alone devices that federate computing
power towards training a model, captured as "federated training" and power towards training a model, captured as "federated training" and
provided through platforms such as [FLOWER]. Use cases here may be provided through platforms such as [FLOWER]. Use cases here may be
those of distributed training on (user) image data, the training over those of distributed training on (user) image data, the training over
federated social media sites [MASTODON], or others. federated social media sites [MASTODON], or others.
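The core aggregation step behind such federated training can be sketched as follows. This is a generic sample-weighted average, not the [FLOWER] API, and all names are illustrative assumptions; the aggregation step itself is the kind of function that could be hosted in a COIN program instance.

   def fed_avg(updates):
       """updates: list of (num_samples, model_vector) from the sites."""
       total = sum(n for n, _ in updates)
       dim = len(updates[0][1])
       return [sum(n * vec[i] for n, vec in updates) / total
               for i in range(dim)]

   site_updates = [
       (100, [0.10, 0.20, 0.30]),     # site A trained on 100 samples
       (300, [0.20, 0.10, 0.40]),     # site B trained on 300 samples
   ]
   print([round(v, 3) for v in fed_avg(site_updates)])
   # [0.175, 0.125, 0.375]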
skipping to change at line 1711 skipping to change at line 1714
A number of activities on distributed AI training exist in the area A number of activities on distributed AI training exist in the area
of developing the 5th and 6th generation mobile network, with various of developing the 5th and 6th generation mobile network, with various
activities in the 3GPP Standards Development Organization (SDO) as activities in the 3GPP Standards Development Organization (SDO) as
well as use cases developed for the ETSI MEC initiative mentioned in well as use cases developed for the ETSI MEC initiative mentioned in
previous use cases. previous use cases.
6.1.4. Opportunities 6.1.4. Opportunities
* Supporting service-level routing of training requests (such as * Supporting service-level routing of training requests (such as
service routing in [APPCENTRES]), with AI services being exposed service routing in [APPCENTRES]) with AI services being exposed to
to the network, where (COIN) program instances may support the the network, and where COIN program instances may support the
selection of the most suitable service instance based on control selection of the most suitable service instance based on control
plane information, e.g., on AI worker compute capabilities, being plane information (e.g., on AI worker compute capabilities being
distributed across (COIN) program instances. distributed across COIN program instances).
* Supporting the collective communication primitives, such as all- * Supporting the collective communication primitives, such as all-
to-all, scatter-gather, utilized by the (distributed) AI workers to-all and scatter-gather, utilized by the (distributed) AI
to increase the overall network efficiency, e.g., through avoiding workers may increase the overall network efficiency (e.g., through
endpoint-based replication or even directly performing, e.g., avoiding endpoint-based replication or even directly performing
reduce, collective primitive operations in (COIN) program collective primitive operations in COIN program instances placed
instances placed in topologically advantageous places. in topologically advantageous places).
* Supporting collective communication between multiple instances of * Supporting collective communication between multiple instances of
AI services (i.e., (COIN) program instances) may positively impact AI services (i.e., COIN program instances) may positively impact
network but also compute utilization by moving from unicast network but also compute utilization by moving from unicast
replication to network-assisted multicast operation. replication to network-assisted multicast operation.
6.1.5. Research Questions 6.1.5. Research Questions
In addition to the research questions in Section 3.1.5: In addition to the research questions in Section 3.1.5:
* RQ 6.1.1: What are the communication patterns that may be * RQ 6.1.1: What are the communication patterns that may be
supported by collective communication solutions, where those supported by collective communication solutions, where those
solutions directly utilize (COIN) program instance capabilities solutions directly utilize COIN program instance capabilities
within the network (e.g., reduce in a central (COIN) program within the network (e.g., perform Reduce operations in a central COIN
instance)? program instance)?
* RQ 6.1.2: How to achieve scalable collective communication * RQ 6.1.2: How to achieve scalable collective communication
primitives with rapidly changing receiver sets (e.g., where primitives with rapidly changing receiver sets (e.g., where
training workers may be dynamically selected based on energy training workers may be dynamically selected based on energy
efficiency constraints [GREENAI])? efficiency constraints [GREENAI])?
* RQ 6.1.3: What COIN capabilities may support the collective * RQ 6.1.3: What COIN capabilities may support the collective
communication patterns found in distributed AI problems? communication patterns found in distributed AI problems?
* RQ 6.1.4: How to support AI-specific invocation protocols, such as * RQ 6.1.4: How to support AI-specific invocation protocols, such as
MPI or Remote Direct Memory Access (RDMA)? MPI or Remote Direct Memory Access (RDMA)?
* RQ 6.1.5: What are the constraints for placing (AI) execution * RQ 6.1.5: What are the constraints for placing (AI) execution
logic in the form of (COIN) programs in certain logical execution logic in the form of COIN programs in certain logical execution
points (and their associated physical locations), including PNDs, points (and their associated physical locations), including PNDs,
and how to signal and act upon them? and how to signal and act upon them?
7. Preliminary Categorization of the Research Questions 7. Preliminary Categorization of the Research Questions
This section describes a preliminary categorization of the research This section describes a preliminary categorization of the research
questions illustrated in Figure 4. A more comprehensive analysis has questions illustrated in Figure 4. A more comprehensive analysis has
been initiated by members of the COINRG community in [USE-CASE-AN] been initiated by members of the COINRG community in [USE-CASE-AN]
but has not been completed at the time of writing this memo. but has not been completed at the time of writing this memo.
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+ Applicability Areas + + Applicability Areas +
+ .............................................................+ + .............................................................+
+ Transport | App | Data | Routing & | (Industrial) + + Transport | App | Data | Routing & | (Industrial) +
+ | Design | Processing | Forwarding | Control + + | Design | Processing | Forwarding | Control +
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+ Distributed Computing FRAMEWORKS and LANGUAGES to COIN + + Distributed Computing Frameworks and Languages to COIN +
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+ ENABLING TECHNOLOGIES for COIN + + Enabling Technologies for COIN +
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+--------------------------------------------------------------+ +--------------------------------------------------------------+
+ VISION(S) for COIN + + Vision(s) for COIN +
+--------------------------------------------------------------+ +--------------------------------------------------------------+
Figure 4: Research Questions Categories Figure 4: Research Questions Categories
The "VISION(S) for COIN" category is about defining and shaping the The "Vision(s) for COIN" category is about defining and shaping the
exact scope of COIN. In contrast to the "ENABLING TECHNOLOGIES" exact scope of COIN. In contrast to the "Enabling Technologies"
category, these research questions look at the problem from a more category, these research questions look at the problem from a more
philosophical perspective. In particular, the questions center philosophical perspective. In particular, the questions center
around where to perform computations, which tasks are suitable for around where to perform computations, which tasks are suitable for
COIN, for which tasks COIN is suitable, and which forms of deploying COIN, for which tasks COIN is suitable, and which forms of deploying
COIN might be desirable. This category includes the research COIN might be desirable. This category includes the research
questions 3.1.8, 3.2.1, 3.3.5, 3.3.6, 3.3.7, 5.3.3, 6.1.1, and 6.1.3. questions 3.1.8, 3.2.1, 3.3.5, 3.3.6, 3.3.7, 5.3.3, 6.1.1, and 6.1.3.
The "ENABLING TECHNOLOGIES for COIN" category digs into what The "Enabling Technologies for COIN" category digs into what
technologies are needed to enable COIN, which of the existing technologies are needed to enable COIN, which of the existing
technologies can be reused for COIN, and what might be needed to make technologies can be reused for COIN, and what might be needed to make
the "VISION(S) for COIN" a reality. In contrast to the "VISION(S) the "Vision(s) for COIN" a reality. In contrast to the "Vision(s)
for COIN", these research questions look at the problem from a for COIN", these research questions look at the problem from a
practical perspective (e.g., by considering how COIN can be practical perspective (e.g., by considering how COIN can be
incorporated in existing systems or how the interoperability of COIN incorporated in existing systems or how the interoperability of COIN
execution environments can be enhanced). This category includes the execution environments can be enhanced). This category includes the
research questions 3.1.7, 3.1.8, 3.2.3, 4.2.7, 5.1.1, 5.1.2, 5.1.6, research questions 3.1.7, 3.1.8, 3.2.3, 4.2.7, 5.1.1, 5.1.2, 5.1.6,
5.3.1, 6.1.2, and 6.1.3. 5.3.1, 6.1.2, and 6.1.3.
The "Distributed Computing FRAMEWORKS and LANGUAGES to COIN" category The "Distributed Computing Frameworks and Languages to COIN" category
focuses on how COIN programs can be deployed and orchestrated. focuses on how COIN programs can be deployed and orchestrated.
Central questions arise regarding the composition of COIN programs, Central questions arise regarding the composition of COIN programs,
the placement of COIN functions, the (dynamic) operation and the placement of COIN functions, the (dynamic) operation and
integration of COIN systems as well as additional COIN system integration of COIN systems as well as additional COIN system
properties. Notably, COIN diversifies general distributed computing properties. Notably, COIN diversifies general distributed computing
platforms such that many COIN-related research questions could also platforms such that many COIN-related research questions could also
apply to general distributed computing frameworks. This category apply to general distributed computing frameworks. This category
includes the research questions 3.1.1, 3.2.4, 3.3.1, 3.3.2, 3.3.3, includes the research questions 3.1.1, 3.2.4, 3.3.1, 3.3.2, 3.3.3,
3.3.5, 4.1.1, 4.1.4, 4.1.5, 4.1.8, 4.2.1, 4.2.4, 4.2.5, 4.2.6, 4.3.3, 3.3.5, 4.1.1, 4.1.4, 4.1.5, 4.1.8, 4.2.1, 4.2.4, 4.2.5, 4.2.6, 4.3.3,
5.2.1, 5.2.2, 5.2.3, 5.2.5, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, and 5.2.1, 5.2.2, 5.2.3, 5.2.5, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, and
skipping to change at line 1869 skipping to change at line 1872
systems typically work on unencrypted data and often customize packet systems typically work on unencrypted data and often customize packet
payload, while concepts such as homomorphic encryption could serve as payload, while concepts such as homomorphic encryption could serve as
workarounds, allowing PNDs to perform simple operations on the workarounds, allowing PNDs to perform simple operations on the
encrypted data without having access to it. All these approaches encrypted data without having access to it. All these approaches
introduce the same or very similar security implications as any introduce the same or very similar security implications as any
middlebox operating on unencrypted traffic or having access to middlebox operating on unencrypted traffic or having access to
encryption: a middlebox can itself have malicious intentions (e.g., encryption: a middlebox can itself have malicious intentions (e.g.,
because it got compromised or the deployment of functionality offers because it got compromised or the deployment of functionality offers
new attack vectors to outsiders). new attack vectors to outsiders).
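To illustrate the homomorphic-encryption workaround mentioned above, the following toy Paillier example lets an intermediary combine two encrypted values without being able to read either of them. The textbook parameters are far too small for real use; this is an illustrative assumption, not a recommendation.

   import math, random                  # requires Python 3.9+

   p, q = 101, 103                      # insecure toy primes
   n, n2 = p * q, (p * q) ** 2
   g, lam = n + 1, math.lcm(p - 1, q - 1)
   mu = pow(lam, -1, n)                 # valid because g = n + 1

   def encrypt(m):
       r = random.randrange(1, n)
       while math.gcd(r, n) != 1:
           r = random.randrange(1, n)
       return (pow(g, m, n2) * pow(r, n, n2)) % n2

   def decrypt(c):
       return (pow(c, lam, n2) - 1) // n * mu % n

   a, b = 42, 58
   c_sum = (encrypt(a) * encrypt(b)) % n2   # computed without seeing a or b
   assert decrypt(c_sum) == a + b           # additively homomorphic

Even with such a scheme, the intermediary still observes traffic patterns and can misbehave in other ways, so the considerations below remain relevant.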
However, similar to middlebox deployments, risks for privacy and data However, similar to middlebox deployments, risks for privacy and the
exposure have to be carefully considered in the context of the risk of data exposure have to be carefully considered in the context
concrete deployment. For example, exposing data to an external of the concrete deployment. For example, exposing data to an
operator for mobile application offloading leads to a significant external operator for mobile application offloading leads to a
privacy loss of the user in any case. In contrast, such privacy significant privacy loss of the user in any case. In contrast, such
considerations are not as relevant for COIN systems where all privacy considerations are not as relevant for COIN systems where all
involved entities are under the same control, such as in an involved entities are under the same control, such as in an
industrial context. Here, exposed data and functionality can instead industrial context. Here, exposed data and functionality can instead
lead to stolen business secrets or the enabling of DoS attacks, for lead to stolen business secrets or the enabling of DoS attacks, for
example. Hence, even in fully controlled scenarios, COIN example. Hence, even in fully controlled scenarios, COIN
intermediaries, and middleboxes in general, are ideally operated in a intermediaries, and middleboxes in general, are ideally operated in a
least-privilege mode, where they have exactly those permissions to least-privilege mode, where they have exactly those permissions to
read and alter payload that are necessary to fulfill their purpose. read and alter payload that are necessary to fulfill their purpose.
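A minimal sketch of this least-privilege idea is shown below: the intermediary receives an explicit per-field permission set and every access outside of it is refused. The field names and the policy format are assumptions for illustration only.

   permissions = {"sensor_value": {"read"},            # may inspect
                  "aggregate":    {"read", "alter"}}   # may rewrite

   def guarded_access(field, action, payload, new_value=None):
       if action not in permissions.get(field, set()):
           raise PermissionError(f"{action} on '{field}' not granted")
       if action == "alter":
           payload[field] = new_value
       return payload[field]

   packet = {"sensor_value": 97, "aggregate": 95, "user_id": "alice"}
   guarded_access("sensor_value", "read", packet)       # allowed
   guarded_access("aggregate", "alter", packet, 97)     # allowed
   # guarded_access("user_id", "read", packet)          # -> PermissionError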
Research on granting middleboxes access to secured traffic is only in Research on granting middleboxes access to secured traffic is only in
its infancy, and a variety of different approaches are proposed and its infancy, and a variety of different approaches are proposed and
skipping to change at line 1915 skipping to change at line 1918
process. Moreover, such deployments can become central entities process. Moreover, such deployments can become central entities
that, if paralyzed (e.g., through excessive requests), can be that, if paralyzed (e.g., through excessive requests), can be
responsible for large-scale outages. In particular, some deployments responsible for large-scale outages. In particular, some deployments
could be used to amplify DoS attacks. Similar to other middlebox could be used to amplify DoS attacks. Similar to other middlebox
deployments, these potential risks must be considered when deploying deployments, these potential risks must be considered when deploying
COIN functionality and may influence the selection of suitable COIN functionality and may influence the selection of suitable
security protocols. security protocols.
Additional system-level security considerations may arise from Additional system-level security considerations may arise from
regulatory requirements imposed on COIN systems overall, stemming regulatory requirements imposed on COIN systems overall, stemming
from regulation regarding, lawful interception, data localization, or from regulation regarding lawful interception, data localization, or
AI use, for example. These requirements may impact, for example, the AI use, for example. These requirements may impact, for example, the
manner in which (COIN) programs may be placed or executed in the manner in which COIN programs may be placed or executed in the
overall system, who can invoke certain (COIN) programs in what PND or overall system, who can invoke certain COIN programs in what PND or
COIN device, and what type of (COIN) program can be run. These COIN device, and what type of COIN program can be run. These
considerations will impact the design of the possible implementing considerations will impact the design of the possible implementing
protocols but also the policies that govern the execution of (COIN) protocols but also the policies that govern the execution of COIN
programs. programs.
9. IANA Considerations 9. IANA Considerations
This document has no IANA actions. This document has no IANA actions.
10. Conclusion 10. Conclusion
This document presents use cases gathered from several application This document presents use cases gathered from several application
domains that can and could profit from capabilities that are provided domains that can and could profit from capabilities that are provided
skipping to change at line 2026 skipping to change at line 2029
server-load-balancing-gslb/>. server-load-balancing-gslb/>.
[ICN-5GC] Ravindran, R., Suthar, P., Trossen, D., Wang, C., and G. [ICN-5GC] Ravindran, R., Suthar, P., Trossen, D., Wang, C., and G.
White, "Enabling ICN in 3GPP's 5G NextGen Core White, "Enabling ICN in 3GPP's 5G NextGen Core
Architecture", Work in Progress, Internet-Draft, draft- Architecture", Work in Progress, Internet-Draft, draft-
ravi-icnrg-5gc-icn-04, 31 May 2019, ravi-icnrg-5gc-icn-04, 31 May 2019,
<https://datatracker.ietf.org/doc/html/draft-ravi-icnrg- <https://datatracker.ietf.org/doc/html/draft-ravi-icnrg-
5gc-icn-04>. 5gc-icn-04>.
[ICN-5GLAN] [ICN-5GLAN]
Trossen, D., Wang, C., Robitzsch, S., Reed, M., AL-Naday, Trossen, D., Robitzsch, S., Essex, U., AL-Naday, M., and
M., and J. Riihijarvi, "IP-based Services over ICN in 5G J. Riihijarvi, "Internet Services over ICN in 5G LAN
LAN Environments", Work in Progress, Internet-Draft, Environments", Work in Progress, Internet-Draft, draft-
draft-trossen-icnrg-ip-icn-5glan-00, 6 June 2019, trossen-icnrg-internet-icn-5glan-04, 1 October 2020,
<https://datatracker.ietf.org/doc/html/draft-trossen- <https://datatracker.ietf.org/doc/html/draft-trossen-
icnrg-ip-icn-5glan-00>. icnrg-internet-icn-5glan-04>.
[KUNZE-APPLICABILITY] [KUNZE-APPLICABILITY]
Kunze, I., Glebke, R., Scheiper, J., Bodenbenner, M., Kunze, I., Glebke, R., Scheiper, J., Bodenbenner, M.,
Schmitt, R., and K. Wehrle, "Investigating the Schmitt, R., and K. Wehrle, "Investigating the
Applicability of In-Network Computing to Industrial Applicability of In-Network Computing to Industrial
Scenarios", 2021 4th IEEE International Conference on Scenarios", 2021 4th IEEE International Conference on
Industrial Cyber-Physical Systems (ICPS), pp. 334-340, Industrial Cyber-Physical Systems (ICPS), pp. 334-340,
DOI 10.1109/icps49255.2021.9468247, May 2021, DOI 10.1109/icps49255.2021.9468247, May 2021,
<https://doi.org/10.1109/icps49255.2021.9468247>. <https://doi.org/10.1109/icps49255.2021.9468247>.
skipping to change at line 2129 skipping to change at line 2132
[RÜTH] Rüth, J., Glebke, R., Wehrle, K., Causevic, V., and S. [RÜTH] Rüth, J., Glebke, R., Wehrle, K., Causevic, V., and S.
Hirche, "Towards In-Network Industrial Feedback Control", Hirche, "Towards In-Network Industrial Feedback Control",
Proceedings of the 2018 Morning Workshop on In-Network Proceedings of the 2018 Morning Workshop on In-Network
Computing, pp. 14-19, DOI 10.1145/3229591.3229592, August Computing, pp. 14-19, DOI 10.1145/3229591.3229592, August
2018, <https://doi.org/10.1145/3229591.3229592>. 2018, <https://doi.org/10.1145/3229591.3229592>.
[SA2-5GLAN] [SA2-5GLAN]
3GPP-5glan, "SP-181129, Work Item Description, 3GPP-5glan, "SP-181129, Work Item Description,
Vertical_LAN(SA2), 5GS Enhanced Support of Vertical and Vertical_LAN(SA2), 5GS Enhanced Support of Vertical and
LAN Services", 3GPP , 2021, LAN Services", 3GPP , 2021,
<http://www.3gpp.org/ftp/tsg_sa/TSG_SA/Docs/SP- <https://www.3gpp.org/ftp/tsg_sa/TSG_SA/TSGS_82/Docs/SP-
181120.zip>. 181120.zip>.
[SarNet2021] [SarNet2021]
Glebke, R., Trossen, D., Kunze, I., Lou, D., Ruth, J., Glebke, R., Trossen, D., Kunze, I., Lou, D., Ruth, J.,
Stoffers, M., and K. Wehrle, "Service-based Forwarding via Stoffers, M., and K. Wehrle, "Service-based Forwarding via
Programmable Dataplanes", 2021 IEEE 22nd International Programmable Dataplanes", 2021 IEEE 22nd International
Conference on High Performance Switching and Routing Conference on High Performance Switching and Routing
(HPSR), pp. 1-8, DOI 10.1109/hpsr52026.2021.9481814, June (HPSR), pp. 1-8, DOI 10.1109/hpsr52026.2021.9481814, June
2021, <https://doi.org/10.1109/hpsr52026.2021.9481814>. 2021, <https://doi.org/10.1109/hpsr52026.2021.9481814>.
skipping to change at line 2159 skipping to change at line 2162
Rodriguez, P., and P. Steenkiste, "Multi-Context TLS Rodriguez, P., and P. Steenkiste, "Multi-Context TLS
(mcTLS): Enabling Secure In-Network Functionality in TLS", (mcTLS): Enabling Secure In-Network Functionality in TLS",
ACM SIGCOMM Computer Communication Review, vol. 45, no. 4, ACM SIGCOMM Computer Communication Review, vol. 45, no. 4,
pp. 199-212, DOI 10.1145/2829988.2787482, August 2015, pp. 199-212, DOI 10.1145/2829988.2787482, August 2015,
<https://doi.org/10.1145/2829988.2787482>. <https://doi.org/10.1145/2829988.2787482>.
[Stoyanov] Stoyanov, R. and N. Zilberman, "MTPSA: Multi-Tenant [Stoyanov] Stoyanov, R. and N. Zilberman, "MTPSA: Multi-Tenant
Programmable Switches", Proceedings of the 3rd P4 Workshop Programmable Switches", Proceedings of the 3rd P4 Workshop
in Europe, pp. 43-48, DOI 10.1145/3426744.3431329, in Europe, pp. 43-48, DOI 10.1145/3426744.3431329,
December 2020, December 2020,
<https://eng.ox.ac.uk/media/6354/stoyanov2020mtpsa.pdf>. <https://dl.acm.org/doi/10.1145/3426744.3431329>.
[Sultana] Sultana, N., Sonchack, J., Giesen, H., Pedisich, I., Han, [Sultana] Sultana, N., Sonchack, J., Giesen, H., Pedisich, I., Han,
Z., Shyamkumar, N., Burad, S., DeHon, A., and B. T. Loo, Z., Shyamkumar, N., Burad, S., DeHon, A., and B. T. Loo,
"Flightplan: Dataplane Disaggregation and Placement for P4 "Flightplan: Dataplane Disaggregation and Placement for P4
Programs", Programs",
<https://flightplan.cis.upenn.edu/flightplan.pdf>. <https://flightplan.cis.upenn.edu/flightplan.pdf>.
[TIRPITZ-REDUCIO]
Tirpitz, L., Kunze, I., Niemietz, P., Gerhardus, A. K.,
Bergs, T., Wehrle, K., and S. Geisler, "Reducio: Data
Aggregation and Stability Detection for Industrial
Processes Using In-Network Computing", DEBS '25:
Proceedings of the 19th ACM International Conference on
Distributed and Event-based Systems, pp. 98-109,
DOI 10.1145/3701717.3730547, June 2025,
<https://doi.org/10.1145/3701717.3730547>.
[TLSSURVEY] [TLSSURVEY]
de Carné de Carnavalet, X. and P. van Oorschot, "A Survey de Carné de Carnavalet, X. and P. van Oorschot, "A Survey
and Analysis of TLS Interception Mechanisms and and Analysis of TLS Interception Mechanisms and
Motivations: Exploring how end-to-end TLS is made 'end-to- Motivations: Exploring how end-to-end TLS is made 'end-to-
me' for web traffic", ACM Computing Surveys, vol. 55, no. me' for web traffic", ACM Computing Surveys, vol. 55, no.
13s, pp. 1-40, DOI 10.1145/3580522, July 2023, 13s, pp. 1-40, DOI 10.1145/3580522, July 2023,
<https://doi.org/10.1145/3580522>. <https://doi.org/10.1145/3580522>.
[TOSCA] Rutkowski, M., Ed., Lauwers, C., Ed., Noshpitz, C., Ed., [TOSCA] Rutkowski, M., Ed., Lauwers, C., Ed., Noshpitz, C., Ed.,
and C. Curescu, Ed., "TOSCA Simple Profile in YAML Version and C. Curescu, Ed., "TOSCA Simple Profile in YAML Version
skipping to change at line 2242 skipping to change at line 2255
Email: kunze@comsys.rwth-aachen.de Email: kunze@comsys.rwth-aachen.de
Klaus Wehrle Klaus Wehrle
RWTH Aachen University RWTH Aachen University
Ahornstr. 55 Ahornstr. 55
D-52074 Aachen D-52074 Aachen
Germany Germany
Email: wehrle@comsys.rwth-aachen.de Email: wehrle@comsys.rwth-aachen.de
Dirk Trossen Dirk Trossen
Huawei Technologies Duesseldorf GmbH DaPaDOT Tech UG (haftungsbeschränkt)
Riesstr. 25C Palestrinastr. 7A
D-80992 Munich D-80639 Munich
Germany Germany
Email: Dirk.Trossen@Huawei.com Email: dirk@dapadot-tech.eu
Marie-Jose Montpetit Marie-Jose Montpetit
McGill University SLICES-RI
680 Sherbrooke Street W. Paris
Montreal H3A 3R1 France
Canada Email: marie-jose.montpetit@slices-ri.eu
Email: marie-jose.montpetit@mcgill.ca
Xavier de Foy Xavier de Foy
InterDigital Communications, LLC InterDigital Communications, LLC
1000 Sherbrooke West 1000 Sherbrooke West
Montreal H3A 3G4 Montreal H3A 3G4
Canada Canada
Email: xavier.defoy@interdigital.com Email: xavier.defoy@interdigital.com
David Griffin David Griffin
University College London University College London
 End of changes. 102 change blocks. 
241 lines changed or deleted 253 lines changed or added

This html diff was produced by rfcdiff 1.48.