ARDIA Framework and AI-enhanced Service Orchestration

The complexity and heterogeneity of distributed computing infrastructures pose significant challenges for management and service deployment. As these environments grow ever more complex, the development of intelligent and self-managed orchestration mechanisms is key to optimizing resource utilization for present and future workloads.

ARDIA (A Resource reference model for Data-Intensive Applications) provides the abstractions and common description models required to promote resource interoperability, transparent deployment, and mobility of applications’ workloads and data across the overall computing continuum. These abstractions facilitate the automatic and transparent combination of hardware and software resources across multiple locations based on the applications’ needs.
ARDIA follows a design built around three distinct dimensions. The first is the lifecycle dimension, which captures the stages that heterogeneous resources undergo: creation, operation, evolution, dissolution and metamorphosis. The second is the environment dimension, which covers the structure of deployments and their relationships, the components or resources, the processes within an application, and the behaviour with regard to principles, policies or rules. The third is the intent dimension, which covers SERRANO’s support for higher levels of abstraction in service definitions that are translated into automated and proactive adjustments based on service requirements.
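
To make the three dimensions more concrete, the sketch below models them as simple Python data structures. This is an illustrative assumption, not the official ARDIA schema: all class and field names (LifecycleStage, EnvironmentView, Intent, ArdiaResourceDescription) are hypothetical and only meant to show how a resource description could combine lifecycle, environment and intent information.

```python
# Illustrative sketch only: the class and field names below are assumptions,
# not the official ARDIA schema.
from dataclasses import dataclass, field
from enum import Enum


class LifecycleStage(Enum):
    """Lifecycle dimension: stages a heterogeneous resource undergoes."""
    CREATION = "creation"
    OPERATION = "operation"
    EVOLUTION = "evolution"
    DISSOLUTION = "dissolution"
    METAMORPHOSIS = "metamorphosis"


@dataclass
class EnvironmentView:
    """Environment dimension: deployment structure, components, processes, behaviour."""
    structure: dict[str, list[str]]          # deployment units and their relationships
    components: list[str]                    # hardware/software resources involved
    processes: list[str]                     # processes within the application
    policies: list[str] = field(default_factory=list)  # governing principles or rules


@dataclass
class Intent:
    """Intent dimension: high-level service objectives to be translated into actions."""
    objective: str                                              # e.g. "latency_sensitive_inference"
    requirements: dict[str, str] = field(default_factory=dict)  # e.g. {"max_latency_ms": "50"}


@dataclass
class ArdiaResourceDescription:
    """A resource description combining ARDIA's three dimensions."""
    name: str
    lifecycle: LifecycleStage
    environment: EnvironmentView
    intent: Intent
```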

The AI-enhanced Service Orchestrator (AI-SO) analyses each application’s needs and automatically determines the most appropriate platform type for deployment. Moreover, it translates the high-level requirements into specific infrastructure-related operational constraints and orchestration objectives, using neural networks and intelligent AI/ML forecasting methods (including deep reinforcement learning techniques) and also taking into account the telemetry data collected so far. The individual resource and performance requirements are then provided to the Resource Orchestrator (Figure 1), which allocates the appropriate edge, cloud and HPC resources and coordinates the necessary supplemental actions (e.g. transferring the required data) so that the application requirements specified by the user are satisfied.
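
The following sketch illustrates the input/output contract of this translation step only. In SERRANO the mapping is produced by neural networks and AI/ML forecasting over telemetry; here a simple heuristic stands in for that model, and all names, fields and thresholds (ApplicationRequest, OrchestrationPlan, translate_requirements, the latency and size cut-offs) are hypothetical.

```python
# Simplified illustration: a heuristic stands in for the AI-SO's trained models;
# all names and thresholds below are assumptions, not the SERRANO implementation.
from dataclasses import dataclass


@dataclass
class ApplicationRequest:
    name: str
    max_latency_ms: float        # high-level user requirement
    dataset_size_gb: float
    compute_intensity: str       # "low" | "medium" | "high"


@dataclass
class OrchestrationPlan:
    platform: str                # "edge" | "cloud" | "hpc"
    constraints: dict            # infrastructure-level operational constraints
    supplemental_actions: list   # e.g. data transfers


def translate_requirements(req: ApplicationRequest, telemetry: dict) -> OrchestrationPlan:
    """Map high-level application needs to a platform choice and low-level constraints.

    A trained forecasting model would normally produce this mapping from telemetry
    history; the rules below only illustrate the shape of the inputs and outputs.
    """
    if req.compute_intensity == "high" and req.dataset_size_gb > 100:
        platform = "hpc"
    elif req.max_latency_ms < 20:
        platform = "edge"
    else:
        platform = "cloud"

    constraints = {
        "cpu_cores_min": 8 if req.compute_intensity == "high" else 2,
        "latency_budget_ms": req.max_latency_ms,
        "observed_load": telemetry.get("avg_cluster_load", 0.0),
    }
    actions = ["transfer_dataset"] if platform != "edge" else []
    return OrchestrationPlan(platform, constraints, actions)


# The resulting plan would then be handed to the Resource Orchestrator,
# which allocates the matching edge, cloud or HPC resources.
plan = translate_requirements(
    ApplicationRequest("genomics-pipeline", max_latency_ms=500,
                       dataset_size_gb=250, compute_intensity="high"),
    telemetry={"avg_cluster_load": 0.42},
)
print(plan.platform, plan.constraints)
```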

Figure 1
