Due to recent technological advances, the deployment of service chains on computational resources spanning from the cloud to the edge has become a reality, creating a continuum of virtual resources. However, next-generation applications impose stringent requirements that current networks cannot support. Bandwidth requirements for extended reality applications will rise well above 1 Tbps, while their interactive experiences demand sub-millisecond latency. Similarly, autonomous vehicles need ultra-low-latency communications with reliability levels of up to 99.99999%. These upcoming applications call for considerable advancements towards cloud-native service-based architectures. Efficient orchestration strategies have thus become even more important, and machine learning methods are viewed as a potential solution capable of dynamically meeting the real-time requirements of these applications. However, further research is needed to confirm that such methods can indeed replace existing mechanisms. In addition, novel networking paradigms have opened several possibilities for improving network performance, including higher flexibility and scalability. This research project tackles this challenge by advancing distributed cloud-native infrastructures for low-latency service delivery, integrating novel orchestration practices (i.e., Reinforcement Learning) with recent networking trends (i.e., Segment Routing and Intent-based Networking).