The blending of network connectivity and advanced computing capabilities, both in the cloud as well as at the network edge, paves the way to the advent of self-driving networks, thanks to a comprehensive and data-rich view of the underlying network components. In this talk we first cover achievements of the current AI-assisted cloud-native architecture, which already empowers operators to deploy networks able to automatically adapt according to high-level goals, and next discuss challenges to move to, as well as benefits coming from, a fully AI-native network architecture. In particular, the talk will overview several key ingredients of network AI (explainable, automated, deployable and sustainable), with particular emphasis on the explainability/XAI angle.
The blending of network connectivity and advanced computing capabilities, both in the cloud as well as at the network edge, paves the way to the advent of self-driving networks, thanks to a comprehensive and data-rich view of the underlying network components. While the current AI-assisted cloud-native architecture already empowers operators to automate part of the network tasks, challenges remain to more systematically exploit AI in networks. In particular, the talk will emphasize the need for explainability capabilities (XAI), and walk through a gallery of XAI methods on the concrete networking use-case of traffic classification, including recent work that will appear at KDD’23.
Reviewing the need for explainable AI models, especially in light of upcoming regulation (the European AI Act), and comparing explainable AI methods on concrete network-related use-cases pertaining to traffic management.
Moderated by D. Rossi (Huawei, France), panelists
Albert Cabellos (UPC), Georg Carle (TUM),
Laurent Ciavaglia (Rakuten), Erol Gelenbe (IITiS) and Diego Perino (T+D)
will discuss emerging trends, challenges and opportunities in making AI a first-class citizen of the future network architecture, where AI is no longer an afterthought (as in +AI) but is rather the starting point of the equation (aka AI+), leading to the confluence of networking and AI and a more intertwined evolution path.
The blending of network connectivity and advanced computing capabilities, both in the cloud as well as at the network edge, paves the way to the advent of self-driving networks, thanks to a comprehensive and data-rich view of the underlying network components. In this talk we first cover achievements of the current AI-assisted cloud-native architecture, which already empowers operators to deploy networks able to automatically adapt according to high-level goals, and next discuss challenges to move to, as well as benefits coming from, a fully AI-native network architecture.
In the session launching Tech Arena – Huawei’s global tech competition for higher education students – we present the Huawei R&D activities in France, with special attention to networking and AI.
Panel experts S. Banerjee (Director of Research, VMware, USA) S. Majumdar (Intel AI Lab, USA), D. Rossi (Huawei, France) and D. Pei (Tsinghua University, China) will discuss Emerging Trends in AI/ML and Implications for Networking Research, in a panel organized by A. Gosain (Northeastern University, USA) and moderated by N. Himayat (Intel Labs).
ICT Infrastructures and the future Internet, including 5G-and-beyond technologies, NFV, IoT, and Cloud/Edge, are the main enabling factors contributing to the digital transformation of our society. Their design, deployment and operation are critical, calling for a scientific instrument to support research in this domain for computer science and infrastructure researchers, as well as for data-driven scientific applications involving interdisciplinary aspects. In this talk, we will cover how the SLICES Research Infrastructure (RI) can benefit AI research, and further expose the constraints and requirements that AI-empowered algorithms impose on the SLICES RI, to help steer the next generation of ICT Research Infrastructures.
We selected distinguished experts from industry and academia with broad expertise in the research area of networking, who will provide us with interesting insights.
Moderated by Filip De Turck (Ghent University-imec, Belgium), the expert panel composed of Chih-Lin I (China Mobile, China), Dario Rossi (Huawei, France), Hanan Lutfiyya (The University of Western Ontario, Canada) and
Giuseppe Bianchi (University of Roma Tor Vergata, Italy) will focus on the identification and discussion of the challenges to be tackled during the next decade.
We expect lively discussions and count on all IFIP Networking 2020 participants (PhD students and experienced researchers from industry and academia) to ask questions and share their viewpoints.
The World Wide Web is still among the most prominent Internet applications. While the Web landscape has been in perpetual movement since the very beginning, these last few years have witnessed some noteworthy proposals such as SPDY, HTTP/2 and QUIC, which profoundly reshape the application-layer protocols family. To measure the impact of such changes, going beyond the classic W3C notion of page load time, a number of Web performance metrics have been proposed (such as SpeedIndex, Above-The-Fold and variants). At the same time, there is still limited understanding of how these metrics correlate with user perception (e.g., user ratings, user-perceived page load time, etc.). In this talk, we discuss the state of the art in metrics and models for Web performance evaluation, and their correlation with user experience through several real-world studies. Additional information, software and datasets are available at https://webqoe.telecom-paristech.fr
In this talk we discuss challenges and opportunities when Artificial Intelligence is used to increase automation of each phase of the network lifecycle, including network and service configuration, fault detection and repair, and control & optimization of network resources.
The ecosystem of Internet and enterprise network applications has always been changing at a very fast pace: as applications are essentially pieces of software, this allows the fast introduction of new killer applications in the ecosystem, the extinction of others, and a continuous evolution
of the remaining ones. As the ultimate goal of any application is to offer some kind of entertainment or a business service to its end users, the objective measurement of the quality of experience (QoE) delivered to the users has been a quite active research field. From the network viewpoint, an accurate measurement of the user QoE empowers the infrastructure with the ability to more effectively control the usage, and better arbitrate the sharing, of its available resources: going beyond QoS management, which can at most improve network efficiency, QoE management allows improving the benefits perceived by the users.
While QoE-driven network management is a desirable objective, it also raises significant challenges. Clearly, QoE management can improve over classic QoS management only as long as the QoE inference process is accurate. However, QoE inference is complex due to continuous protocol evolution, application changes and related trends such as traffic encryption. As such, the use of techniques such as machine learning to define data-driven QoE models is as appealing as it is challenging. One of the reasons lies in the fact that QoE estimation needs to involve humans in the learning loop, to provide useful “labels” as input to the learning algorithm. The process to collect these labels is cumbersome, and exposed to a range of human behaviors – which can at best be described with adjectives such as unexpected, random, funny, counter-intuitive, or adversarial. Yet, humans are key in this QoE loop, of which they are both the starting point and the ultimate goal.
In this keynote, we discuss these challenges taking an evergreen Internet application (namely, Web browsing) as the main leitmotif, to provide examples of practical relevance.
[STW-19] Rossi, Dario, “Shedding a (deep learning) light on the operational obscurity of nowadays encrypted traffic”, Huawei Strategic Technical Workshop (STW’19), May 2019. Invited.
Often, advances in hardware have been at the base of the success of new computing paradigms, algorithms and techniques. This is, e.g., what might happen in the future for quantum computers, and what has recently happened in the field of Artificial Intelligence (AI), and Neural Networks in particular, whose potential has been fully unleashed by the commoditization of general-purpose GPUs.
In this keynote, we first introduce recent hardware advances, namely a new family of specialized architectures that are promising enablers for a deeper integration of AI at all network segments (particularly at the edge) and layers of the stack. We next discuss challenges and opportunities that are specific to the networking domain, putting them in perspective with advances in other fields.