Note: This page lists all seminars organized over the years with colleagues at Telecom ParisTech and more recently at LINCS. Having served the seminar for more than 10 years, I happily passed the honour on to other colleagues, so this page is mostly kept for historical reasons.
To cope with blockchain inconsistencies, like double-spending, developers started building upon Byzantine fault-tolerant (BFT) consensus. At first sight this seems reasonable, because consensus can be effective at totally ordering transactions into a chain.
Unfortunately, these two problems are different. In short, the blockchain problem aims at totally ordering blocks of transactions issued by a large set of potentially misbehaving internet machines, whereas the consensus problem aims at deciding upon one of the values proposed by typically fewer but non-misbehaving machines.
In this talk, I will describe the blockchain consensus problem, a variant of classic consensus that was recently defined for blockchains. I will present the Red Belly Blockchain, a blockchain that does not use BFT consensus as a black box but rather solves the blockchain consensus problem, and I will illustrate empirically the benefits it offers.
Vincent Gramoli is the Head of the Concurrent Systems Research Group at the University of Sydney and a Senior Researcher at Data61-CSIRO. He received his PhD from University of Rennes and his Habilitation from UPMC Sorbonne University. Prior to this, he was affiliated with Cornell University and EPFL. His research interest is in distributed computing.
Next generation wireless architectures are expected to enable slices of shared wireless infrastructure which are customized to specific mobile operators/services. Given infrastructure costs and the stochastic nature of mobile services' spatial loads, it is highly desirable to achieve efficient statistical multiplexing amongst network slices. This talk will introduce a simple dynamic resource sharing policy which allocates a 'share' of a pool of (distributed) resources to each slice: Share Constrained Proportionally Fair (SCPF). We give a characterization of the achievable performance gains over static slicing, showing higher gains when a slice's spatial load is more 'imbalanced' than, and/or 'orthogonal' to, the aggregate network load. Under SCPF, traditional network dimensioning translates to a coupled share dimensioning problem, which addresses the existence of a feasible share allocation given slices' expected loads and performance requirements. We provide a solution to robust share dimensioning for SCPF-based network slicing. Slices may wish to unilaterally manage their users' performance via admission control which maximizes their carried loads subject to performance requirements. We show this can be modeled as a 'traffic shaping' game with an achievable Nash equilibrium. Under high loads the equilibrium is explicitly characterized, as are the gains in carried load under SCPF vs. static slicing. Detailed simulations of a wireless infrastructure supporting multiple slices with heterogeneous mobile loads show the fidelity of our models and the range of validity of our high-load equilibrium analysis.
Joint work with Pablo Caballero, Jiaxiao Zheng, Albert Banchs, Xavier Costa, Seung Jun Baek
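To make the share-constrained policy concrete, here is a minimal numerical sketch of one plausible reading of SCPF, assuming each slice splits its network-wide share equally among its active users and each base station then divides its capacity in proportion to the resulting user weights. The slice names, shares and per-station user counts below are hypothetical.

```python
# Hypothetical illustration of Share Constrained Proportionally Fair (SCPF) sharing.
# Assumption: slice s splits its share w_s equally among its n_s active users, and each
# base station divides its capacity in proportion to the weights of the users it serves.

shares = {"sliceA": 0.6, "sliceB": 0.4}             # network-wide shares (sum to 1)
users_per_bs = {                                     # active users of each slice per base station
    "bs1": {"sliceA": 8, "sliceB": 2},
    "bs2": {"sliceA": 1, "sliceB": 9},
}

def scpf_allocation(shares, users_per_bs):
    totals = {s: sum(bs[s] for bs in users_per_bs.values()) for s in shares}
    alloc = {}
    for bs, counts in users_per_bs.items():
        # weight of slice s at this base station: share per user times local user count
        weights = {s: shares[s] / totals[s] * counts[s] for s in shares if totals[s] > 0}
        total_w = sum(weights.values())
        alloc[bs] = {s: w / total_w for s, w in weights.items()}  # fraction of bs capacity
    return alloc

print(scpf_allocation(shares, users_per_bs))
# A slice whose load is concentrated where the other slices are light grabs a larger
# fraction there, which is the source of the statistical multiplexing gain over static slicing.
```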
We propose a novel algorithm for the hierarchical clustering of graphs. The algorithm is a simple but key modification of the Louvain algorithm, based on a sliding-resolution scheme. In particular, the most relevant clusterings, corresponding to different resolutions, are identified. The algorithm is parameter-free and has a complexity of O(m log n), where n is the number of nodes and m the number of edges. Numerical experiments on both synthetic and real datasets show the accuracy and scalability of the algorithm. Joint work with Thomas Bonald (Telecom ParisTech), Alexandre Hollocou (Inria) and Alexis Galland (Inria).
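The sliding-resolution idea can be mimicked with off-the-shelf tools. The sketch below is not the authors' algorithm; it simply sweeps the resolution parameter of the standard Louvain implementation shipped with networkx (version 2.8 or later assumed) to expose clusterings at several scales.

```python
# Rough approximation of multi-resolution clustering using networkx's Louvain implementation.
# Not the parameter-free algorithm of the talk, just an illustration of how varying the
# resolution reveals a hierarchy of clusterings.
import networkx as nx

G = nx.karate_club_graph()  # small benchmark graph

hierarchy = {}
for resolution in (0.25, 0.5, 1.0, 2.0, 4.0):
    parts = nx.community.louvain_communities(G, resolution=resolution, seed=42)
    hierarchy[resolution] = parts
    print(f"resolution={resolution}: {len(parts)} clusters")

# Lower resolutions merge communities (coarse clusterings), higher resolutions split them
# (fine clusterings); levels that remain stable over a range of resolutions are the natural
# candidates for "relevant" clusterings.
```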
Apple and Samsung have been fighting patent battles around the world. Come learn about the mathematics at the heart of one of these battles, the error correcting codes used in 3G communication. We will give a gentle introduction to coding theory, explain why this caused a legal battle, and we will conclude by describing why President Obama ultimately vetoed the ruling by the court (the first time a president had used that veto power in nearly 30 years!).
Jim Davis, Professor of Mathematics at the University of Richmond since 1988, does research in combinatorics and error correcting codes. He spent two years working for Hewlett-Packard and he has 15 patents stemming from that work. He has published more than 50 papers, including several with undergraduates as coauthors (one of those also has Sihem Mesnager as coauthor). Outside of mathematics, he is an avid squash and bridge player.
In this presentation, a completely revisited secure data storage and transmission scheme is presented. The new scheme is based on an agnostic selective encryption method combined with fragmentation and dispersion. The Discrete Wavelet Transform (DWT) is used as the preprocessing step that drives the selection process before encryption and guarantees the agnostic property of the protection. A General Purpose Graphics Processing Unit (GPGPU) is also used, making the performance very competitive compared with full encryption algorithms. During the protection process, the data is fragmented into private fragments requiring little storage space and public fragments requiring large storage space. A dispersion step then distributes the fragments to different storage environments, aiming to exploit convenient and inexpensive cloud storage space while preserving the security of the data. This design, validated by practical experiments on different hardware configurations, provides a strong level of protection and good performance at the same time, together with flexible storage dispersion schemes.
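As a toy sketch of the selection idea (our illustration, not the authors' exact selection rule): run a DWT, treat the small low-frequency approximation sub-band as the "private" fragment to be encrypted and stored securely, and the much larger detail sub-bands as the "public" fragments to be dispersed.

```python
# Toy illustration of DWT-based selective protection. The approximation sub-band becomes
# the small "private" fragment (to be encrypted), the detail sub-bands become the large
# "public" fragments (dispersed to cheap storage). Not the authors' exact scheme.
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 256)
image = 255.0 * np.outer(x, x)                      # smooth stand-in for real data

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")         # single-level 2D DWT

private_fragment = cA                               # 25% of the coefficients
public_fragments = (cH, cV, cD)                     # remaining 75%

energy = lambda a: float(np.sum(a ** 2))
total_energy = sum(map(energy, [cA, cH, cV, cD]))
print("private share of coefficients: %.0f%%" % (100 * cA.size / image.size))
print("share of signal energy in private fragment: %.1f%%" % (100 * energy(cA) / total_energy))
# For smooth data almost all of the information ends up in the small private fragment,
# which is what makes selective encryption attractive compared with full encryption.
```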
Understanding the performance of a pool of servers is crucial for proper dimensioning. One of the main challenges is to take into account the complex interactions between servers that are pooled to process jobs. In particular, a job can generally not be processed by any server of the cluster, due to various constraints like data locality. In this paper, we represent these constraints by an assignment graph between jobs and servers. We present a recursive approach to computing performance metrics like mean response times when the server capacities are shared according to balanced fairness. While the computational cost of these formulas can be exponential in the number of servers in the worst case, we illustrate their practical interest by introducing broad classes of pool structures that can be exactly analyzed in polynomial time. This considerably extends the class of models for which explicit performance metrics are accessible.
This is a joint work with Thomas Bonald and Fabien Mathieu.
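For readers unfamiliar with balanced fairness, the toy sketch below illustrates the kind of recursion this line of work relies on, under our reading of the standard assumptions: unit-mean job sizes, a bipartite class/server assignment graph, and the capacity of a set of active classes equal to the total capacity of the servers they can reach. The pool, loads and truncation level are made up for illustration.

```python
# Toy illustration of the balanced-fairness recursion on a small server pool.
# Balance function: Phi(0) = 1 and Phi(x) = sum_{i: x_i>0} Phi(x - e_i) / mu(x),
# where mu(x) is the total capacity of the servers able to serve at least one active class.
# The stationary measure is proportional to Phi(x) * prod_i rho_i^{x_i} (truncated here).
import math
from functools import lru_cache
from itertools import product

capacity = {"s1": 1.0, "s2": 1.0}              # server capacities
compat = {0: {"s1"}, 1: {"s1", "s2"}}          # assignment graph: job class -> servers
rho = [0.3, 0.5]                               # offered load per class
N = 20                                         # truncation on jobs per class (approximation)

def mu(x):
    active = {s for i, n in enumerate(x) if n > 0 for s in compat[i]}
    return sum(capacity[s] for s in active)

@lru_cache(maxsize=None)
def phi(x):
    if sum(x) == 0:
        return 1.0
    total = sum(phi(tuple(n - (j == i) for j, n in enumerate(x)))
                for i, n in enumerate(x) if n > 0)
    return total / mu(x)

states = list(product(range(N + 1), repeat=len(rho)))
weight = {x: phi(x) * math.prod(r ** n for r, n in zip(rho, x)) for x in states}
Z = sum(weight.values())
mean_jobs = [sum(x[i] * w for x, w in weight.items()) / Z for i in range(len(rho))]
print("approximate mean number of jobs per class:", mean_jobs)
# Mean response times then follow from Little's law (divide by the arrival rates).
```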
Recent advances in mobile and wearable technologies bring new opportunities for positive computing, which aims to use computing technologies to support human well-being and potential. In particular, mobile and wearable technologies help us to better understand and deal with various threats to people's well-being, ranging from technology overuse (e.g., productivity loss) to mental health problems such as stress and depression. In this talk, I'll present some of my recent results on leveraging mobile and wearable sensing for positive computing: (1) mobile usage data analysis to understand and identify problematic smartphone usage behaviors; (2) wearable sensor data analysis to quantify the patterns of smartwatch wearing behaviors; and (3) motion sensor data analysis to infer dangerous walking trails using crowdsensing techniques.
Dr. Uichin Lee is an associate professor in the Department of Industrial and Systems Engineering, and in the Graduate School of Knowledge Service Engineering at Korea Advanced Institute of Science and Technology (KAIST). He received the B.S. in computer engineering from Chonbuk National University in 2001, the M.S. degree in computer science from KAIST in 2003, and the Ph.D. degree in computer science from UCLA in 2008. He continued his studies at UCLA as a post-doctoral research scientist (2008-2009) and then worked for Alcatel-Lucent Bell Labs (NJ, USA) as a member of technical staff until 2010. His research interests include social computing systems and mobile/pervasive computing.
Although many advantages are expected from the provision of services for M2M communications in cellular networks, such as extended coverage, security, robust management and lower deployment costs, coexistence with a large number of M2M devices is still an important challenge, in part due to the difficulty of allowing simultaneous access. Although the random access procedure in LTE-A is adequate for H2H communications, it is necessary to optimize this mechanism when M2M communications are considered. One of the methods proposed in 3GPP is Access Class Barring (ACB), which is able to reduce the number of simultaneous users contending for access. However, it is still not clear how to adapt its parameters in dynamic or bursty scenarios, such as those that appear when M2M communications are introduced. We propose a dynamic mechanism for ACB based on reinforcement learning, which aims to reduce the impact that M2M communications have on H2H communications, while at the same time ensuring that the KPIs of all users remain at acceptable levels.
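To fix ideas on what the ACB mechanism does, here is a toy simulation of the barring step with an ad hoc multiplicative controller and made-up parameters; it is not the reinforcement-learning scheme of the talk. Each backlogged device passes the barring check with probability p before contending for one of the random-access preambles, and the controller lowers p when contention is too high and raises it otherwise.

```python
# Toy simulation of Access Class Barring (ACB): devices pass a barring check with
# probability p, then pick one of K preambles; preambles chosen by exactly one device succeed.
# The simple multiplicative controller below is NOT the RL scheme proposed in the talk.
import random

K = 54                    # contention preambles per random-access opportunity
backlog = 3000            # M2M devices already queued after a bursty arrival
p = 1.0                   # barring factor
random.seed(0)

for slot in range(200):
    if backlog == 0:
        break
    contenders = [d for d in range(backlog) if random.random() < p]
    choices = [random.randrange(K) for _ in contenders]
    successes = sum(1 for k in range(K) if choices.count(k) == 1)
    backlog -= successes
    # Ad hoc adaptation: aim at roughly K contenders per slot, which is near-optimal
    # for this slotted-ALOHA-like contention.
    if len(contenders) > K:
        p = max(0.01, p * 0.7)
    else:
        p = min(1.0, p * 1.2)
    if slot % 20 == 0:
        print(f"slot {slot:3d}: p={p:.2f}, contenders={len(contenders)}, backlog={backlog}")
```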
We consider vector fixed point (FP) equations in large dimensional spaces involving random variables, and study their realization-wise solutions. We have an underlying directed random graph that defines the connections between the various components of the FP equations. The existence of an edge between nodes i and j implies that the i-th FP equation depends on the j-th component. We consider a special case where any component of the FP equation depends upon an appropriate aggregate of those of the random "neighbor" components. We obtain finite dimensional limit FP equations (in a much smaller dimensional space), whose solutions approximate the solution of the random FP equations for almost all realizations, in the asymptotic limit as the number of components increases.
Our techniques are different from the traditional mean-field methods, which deal with stochastic FP equations in the space of distributions to describe the stationary distributions of the systems. In contrast, our focus is on realization-wise FP solutions. We apply the results to study systemic risk in a large heterogeneous financial network with many small institutions and one big institution, and demonstrate some interesting phenomena.
Kavitha Veeraruna completed her PhD at the Indian Institute of Science in 2007. She has done two postdocs, one at TIFR Bangalore and one with Prof. Eitan Altman at INRIA. She is currently working as an Assistant Professor at IIT Bombay, Mumbai.
Many foundational issues in quantum mechanics frustrate the experts and keep them up at night. Unfortunately, when these concepts reach the greater public, they are often misrepresented and oversimplified. I want to share with you some bewildering tales of quantum mechanics, unabridged, straight from the source. In this talk: The measurement problem, delayed-choice erasure, interaction-free measurements, non-locality, non-contextuality. This is not the usual cat-in-the-box talk. Get ready for the real s**t!
With the increasing interest in the use of millimeter wave bands for 5G cellular systems comes renewed interest in resource sharing. Properties of millimeter wave bands such as massive bandwidth, highly directional antennas, high penetration loss, and susceptibility to shadowing, suggest technical advantages and potential cost savings due to spectrum and infrastructure sharing. However, resource sharing can also affect market dynamics of price and demand, potentially reducing service provider profit or consumer surplus. In our work, detailed simulations of millimeter wave and microwave networks are connected to economic models of markets for network goods, where consumers’ utility depends on the size of the network. The results suggest that the greater technical sharing gains in mmWave networks can actually lead to less incentive for resource sharing in some markets.
Predicting the user Quality-of-Experience (QoE) for an Internet application is a highly desirable feature in a number of situations. However, directly measuring user QoE is extremely challenging, due to its subjective nature. One common approach is to measure Quality-of-Service (QoS) metrics and use them to predict user QoE. In this presentation, we show our efforts towards understanding and predicting user QoE.
First, we will discuss state-of-the-art quality assessment methodologies. Then, we will present several approaches to model user QoE from QoS metrics. Finally, we will show a few concrete examples of such models in practice.
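As a concrete example of mapping QoS to QoE, the sketch below fits one commonly used parametric form, an exponential relation between a QoS degradation metric and a mean opinion score, to synthetic data. The data and parameter values are invented for illustration and do not come from the talk.

```python
# Fit a simple exponential QoS-to-QoE mapping, QoE = a * exp(-b * qos) + c, to synthetic
# measurements (MOS on a 1-5 scale versus a degradation metric such as packet loss in %).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
loss = np.linspace(0, 10, 40)                                   # hypothetical packet loss (%)
mos = 1.0 + 3.8 * np.exp(-0.45 * loss) + rng.normal(0, 0.1, loss.size)

def model(q, a, b, c):
    return a * np.exp(-b * q) + c

params, _ = curve_fit(model, loss, mos, p0=(4.0, 0.5, 1.0))
print("fitted (a, b, c):", params)
print("predicted MOS at 2% loss:", model(2.0, *params))
```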
Datacenter flow routing relies on packet-in messages generated by switches and directed to the controller upon new flow arrivals. The SDN controller reacts to packet-in events by installing forwarding rules in the memory of all switches along an optimized path. Since flow arrival rates can peak at millions per second, a key constraint is the scarce amount of TCAM memory on switches. We assume that if a routing table is full, a flow is routed on a default, sub-optimal path. A viable solution is to restrict the optimized traffic to critical flows: this corresponds to performing traffic segmentation, prioritizing larger flows over smaller ones. However, choosing the optimal threshold to discriminate optimized flows from non-optimized flows is not a trivial task. This work focuses on learning the optimal flow segmentation policy under memory constraints. We formulate this task as a Markov decision problem. Based on the structure of the optimal stationary policy, we propose a reinforcement learning algorithm tailored to the problem at hand. We prove that it is adaptive and correct, and that it has polynomial time complexity. Finally, numerical experiments characterize the performance of the algorithm.
Joint work with Francesco de Pellegrini (Fondazione Bruno Kessler), Lorenzo Maggi (Huawei Algorithmic and Mathematical Sciences Lab, Paris)
Non-orthogonal multiple access (NOMA) is a promising radio access technology for 5G. It allows several users to transmit on the same frequency and time resource by performing power-domain multiplexing. At the receiver side, successive interference cancellation (SIC) is applied to mitigate interference among the multiplexed signals. In this way, NOMA can outperform orthogonal multiple access schemes used in conventional cellular networks in terms of spectral efficiency and allows more simultaneous users. We investigate the computational complexity of joint subcarrier and power allocation problems in multi-carrier NOMA systems. In this talk, we will show that these problems are strongly NP-hard for a large class of objective functions, namely the weighted generalized means of the individual data rates. This class covers the popular weighted sum-rate, proportional fairness, harmonic mean and max-min fairness utilities. This result implies that the optimal power and subcarrier allocation cannot be computed in polynomial time in the general case, unless P = NP. Nevertheless, we present some heuristics and show their performance through numerical results.
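For reference, the class of objectives mentioned here is, up to weights w_k ≥ 0, the weighted generalized mean of the per-user rates r_k (our notation):

```latex
U_\alpha(r) = \Big( \sum_{k} w_k \, r_k^{\alpha} \Big)^{1/\alpha},
\qquad
\begin{cases}
\alpha = 1 & \text{weighted sum-rate,}\\[2pt]
\alpha \to 0 & \text{proportional fairness } \big(\textstyle\prod_k r_k^{w_k}\big),\\[2pt]
\alpha = -1 & \text{(weighted) harmonic mean,}\\[2pt]
\alpha \to -\infty & \text{max-min fairness } \big(\min_k r_k\big).
\end{cases}
```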
It is increasingly acknowledged that we are on the verge of the next technological and industrial revolution, driven by the digitization and interconnection of physical elements and infrastructure under the control of advanced intelligent systems. This is expected to usher in a new era of automation that should result in enhanced productivity. However, such productivity enhancements have been anticipated before, e.g., in the revolution commonly known as the "information age", and have failed to materialize. One can therefore ask whether the productivity increases observed following the earlier industrial revolutions were a one-time aberration that will not be repeated in the new digital age. In this talk, we attempt to address this question by a quantitative analysis of the prior productivity jumps and their physical technological origins, and extend this analysis to the latent set of analogous digital technologies. This approach leverages a correlation we observe between the diffusion of key infrastructure technologies and productivity jumps. We use data from 1875 to 1985 for the US and data from 1950 to 2015 for China and India to demonstrate this non-linear correlation. Amongst the predictions of the model, we see a possible productivity jump in the United States in the 2030s timeframe, when the aggregate of the constituent digital technologies passes the tipping point of 50-60 percent penetration.
Joint work with Marcus Weldon, Sanjay Kamat, Subra Prakash
Large interconnected power systems are often operated by independent system operators, each having its own operating region within which internal resources are used economically. The operating regions are connected physically by tie lines that allow one operator to import from or export to its neighbors for better utilization of overall system resources.
In this talk, we present some recent results on the optimal scheduling of interchange among independently operated power systems in the presence of uncertainty. Based on the idea of the classic coordinate descent method, we propose an interface-by-interface scheduling algorithm which is compatible with current industry practice in the U.S. electric grid, and we apply it to both synchronous and asynchronous scheduling schemes.
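As a stylized illustration of interface-by-interface scheduling, consider three areas with quadratic generation costs connected by two tie lines; coordinate descent optimizes one interchange at a time while holding the others fixed. This is a deterministic toy with made-up costs and limits, ignoring the uncertainty that the talk addresses.

```python
# Toy interface-by-interface (coordinate descent) interchange scheduling.
# Three areas with quadratic costs a_i * g_i^2 and demands d_i; two tie lines:
#   t1 = flow from area 0 to area 1, t2 = flow from area 1 to area 2.
from scipy.optimize import minimize_scalar

a = [1.0, 2.0, 4.0]          # cost coefficients ($/MW^2)
d = [100.0, 100.0, 100.0]    # local demands (MW)
cap = 60.0                   # tie-line limit (MW)

def total_cost(t1, t2):
    # local generation balances each area's demand plus net exports
    g = [d[0] + t1, d[1] - t1 + t2, d[2] - t2]
    return sum(ai * gi ** 2 for ai, gi in zip(a, g))

t1, t2 = 0.0, 0.0
for it in range(10):
    t1 = minimize_scalar(lambda x: total_cost(x, t2), bounds=(-cap, cap), method="bounded").x
    t2 = minimize_scalar(lambda x: total_cost(t1, x), bounds=(-cap, cap), method="bounded").x
    print(f"iter {it}: t1={t1:6.1f} MW, t2={t2:6.1f} MW, cost={total_cost(t1, t2):10.1f}")
# Each per-interface update can only decrease the total cost, mirroring how the schedule
# can be improved one interface at a time while each region keeps its own dispatch.
```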
Streaming of video content over the Internet is experiencing unprecedented growth. While video permeates every application, it also puts tremendous pressure on the network, which must support users with heterogeneous access technologies who expect high quality of experience, in a cost-effective manner. In this context, Future Internet (FI) paradigms, such as Information Centric Networking (ICN), are particularly well suited not only to enhance video delivery at the client (as in the DASH approach), but also to naturally and seamlessly extend video support deeper into the network functions.
In this talk, we contrast ICN and TCP/IP with an experimental approach, where we employ several state-of-the-art DASH controllers (PANDA, AdapTech, and BOLA) on an ICN vs TCP/IP network stack. Our campaign, based on tools which we developed and made available as open-source software, includes multiple clients (homogeneous vs heterogeneous mixtures, synchronous vs asynchronous arrivals), videos (up to 4K resolution), channels (e.g., DASH profiles, emulated WiFi and LTE, real 3G/4G traces), and levels of integration with an ICN network (i.e., vanilla NDN, wireless loss detection and recovery at the access point, load balancing). Our results clearly illustrate, and quantitatively assess, the benefits of ICN-based streaming, while warning about potential pitfalls that are nevertheless easy to avoid.
Overall, our work constitutes a first milestone towards a fair and complete assessment of fully fledged NDN video distribution systems and their comparison with state-of-the-art CDN technologies implemented over a classic TCP/IP stack, which is part of our ongoing work and which we briefly discuss.
Part of this work will appear in IEEE Transactions on Multimedia, October 2017 issue.
A nightmare for the schoolchildren being initiated into the first of the "four operations" of arithmetic, carries are also a headache for the engineers who design computer circuits. Some twenty years ago, I explained in this very seminar how the notion of soft addition ("addition molle") makes it possible to resolve this difficulty, and how it relates to the theory of finite automata.
I return to these carries, this time to count them, or rather to evaluate the "amortized number" of carries induced by the successor function. While this problem holds little or no mystery in the classical numeration systems, namely the representation of integers in an integer base, its natural generalization to non-standard, or even abstract, numeration systems turns out to be surprisingly complex. It brings into play sophisticated results from the theory of rational functions as well as from ergodic theory, which I will try to present.
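For the classical base-b case described above as holding little mystery, the computation is indeed short (added here for reference, our notation): incrementing n produces at least k carries exactly when the k lowest digits of n all equal b-1, which happens for a fraction b^{-k} of the integers, so the amortized number of carries per application of the successor function is

```latex
\sum_{k \ge 1} \Pr[\text{at least } k \text{ carries}]
  \;=\; \sum_{k \ge 1} b^{-k}
  \;=\; \frac{1}{b-1}.
```

It is the analogue of this quantity that becomes delicate for non-standard or abstract numeration systems.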
We study adaptive matching in expert systems, as an instance of adaptive sequential hypothesis testing. Examples of such systems include Q&A platforms, crowdsourcing, image classification. Consider a system that receives tasks or jobs to be classified into one of a set of given types. The system has access to a set of workers, or experts, and the expertise of a worker is defined by the jobs he is able to classify and the error in his response. This active sequential hypothesis testing problem was first addressed by Chernoff in 1959, whereby experts to be queried are selected according to how much information they provide. In this talk we will begin with an overview of past work on this topic, then consider our model where we assume access to less fine-grained information about the expertise of workers. We propose a gradient-based algorithm, show its optimality and through numerical results show that it outperforms the Chernoff-like algorithms.
Today’s consumer Internet traffic is transmitted on a best effort basis without taking into account any quality requirements. The backbone and the wireless access networks lack service guarantees for the predominant consumer Internet traffic, with video streaming being responsible for 60% of the traffic share. As services rely on interconnected networks, service performance and thus user satisfaction depend on network performance. Consequently, it is of utmost importance to understand the relationships between user-perceived Quality of Experience (QoE), and network performance as described by Quality of Service (QoS) parameters.
However, in general the network neither knows which Internet applications it is carrying nor which quality requirements have to be met. To be able to meet the demands of applications and users in the network, QoE-aware service and network management is proposed, which requires an information exchange between applications and the network. From a conceptual perspective, this requires three basic research steps: modeling, monitoring, and optimization of QoE.
The first part of this talk covers QoE management and monitoring approaches, while the second part highlights concrete examples in order to demonstrate how QoE-aware network and service management may improve the user-perceived quality of video streaming.
Thomas Zinner received his Diploma and Ph.D. degrees in computer science from the University of Wurzburg, Germany, in 2007 and 2012, respectively. His habilitation thesis, titled "Performance evaluation of novel network and application paradigms and management approaches", was finished in 2017. He is heading the Next Generation Networks Research Group at the Chair of Communication Networks, University of Wurzburg, and has published more than 80 research papers in major conferences and journals, receiving six best paper and best student paper awards. His main research interests cover video streaming, QoE management, network softwarization and performance evaluation.
Homepage: http://www3.informatik.uni-wuerzburg.de/staff/zinner/
Linear dynamical systems are used to model several evolving network processes in biology, physical systems as well as financial networks. Estimation of the topology of a linear dynamical system is thus of interest for learning and control in diverse domains. This talk presents a useful framework for topology estimation for general linear dynamical networks (loopy or radial) using time-series measurements of nodal states. The learning framework utilizes multivariate Wiener filtering to unravel the interaction between fluctuations in states at different nodes and identifies operational edges by considering the phase response of the elements of the multivariate Wiener filter. The benefit derived from considering samples from ambient dynamics versus steady-state measurements will be described, along with extensions, a discussion of sample and computational complexity, and open questions. In particular, we discuss one application related to topology estimation in power grids using samples of nodal voltage phase angles collected from the swing equations.
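A stripped-down version of the Wiener-filtering step might look as follows. This is a sketch under our own simplifications: it estimates the cross-spectral density matrix from time series, forms the multivariate Wiener filter for each node, and keeps edges whose filter entry has non-negligible magnitude at some frequency; the phase-response test used in the talk to refine the edge set is omitted.

```python
# Sketch: infer an undirected interaction graph from nodal time series via multivariate
# Wiener filtering (magnitude test only; the phase-based refinement of the talk is omitted).
import numpy as np
from scipy.signal import csd

def spectral_matrix(X, fs=1.0, nperseg=256):
    """Cross-spectral density matrix Phi[f, i, j] for the rows of X (n_nodes x n_samples)."""
    n = X.shape[0]
    freqs, _ = csd(X[0], X[0], fs=fs, nperseg=nperseg)
    Phi = np.zeros((len(freqs), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, Pij = csd(X[i], X[j], fs=fs, nperseg=nperseg)
            Phi[:, i, j] = Pij
    return freqs, Phi

def wiener_adjacency(X, thresh=0.2, **kw):
    n = X.shape[0]
    freqs, Phi = spectral_matrix(X, **kw)
    W = np.zeros((len(freqs), n, n), dtype=complex)
    for i in range(n):                               # predict node i from all other nodes
        rest = [k for k in range(n) if k != i]
        for f in range(len(freqs)):
            A = Phi[f][np.ix_(rest, rest)] + 1e-9 * np.eye(n - 1)
            W[f, i, rest] = np.linalg.solve(A, Phi[f][rest, i])
    strength = np.max(np.abs(W), axis=0)             # peak filter magnitude over frequency
    return np.maximum(strength, strength.T) > thresh

# Synthetic example: a 3-node chain x0 -> x1 -> x2 driven by noise.
rng = np.random.default_rng(0)
T = 40000
x0 = rng.normal(size=T)
x1 = 0.8 * np.roll(x0, 1) + 0.3 * rng.normal(size=T)
x2 = 0.8 * np.roll(x1, 1) + 0.3 * rng.normal(size=T)
print(wiener_adjacency(np.vstack([x0, x1, x2])))     # expect edges (0,1) and (1,2) only
```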
The Internet of Things is one of the hottest topics being debated today across industries worldwide. Successful IoT deployment needs sustainable business models that are well understood. This seminar will present such business models, their rationale, and the analytical models used for their discussion. The proposals are drawn from research conducted recently by the speaker.
The business models are based on service providers that intermediate between wireless sensor networks that sense data and users that benefit from enhanced services built around the sensed data. Scenarios where the service provider operates without and with competition are modelled, analyzed and discussed. The modelling and analysis tools are borrowed from microeconomics, optimization and game theory.
The seminar will present some scenarios and the models employed for the analysis of each one. Then it will proceed to present the analysis performed and, finally, it will discuss the results of the analysis.
Bio: Dr. Luis Guijarro received the M.E. and Ph.D. degrees in Telecommunication engineering from the Universitat Politecnica de Valencia (UPV), Spain. He is an Associate Professor in Telecommunications Policy at the UPV. He researched in traffic management in ATM networks and in e-Government. His current research is focused on economic modelling of telecommunication service provision. He has contributed in the areas of peer-to-peer interconnection, cognitive radio networks, search engine neutrality, and wireless sensor networks. His personal webpage is http://personales.upv.es/lguijar
Formal security analyses for blockchains have derived the famous "honest majority" assumption: the system behaves well only if the majority of the hashing power is in the hands of honest miners. This result has been proven without taking into consideration economic aspects related to miners' incentives and their rational behavior. Other works have analyzed selfish mining with respect to incentives and concluded that the majority assumption is not enough under certain conditions. All these studies make the implicit assumption that the health of the system is only in the hands of miners; but what happens if all the users leave the system? Since the system is composed of miners and users, and since these two sets are disjoint in practice, the blockchain's health strictly relates to its capacity to promote honest participation of both miners and users. In this talk we explore the relationship between participation and the notion of fairness. In general terms, fairness can be defined as the overall satisfaction of the justified expectations of the participants. If a blockchain system is fair, then its participants will tend to stay in the system; otherwise they may leave. We analyze user and miner strategies that are currently possible in Bitcoin-like implementations, and we show that existing blockchain systems are not fair to their users. We finally discuss new ways for improvement, arguing that novel redistribution mechanisms are needed to promote participation of all nodes.
Internet users in many countries around the world are subject to various forms of censorship and information control. Despite its widespread nature, however, measuring Internet censorship on a global scale has remained an elusive goal. Internet censorship is known to vary across time and across regions (and Internet Service Providers) within a country. To capture these complex dynamics, Internet censorship measurement must be both continuous and distributed across a large number of vantage points. To date, gathering such information has required recruiting volunteers to perform measurements from within countries of interest; this approach does not permit collection of continuous measurements, it does not permit collection from a large number of measurement locations, and it may also put the people performing the measurements at risk. Over the past four years, we have developed a collection of measurement techniques to surmount the limitations of these conventional approaches. In this talk, I will describe three such techniques: (1) Encore, a tool that performs cross-origin requests to measure Web filtering; (2) Augur, a tool that exploits side-channel information in the Internet Protocol (IP) to measure filtering using network-level access control lists; and (3) a tool to measure DNS filtering using queries through open DNS resolvers. These three tools allow us, for the first time, to characterize Internet censorship continuously, from hundreds of countries around the world, at different layers of the network protocol stack. Each of these techniques involves both technical and ethical challenges. I will describe some of the challenges that we faced in designing and implementing these tools, how we tackled these challenges, our experiences with measurements to date, and our plans for the future. Long term, our goal is to collaborate with social scientists to bring data to bear on a wide variety of questions concerning Internet censorship and information control; I will conclude with an appeal to cross-disciplinary work in this area and some ideas for how computer scientists and social scientists might work together on these pressing questions going forward.
This research is in collaboration with Sam Burnett, Roya Ensafi, Paul Pearce, Ben Jones, Frank Li, and Vern Paxson.
Nick Feamster is a professor in the Computer Science Department at Princeton University and the Deputy Director of the Princeton University Center for Information Technology Policy (CITP). Before joining the faculty at Princeton, he was a professor in the School of Computer Science at Georgia Tech. He received his Ph.D. in Computer science from MIT in 2005, and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. Nick’s research focuses on improving the security and performance of communications networks with systems that draw on advanced Internet measurement, data analytics, and machine learning. Nick is an ACM Fellow. Among other awards, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, the Technology Review 35 "Top Young Innovators Under 35" award, and the ACM SIGCOMM Rising Star Award.
The surge of mobile data traffic forces network operators to cope with capacity shortage. The deployment of small cells in 5G networks is meant to increase radio access capacity. Mobile edge computing technologies, in turn, can be used in order to manage dedicated cache space memory inside the radio access network, thus reducing latency and backhaul traffic. Mobile network operators will be able to provision content providers with new caching services to enhance the quality of experience of their customers on the move. Cache memory in the mobile edge network will become a shared resource.
We study a competitive caching scheme where contents are stored at a given price set by the mobile network operator. We first formulate a resource allocation problem for a tagged content provider seeking to minimize its expected missed cache rate. The optimal caching policy is derived accounting for the popularity and availability of contents, the spatial distribution of small cells, and the caching strategies of competing content providers. It is shown to induce a specific order on the contents to be cached, based on their popularity and availability.
Next, we study a game among content providers in the form of a generalized Kelly mechanism with bounded strategy sets and heterogeneous players. Existence and uniqueness of the Nash equilibrium are proved. Numerical results validate and characterize the performance of the system and the convergence to the Nash equilibrium.
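For reference, in the standard Kelly mechanism that this game generalizes, each content provider i submits a bid b_i (here, a payment for cache space at the posted price) and receives a fraction of the shared resource C proportional to its bid, paying its bid; its payoff is its caching utility minus the payment. In our notation (the talk's version adds bounded strategy sets and heterogeneous players):

```latex
x_i \;=\; C \,\frac{b_i}{\sum_j b_j},
\qquad
u_i(b_i, b_{-i}) \;=\; v_i(x_i) \;-\; b_i .
```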
Francesco De Pellegrini (Fondazione Bruno Kessler) is currently Chief Scientist of the Distributed Computing and Information Processing group (DISCO). He serves as a lecturer at the University of Trento for the Wireless Networks course (Master's degree course). His technical research interests are location detection, multirate systems, routing, wireless mesh networks, VoIP, ad hoc and delay tolerant networks. His interests include algorithms on graphs, stochastic control of networks and game theory. Francesco was the Vice General Chair for the first edition of ICST Robocomm and is one of the promoters of COMPLEX 2012. Francesco has been General Co-Chair for the 2012 edition of IEEE NetGCoop, and TPC Chair for the 2014 edition, a conference focused on game theory and control for networked systems. He is acting or has acted as Project Manager for several industry-funded projects. Francesco has been the Coordinator of the FET EU Project CONGAS, whose focus is on the Dynamics and COevolution in Multi-Level Strategic INteraction GAmeS. He has received the best paper award at WiOPT 2014 and at NetGCoop 2016.
He is currently the Principal Investigator of the H2020 FET Resource AuctiOning Engine for the Mobile Digital Market (ROMA).
In this talk I will present mean-field games with discrete state spaces (also known as discrete mean-field games) and analyze these games in continuous and discrete time, over finite as well as infinite time horizons. In particular, I will discuss the minimal assumptions under which the existence of a mean-field equilibrium can be guaranteed, namely continuity of the cost and of the drift, which nicely mimics the case of classical games. Besides, I will also study the convergence of the equilibria of N-player games to mean-field equilibria, both in discrete and continuous time. I will define a class of strategies over which any Nash equilibrium converges to a mean-field equilibrium when the number of players goes to infinity. I will also exhibit equilibria outside this class that do not converge to mean-field equilibria. In discrete time, this non-convergence phenomenon implies that the Folk theorem does not scale to the mean-field limit. An example based on the celebrated SIR infection model with vaccination will help me illustrate all these results.
In a future where implanted and wearable sensors continuously report physiological data, how to power such sensors without battery replacement and how to develop new communication methods that are safe for human tissue remain unexplored frontiers. This talk describes recent advances in designing systems and protocols for contactless wireless charging using RF waves, and in the use of weak electrical currents for data transfer among body implants. It explores the fundamental tradeoffs that exist between achieving high data and recharging rates, constructive mixing of radiated signals through beamforming, MAC protocols that allow differential data/energy access, and the promise of simultaneous transfer of data over energy. The talk will also cover some of the latest results on charging sensors purely from ambient cellular signals in the city of Boston. On the implant side, advances in channel modeling and topology placement strategies are discussed, with experimental results on low-overhead data transfers from an embedded implant to an on-skin relay node.
Prof. Kaushik R. Chowdhury received the PhD degree from the Georgia Institute of Technology, Atlanta, in 2009. He is currently Associate Professor and Faculty Fellow in the Electrical and Computer Engineering Department at Northeastern University, Boston, MA. He was awarded the Presidential Early Career Award for Scientists and Engineers (PECASE) in January 2017 by President Obama, the DARPA Young Faculty Award in 2017, the Office of Naval Research Director of Research Early Career Award in 2016, and the NSF CAREER award in 2015. He has received multiple best paper awards, including at the IEEE ICC conference in 2009, 2012 and 2013, and at the ICNC conference in 2013. His h-index is 33 and his works have gathered over 8000 citations. He is presently a co-director of the Platforms for Advanced Wireless Research project office, a joint 100 million public-private investment partnership between the US NSF and a wireless industry consortium to create city-scale testing platforms.
As we move away from fossil fuels toward renewable energy sources such as solar and wind, inexpensive energy storage technologies are required, because such sources are intermittent. An alternative to batteries, which are quite expensive, is "smart loads". With appropriate intelligence, the power consumption of air conditioning, and of many other loads, can be varied around a baseline. This variation is analogous to the charging and discharging of a battery. Loads equipped with such intelligence have the potential to provide a vast and inexpensive source of energy storage. Two principal challenges in creating a reliable virtual battery from millions of consumer loads are (1) maintaining consumers' Quality of Service (QoS) within strict bounds, and (2) coordinating the actions of loads with minimal communication to ensure accurate reference tracking by the aggregate. This talk summarizes our work in addressing these two challenges. In particular, I will present in some detail a method for ensuring reliable coordination in the presence of uncertainty without inter-load communication, a frequently used means of coordination. Instead, our approach achieves reliable coordination by using a combination of one-way broadcasts from the grid operator and locally obtained frequency measurements at the loads, which provide valuable global information. Web: http://web.mae.ufl.edu/pbarooah Email: pbarooah@ufl.edu
Prabir Barooah is an Associate Professor of Mechanical and Aerospace Engineering at the University of Florida, where he has been since 2007. He received the Ph.D. degree in Electrical and Computer Engineering in 2007 from the University of California, Santa Barbara. From 1999 to 2002 he was a research engineer at United Technologies Research Center, East Hartford, CT. He received the M. S. degree in Mechanical Engineering from the University of Delaware in 1999 and the B. Tech. degree in Mechanical Engineering from the Indian Institute of Technology, Kanpur, in 1996. Dr. Barooah is the winner of Endeavour Executive Fellowship (2016) from the Australian Government, ASEE-SE (American Society of Engineering Education, South East Section) outstanding researcher award (2012), NSF CAREER award (2010), General Chairs’ Recognition Award for Interactive papers at the 48th IEEE Conference on Decision and Control (2009), best paper award at the 2nd Int. Conf. on Intelligent Sensing and Information Processing (2005), and NASA group achievement award (2003).
Conventional cellular wireless networks were designed with the purpose of providing high throughput for the user and high capacity for the service provider, without any provisions for energy efficiency. As a result, these networks have an enormous carbon footprint. For example, in the United States alone, the carbon footprint of the cellular wireless industry is equal to that of about 3/4 million cars. In addition, the cellular network is highly inefficient, and therefore a large part of the energy dissipated is wasted.
In this presentation, we first analyze the energy dissipation in cellular wireless networks and point to the sources of major inefficiency. We also discuss how much mobile traffic is expected to increase, and how much larger this carbon footprint will therefore become. We then discuss potential sources of improvement at the physical layer as well as at higher layers of the communication protocol hierarchy. For the physical layer, we discuss new modulation formats and new device technologies and what they may bring in terms of energy efficiency gains. At higher layers, considering that most of the energy inefficiency in cellular wireless networks is at the base stations, we discuss multi-tier networks and point to the potential of exploiting mobility patterns in order to use base station energy judiciously. We discuss link adaptation and explain why energy efficiency, and not power efficiency, should be pursued and what it means for the choice of link rates. We show how much gain is possible with energy-efficient link rate adaptation. We describe the gains due to the exploitation of nonuniform traffic in space, relays and cooperation, device-to-device communications, multiple antenna techniques (in particular coordinated multipoint and massive MIMO), sleeping modes for the base stations, the techniques of cell breathing and cell zooming, the energy trap problem for mobile terminals, and potential approaches that provide energy efficiency for video. We also review several survey papers and books published on this topic.
By considering the combination of all potential gains, we conclude that an improvement in energy consumption in cellular wireless networks by orders of magnitude is possible. The lecture will present in detail where research should be concentrated to achieve the largest gains.
Ender Ayanoglu received the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 1982 and 1986, respectively. He was with the Communications Systems Research Laboratory, part of AT&T Bell Laboratories, Holmdel, NJ, until 1996, and with Bell Labs, Lucent Technologies, until 1999. From 1999 until 2002, he was a Systems Architect at Cisco Systems, Inc., San Jose, CA. Since 2002, he has been a Professor in the Department of Electrical Engineering and Computer Science, University of California, Irvine, CA, where he served as the Director of the Center for Pervasive Communications and Computing and held the Conexant-Broadcom Endowed Chair during 2002-2010.
His past accomplishments include the invention of 56K modems, the characterization of wavelength conversion gain in Wavelength Division Multiplexed (WDM) systems, and diversity coding, a technique for link failure recovery in communication networks employing erasure coding, introduced in 1990, prior to the publication of the first papers on network coding. During 2000-2001, he served as the founding chair of the IEEE-ISTO Broadband Wireless Internet Forum (BWIF), an industry standards organization which developed and built a broadband wireless system employing Orthogonal Frequency Division Multiplexing (OFDM) and a Medium Access Control (MAC) algorithm that provides Quality-of-Service (QoS) guarantees. This system is a precursor of today's Fourth Generation (4G) cellular wireless systems such as WiMAX, LTE, and LTE-Advanced.
From 1993 until 2014, Dr. Ayanoglu was an Editor, and since January 2014 he has been a Senior Editor, of the IEEE Transactions on Communications. He served as the Editor-in-Chief of the IEEE Transactions on Communications from 2004 to 2008. From 1990 to 2002, he served on the Executive Committee of the IEEE Communications Society Communication Theory Committee, and from 1999 to 2001 he was its Chair. Currently, he is serving as the founding Editor-in-Chief of the IEEE Transactions on Green Communications and Networking. Dr. Ayanoglu is the recipient of the IEEE Communications Society Stephen O. Rice Prize Paper Award in 1995 and the IEEE Communications Society Best Tutorial Paper Award in 1997. He has been an IEEE Fellow since 1998.
A significant portion of today's network traffic is due to recurring downloads of popular content (e.g., movies, video clips and daily news). It has been observed that replicating the latter in caches installed at the network edge, close to the users, can drastically reduce network bandwidth usage and improve content access delay. The key technical issues in emergent caching architectures relate to the following questions: where to install caches, what content to cache and for how long, and how to manage the routing of content within the network. In this talk, an overview of caching is provided, starting with generic architectures that can be applied to different networking environments, and moving to emerging architectures that enable caching in wireless networks (e.g., at cellular base stations and WiFi access points). Novel challenges arise in the latter due to the scarcity of wireless resources and their broadcast nature, the frequent hand-offs between different cells for mobile users, as well as the specific requirements of different types of user applications, such as video streaming. We will present our recent results on innovative caching approaches that (i) harvest idle user-owned cache space and bandwidth, (ii) leverage the broadcast nature of the wireless medium to serve concurrent requests for content, (iii) exploit the regularity of user mobility patterns, and (iv) apply advanced video encoding techniques to support multiple video qualities (e.g., screen sizes, frame rates, or signal-to-noise ratio (SNR) qualities). These are cutting-edge approaches that can achieve significant performance and cost-reduction benefits over state-of-the-art methods.
Leandros Tassiulas is the John C. Malone Professor of Electrical Engineering at Yale University. His research interests are in the field of computer and communication networks with emphasis on fundamental mathematical models and algorithms of complex networks, architectures and protocols of wireless systems, sensor networks, novel internet architectures and experimental platforms for network research. His most notable contributions include the max-weight scheduling algorithm and the back-pressure network control policy, opportunistic scheduling in wireless, the maximum lifetime approach for wireless network energy management, and the consideration of joint access control and antenna transmission management in multiple antenna wireless systems. Dr. Tassiulas is a Fellow of IEEE (2007). His research has been recognized by several awards including the IEEE Koji Kobayashi computer and communications award 2016, the inaugural INFOCOM 2007 Achievement Award "for fundamental contributions to resource allocation in communication networks," the INFOCOM 1994 best paper award, a National Science Foundation (NSF) Research Initiation Award (1992), an NSF CAREER Award (1995), an Office of Naval Research Young Investigator Award (1997) and a Bodossaki Foundation award (1999). He holds a Ph.D. in Electrical Engineering from the University of Maryland, College Park (1991). He has held faculty positions at Polytechnic University, New York, University of Maryland, College Park, and University of Thessaly, Greece.
Amazon EC2 and Google Compute Engine (GCE) have recently introduced a new class of virtual machines called "burstable" instances that are cheaper than even the smallest traditional/regular instances. These lower prices come with reduced average capacity and increased variance. Using measurements from both EC2 and GCE, we identify key idiosyncrasies of resource capacity dynamism for burstable instances that set them apart from other instance types. Most importantly, certain resources for these instances appear to be regulated by deterministic, though in one case unorthodox, token bucket like mechanisms. We find widely different types of disclosures by providers of the parameters governing these regulation mechanisms: full disclosure (e.g., CPU capacity for EC2 t2 instances), partial disclosure (e.g., CPU capacity and remote disk IO bandwidth for GCE shared-core instances), or no disclosure (network bandwidth for EC2 t2 instances). A tenant modeling these variations as random phenomena (as some recent work suggests) might make sub-optimal procurement and operation decisions. We present modeling techniques for a tenant to infer the properties of these regulation mechanisms via simple offline measurements. We also present two case studies of how certain memcached workloads might benefit from our modeling when operating on EC2 by: (i) temporal multiplexing of multiple burstable instances to achieve the CPU or network bandwidth (and thereby throughput) equivalent of a more expensive regular EC2 instance, and (ii) augmenting cheap but low availability in-memory storage offered by spot instances with backup of popular content on burstable instances. Work in collaboration with Cheng Wang, Bhuvan Urgaonkar and Neda Nasiriani
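The token-bucket view of a burstable instance can be made concrete with a short simulation. The parameters below (credit earn rate, bucket cap, baseline share) are purely illustrative placeholders, not the disclosed values of any specific EC2 or GCE instance type, which in practice one would infer by measurement as described above.

```python
# Toy CPU-credit token bucket for a "burstable" instance (illustrative parameters only).
EARN_PER_MIN = 0.1      # credits earned per minute (1 credit = 1 minute at 100% CPU)
BUCKET_CAP = 60.0       # maximum credits that can be banked
BASELINE = 0.10         # CPU share always available, even with an empty bucket

def simulate(demand, credits=BUCKET_CAP):
    """demand: desired CPU share (0..1) per minute; returns the delivered share per minute."""
    delivered = []
    for want in demand:
        burst = max(0.0, want - BASELINE)        # part of the demand above the baseline
        usable = min(burst, credits)             # bursting consumes banked credits
        credits = min(BUCKET_CAP, credits - usable + EARN_PER_MIN)
        delivered.append(min(want, BASELINE + usable))
    return delivered

# Three hours at full demand: full speed while credits last, then throttled near baseline.
out = simulate([1.0] * 180)
print("minute 0:", out[0], " minute 90:", out[90], " minute 179:", out[-1])
```

This deterministic behaviour is what makes modeling the regulation mechanism (rather than treating capacity as random) worthwhile when deciding how to multiplex such instances.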
The rapid advances in sensors and ultra-low-power wireless communication have enabled a new generation of wireless sensor networks: Wireless Body Area Networks (WBANs). WBANs are a recent and challenging area in the health monitoring domain. There are several concerns in this area, ranging from energy-efficient communication to the design of delay-efficient protocols that support the node dynamics induced by human body mobility. WBAN is a promising technology and shall be increasingly necessary for monitoring, diagnosing and treating populations.
In this work we are interested in WBANs where sensors are placed on the body. In this type of network, sensor performance is strongly influenced by human body mobility. Thus, we investigate different communication primitives using realistic human body mobility and channel models derived from recent research in biomedical and health informatics.
This talk provides a discussion of one of the central design challenges associated with next-generation 5G wireless systems - that of effectively converging 3GPP-based mobile networks with the global Internet. Although the trend towards "flat" IP-based architectures for cellular networks is well under way with LTE, significant architectural evolution will be needed to achieve the goal of supporting the needs of mobile devices and applications as "first-class" services on the Internet. Several emerging mobility service scenarios including hetnet/small cell, multi-network access, mobile cloud, IoT (Internet-of-Things) and V2V (vehicle-to-vehicle) are examined and related network service requirements such as user mobility, disruption tolerance, multi-homing, content/service addressability and context-aware delivery are identified. Drawing from our experience with the ongoing NSF-sponsored MobilityFirst future Internet architecture project, we outline a named-object protocol solution based on the GUID (Globally Unique Identifier) Service Layer which enables a clean separation of naming and addressing, and provides intrinsic support for a wide variety of mobility services. The talk concludes with a brief outline of the MobilityFirst proof-of-concept prototype currently being deployed on the GENI meso-scale networking testbed.
Dipankar Raychaudhuri is Distinguished Professor, Electrical & Computer Engineering and Director, WINLAB (Wireless Information Network Lab) at Rutgers University. As WINLAB’s Director, he is responsible for an internationally recognized industry-university research center specializing in wireless technology. He is also PI for several large U.S. National Science Foundation funded projects including the ORBIT wireless testbed and the MobilityFirst future Internet architecture.
Dr. Raychaudhuri has previously held corporate R&D positions including: Chief Scientist, Iospan Wireless (2000-01); AGM & Dept Head, NEC Laboratories (1993-99); and Head, Broadband Communications, Sarnoff Corp (1990-92). He obtained the B.Tech (Hons) from IIT Kharagpur in 1976 and the M.S. and Ph.D degrees from SUNY, Stony Brook in 1978 and 1979, respectively. He is a Fellow of the IEEE.
Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges that operators face while trying to support growing end-user needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into a centralized BBU pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy-efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations.
The next talk in the ML workgroup will be given by Olivier Grisel on scikit learn as a demo session.
It is also possible to attend without a laptop and just follow what Olivier will present on the video projector.
Attendees who want to follow the demo in a hands-on fashion should install Anaconda: https://www.continuum.io/downloads
You can also git clone the following repository: https://github.com/ogrisel/notebooks There will be an update by Olivier Grisel during the week of April 27, so be sure to pull the latest version at the last minute!
Modeling and simulation (M&S) plays an important role in the design, analysis and performance evaluation of complex systems. Many of these systems, such as computer networks, involve a large number of interrelated components and processes. Complex behaviors emerge as these components and processes inter-operate across multiple scales at various granularities. M&S must be able to provide sufficiently accurate results while coping with this scale and complexity.
My talk will focus on two novel techniques in high-performance network modeling and simulation. The first is a GPU-assisted hybrid network traffic modeling method. The hybrid approach offloads the computationally intensive bulk traffic calculations onto the GPU in the background, while leaving the detailed simulation of network transactions on the CPU in the foreground. Our experiments show that the CPU-GPU hybrid approach can achieve significant performance improvement over the CPU-only approach.
The second technique is a distributed network emulation method based on simulation symbiosis. Mininet is a container-based emulation environment that can study networks consisting of virtual hosts and OpenFlow-enabled virtual switches on Linux. It is well known, however, that experiments using Mininet may lose fidelity for large-scale networks and heavy traffic loads. The proposed symbiotic approach uses an abstract network model to coordinate distributed Mininet instances with superimposed traffic to represent large-scale network scenarios.
Dr. Jason Liu is currently an Associate Professor at the School of Computing and Information Sciences, Florida International University (FIU). He received a B.A. degree from Beijing University of Technology in China in 1993, an M.S. degree from the College of William and Mary in 2000, and a Ph.D. degree from Dartmouth College in 2003. He was a postdoctoral researcher at the University of Illinois, Urbana-Champaign in 2003-2004 and an Assistant Professor at the Colorado School of Mines during 2004-2007. His research focuses on parallel simulation and performance modeling of computer systems and communication networks. He has served both as General Chair and Program Chair for several conferences, and is currently on the steering committee of SIGSIM-PADS and on the editorial boards of ACM Transactions on Modeling and Computer Simulation (TOMACS) and SIMULATION, Transactions of the Society for Modeling and Simulation International. He is an NSF CAREER awardee (2006) and an ACM Distinguished Scientist (2014). His research has been funded by various US federal agencies, including NSF, DOE, DOD, and DHS.
We present a novel approach for distributed load balancing in heterogeneous networks that use cell range expansion (CRE) for user association and almost blank subframes (ABS) for interference management. First, we formulate the problem as the minimisation of an alpha-fairness objective function with load and outage constraints. Depending on alpha, different objectives in terms of network performance or fairness can be achieved. Next, we model the interactions among the base stations for load balancing as a near-potential game, in which the potential function is the alpha-fairness function. The optimal pure Nash equilibrium (PNE) of the game is found by using distributed learning algorithms. We propose log-linear and binary log-linear learning algorithms for the complete and partial information settings, respectively. We give a detailed proof of convergence of the learning algorithms for a near-potential game, and we provide sufficient conditions under which they converge to the optimal PNE. Extensive simulations show that the proposed algorithms converge within a few hundred iterations, with a convergence speed in the partial information setting comparable to that of the complete information setting. Finally, we show that outage can be controlled and better load balancing can be achieved by introducing ABS.
Joint work with Pierre Coucheney, and Marceau Coupechoux, in IEEE Transactions on Wireless Communications (Vol. 15, No. 7, 2016).
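As a rough illustration of the log-linear learning rule used in the complete-information setting, the following Python sketch runs Gibbs-style action revisions on a toy potential game; the load model, bias levels and alpha-fairness potential below are invented stand-ins and do not reproduce the paper's system model (which also includes load and outage constraints).

import numpy as np

rng = np.random.default_rng(0)

N_BS = 4                      # number of base stations (players)
ACTIONS = [0, 3, 6, 9]        # candidate CRE bias levels in dB (hypothetical)
ALPHA = 1.0                   # alpha-fairness parameter (alpha=1 -> log utility)
TAU = 0.05                    # temperature: smaller means closer to best response

def loads(profile):
    """Toy load model: a larger bias shifts more traffic toward the station (invented)."""
    base = np.array([1.0, 2.0, 1.5, 0.8])
    return base + 0.05 * np.array(profile)

def potential(profile):
    """Alpha-fairness of per-station throughputs, here taken as 1/load."""
    x = 1.0 / loads(profile)
    if ALPHA == 1.0:
        return np.sum(np.log(x))
    return np.sum(x ** (1 - ALPHA) / (1 - ALPHA))

profile = [rng.choice(ACTIONS) for _ in range(N_BS)]
for _ in range(2000):
    i = rng.integers(N_BS)                          # one player revises at a time
    utils = []
    for a in ACTIONS:                               # evaluate each candidate action
        trial = list(profile); trial[i] = a
        utils.append(potential(trial))
    utils = np.array(utils) / TAU
    p = np.exp(utils - utils.max()); p /= p.sum()   # Gibbs / log-linear choice rule
    profile[i] = ACTIONS[rng.choice(len(ACTIONS), p=p)]

print("final bias profile:", profile, "potential:", round(potential(profile), 3))

For a potential game, the stationary distribution of this dynamic concentrates on potential maximizers as the temperature goes to zero, which is the intuition behind the convergence results stated above.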
Base station cooperation is a promising scheme to improve network performance for next generation cellular networks. Up to this point, research has focused on station grouping criteria based solely on geographic proximity. However, for the cooperation to be meaningful, each station participating in a group should have sufficient available resources to share with others. In this work we consider an alternative grouping criterion based on a distance that accounts for both geographic proximity and the available resources of the stations. When the network is modelled by a Poisson Point Process, we derive analytical formulas for the proportion of cooperative pairs or single stations, and for the expected sum interference from each of the groups. The results illustrate that cooperation gains strongly depend on the distribution of available resources over the network. Joint work with Anastasios Giovanidis, Philippe Martins, Laurent Decreusefond, to appear at SpaSWiN 2017.
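A minimal Monte Carlo sketch of the idea, with an invented resource-aware distance; the paper's actual metric and its analytical formulas are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
LAM, SIDE = 50, 1.0                        # PPP intensity and window side
n = rng.poisson(LAM * SIDE**2)
xy = rng.uniform(0, SIDE, size=(n, 2))     # station locations
res = rng.exponential(1.0, size=n)         # available resources per station

def mod_dist(i, j):
    geo = np.linalg.norm(xy[i] - xy[j])
    return geo / np.sqrt(res[i] * res[j])  # hypothetical resource-weighted distance

# mutually-nearest-neighbour pairing under the modified distance
nearest = np.full(n, -1)
for i in range(n):
    d = [mod_dist(i, j) if j != i else np.inf for j in range(n)]
    nearest[i] = int(np.argmin(d))
paired = sum(1 for i in range(n) if nearest[nearest[i]] == i)

print(f"{n} stations, proportion in cooperative pairs: {paired / n:.2f}")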
In Software Defined Networking (SDN) the control plane is physically separate from the forwarding plane. Control software programs the forwarding plane (e.g., switches and routers) using an open interface, such as OpenFlow. SDN envisions smart centralized controllers governing the forwarding behaviour of dumb low-cost switches.
Although this approach presents several benefits (scalability, vendor independence, ease of deployment, etc.), it is often limited by the continuous intervention of the logically centralized controller (in practice a complex distributed system) to take decisions that are based only on local state (as opposed to network-wide knowledge) and could in principle be handled directly at wire speed inside the device itself. Stateful dataplane processing has recently emerged as an enabling technology to overcome this limitation and to deploy virtual network functions at wire speed.
This talk will present Open Packet Processor (OPP), a stateful programmable dataplane abstraction that extends the OpenFlow match-action model by adding a per-flow context used to represent the history of each flow traversing the switch. OPP extracts a (configurable) flow key from the incoming packet and updates the flow context (and the associated forwarding decision) using OpenFlow-like rules that define an Extended Finite State Machine (EFSM). The EFSM reads the current state, the values of the flow registers and the content of the packet; it then decides the next state, updates the flow-register values and sets the action to apply to the packet.
The talk will show a high-speed hardware implementation of an OPP-based switch realized on an FPGA board with 4x10 GbE ports, along with some use cases that leverage the features of the OPP abstraction. The most relevant work in the field of stateful dataplane processing will also be compared with the OPP abstraction. Finally, the talk will discuss OPP's limitations and possible extensions.
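To make the EFSM abstraction concrete, here is a minimal software sketch of the per-flow state machine idea; the flow keys, states and actions in the table are hypothetical and far simpler than what OPP supports in hardware.

from collections import defaultdict

# transition table: (state, event) -> (next_state, register_update, action)
# toy policy: forward a flow only after it has been seen on port A and then on port B
TABLE = {
    ("START",  "pkt_on_A"): ("SEEN_A", lambda r: {**r, "count": 0}, "drop"),
    ("SEEN_A", "pkt_on_B"): ("OPEN",   lambda r: r,                 "forward"),
    ("OPEN",   "pkt_on_A"): ("OPEN",   lambda r: {**r, "count": r["count"] + 1}, "forward"),
    ("OPEN",   "pkt_on_B"): ("OPEN",   lambda r: {**r, "count": r["count"] + 1}, "forward"),
}

flow_ctx = defaultdict(lambda: {"state": "START", "regs": {}})   # per-flow context

def process(flow_key, event):
    ctx = flow_ctx[flow_key]
    nxt, update, action = TABLE.get((ctx["state"], event),
                                    (ctx["state"], lambda r: r, "drop"))
    ctx["state"], ctx["regs"] = nxt, update(ctx["regs"])
    return action

for ev in ["pkt_on_A", "pkt_on_B", "pkt_on_A"]:
    print(ev, "->", process(("10.0.0.1", "10.0.0.2"), ev))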
Salvatore Pontarelli received his master's degree from the University of Bologna in 2000 and his PhD in microelectronics and telecommunications from the University of Rome Tor Vergata in 2003. He currently works as a researcher at CNIT (the Italian National Inter-University Consortium for Telecommunications), in the research unit of the University of Rome Tor Vergata. In the past, Dr. Pontarelli has worked with the National Research Council (CNR), the Department of Electronic Engineering of the University of Rome Tor Vergata, the Italian Space Agency (ASI), and the University of Bristol.
He also works as a consultant for various Italian and international companies on the design of hardware for high-speed networking. He has participated in several national and EU-funded research programmes (ESA, FP7 and H2020). In 2011 he was the recipient of a Cisco Research Award for a study on the combined use of Bloom filters and Ternary CAMs. His main research activities are hash-based structures for networking applications, the use of FPGAs for high-speed network monitoring, the hardware design of software-defined network devices, and stateful programmable data planes.
Recent developments in communication technologies have brought significant changes both to our everyday lives and to data collection technologies and possibilities. Nowadays, most of us carry sensors everywhere and leave digital records of our activities in potentially several large-scale databases. This presents novel opportunities for researchers, promising detailed data on social phenomena that was previously hard or even prohibitively expensive to collect, while also raising new concerns about privacy and the commercial use of personal data. In this talk, I will present some recent research exploiting these new possibilities, in which I participated as a postdoctoral researcher at the Senseable City Lab at MIT and previously as a PhD student at the Eotvos Lorand University in Budapest, Hungary. The work I focus on in this talk is based on data collected from Twitter, a microblogging service where a sample of geo-tagged messages posted by users can be accessed by researchers. In this work, we showed that some large-scale spatial differences in language are present in social media data and can be related to important demographic characteristics observed in census and survey data. Further, I present work showing that there is an important spatial structure in social connections, and link it to the concept of navigability, i.e. being able to find short paths between individuals in the social network based on information about spatial proximity.
Wi-Fi Direct is a popular wireless technology which is integrated in most of today's smartphones and tablets. This technology allows a set of devices to dynamically negotiate and select a group owner which plays the role of access point. We demonstrate that Wi-Fi Direct-based Device-to-Device (D2D) communications can be used to offload Wi-Fi access points in a dense wireless network. Clustering, power control and transmission scheduling techniques are applied to optimize network performance and reduce the file download time by up to 30%.
Dr. Mai-Trang Nguyen is Associate Professor at University Pierre and Marie Curie (Paris 6), France. She received her PhD degree in Computer Science in 2003 from the University of Paris 6, in collaboration with Telecom ParisTech. In 2004 and 2005, she was a postdoctoral researcher at France Telecom and the University of Lausanne, Switzerland, respectively. She has been involved in the European FP7 4WARD project working on the future Internet and the FP7 MobileCloud project. Her research interests include Internet architecture, SDN/NFV, multihoming, cognitive radio, network coding, data analytics and D2D communications.
The talk starts by presenting the current approaches for QoE-aware service management in the Internet, i.e., application-oriented or network-oriented management, which are mostly characterized by the tools available to the stakeholders implementing the process and by their own interests. Then, the current limits and issues of these approaches are highlighted, together with the current directions of development in the field. These are mostly characterized by a strong evolution towards the virtualization of services through the introduction of the SDN and NFV paradigms, which allow for a more flexible management of the services, a better control of the QoE, and a potentially stronger cooperation among the different actors involved in the service provisioning chain.
Luigi Atzori (SM'09) is Associate Professor at the Department of Electrical and Electronic Engineering at the University of Cagliari (Italy), where he leads the laboratory of Multimedia and Communications with around 15 affiliates (http://mclab.diee.unica.it). L. Atzori's research interests are in multimedia communications and computer networking (wireless and wireline), with emphasis on multimedia QoE, multimedia streaming, NGN service management, service management in wireless sensor networks, and architecture and services in the Internet of Things. He is the coordinator of the Marie Curie Initial Training Network on QoE for multimedia services (http://qoenet-itn.eu), which involves ten institutions in Europe and one in South Korea. He has been an editor for the ACM/Springer Wireless Networks Journal and a guest editor for many journals, including the IEEE Communications Magazine, the Springer MONET and the Elsevier Signal Processing: Image Communication journals. He is a member of the steering committee of the IEEE Transactions on Multimedia, and a member of the editorial boards of the IEEE IoT Journal, the Elsevier Ad Hoc Networks and the Elsevier Digital Communications and Networks journals. He has served as technical program chair for various international conferences and workshops, and as a reviewer and panelist for many funding agencies, including H2020, FP7, COST and the Italian MIUR.
As computing services are increasingly cloud-based, corporations are investing in cloud-based security measures. The Security-as-a-Service (SECaaS) paradigm allows customers to outsource security to the cloud, through the payment of a subscription fee. However, no security system is bulletproof, and even one successful attack can result in the loss of data and revenue worth millions of dollars. To guard against this eventuality, customers may also purchase cyber insurance to receive recompense in the case of loss. To achieve cost effectiveness, it is necessary to balance provisioning of security and insurance, even when future costs and risks are uncertain. This presentation introduces a stochastic optimization model to optimally provision security and insurance services in the cloud. Since the model is a mixed integer problem, we also introduce a partial Lagrange multiplier algorithm that takes advantage of the total unimodularity property to find the solution in polynomial time. We show the effectiveness of these techniques using numerical results based on real attack data to demonstrate a realistic testing environment, and find that security and insurance are interdependent.
Dusit Niyato is currently an associate professor in the School of Computer Science and Engineering at the Nanyang Technological University, Singapore. He received his B.E. from King Mongkut's Institute of Technology Ladkrabang (KMITL), Thailand in 1999 and his Ph.D. in Electrical and Computer Engineering from the University of Manitoba, Canada in 2008. He has published more than 300 technical papers in the area of wireless and mobile networking and authored the books "Resource Management in Multi-Tier Cellular Wireless Networks", "Game Theory in Wireless and Communication Networks: Theory, Models, and Applications" and "Dynamic Spectrum Access and Management in Cognitive Radio Networks". He won the Best Young Researcher Award of the IEEE Communications Society (ComSoc) Asia Pacific (AP) and the 2011 IEEE Communications Society Fred W. Ellersick Prize Paper Award. He is a distinguished lecturer of the IEEE Communications Society. His works have received more than 13,000 citations (Google Scholar).
Currently, he serves as an area editor of IEEE Transactions on Wireless Communications (Radio Management and Multiple Access), an associate editor of IEEE Transactions on Communications, and an editor of IEEE Communications Surveys and Tutorials (COMST) and IEEE Transactions on Cognitive Communications and Networking (TCCN). He was a guest editor of the IEEE Journal on Selected Areas in Communications special issues on Cognitive Radio Networking & Communications and on Recent Advances in Heterogeneous Cellular Networks. He is a Fellow of the IEEE.
After many years of constant evolution, the Internet has approached a historic inflection point where mobile platforms, applications and services are poised to replace the fixed-host/server model that has dominated the Internet since its inception. Driven by the strikingly different Internet population of mobile devices and services, new fundamental communication abstractions are required, and the current IP-based Internet fails to meet their requirements in a satisfactory fashion. Starting from these key considerations, this talk aims to introduce the audience to a new approach to networking, built around the architectural concept of Named-Object based networking and the power that lies behind it. Looking at the different architectures presented over the years, a set of new fundamental abstractions are defined, providing a comprehensive analysis of their properties and how they could be met. This study leads to the presentation of the MobilityFirst architecture, in which the "narrow waist" of the protocol stack is based on Named-Objects, which enable a broad range of capabilities in the network.
As an example of the potential of the Named-Object abstraction, an analysis of how advanced cloud services can be supported in the proposed architecture is presented. In particular, the concept of naming is extended to natively support virtual network identifiers. It is shown that the virtual network capability can be designed by introducing the concept of a "Virtual Network Identifier (VNID)" which is managed as a Named-Object. Further, the design supports the concept of Application Specific Routing (ASR), which enables network routing decisions to be made with awareness of application parameters such as cloud server workload. Experimental results show that the new framework provides a clean and simple logic for defining and managing virtual networks while limiting the performance impact produced by the additional overhead of running such a system. Moreover, using a prototype of the architecture deployed on a nation-wide testbed, the potential of ASR is demonstrated in a cloud service scenario.
In many applications, such as cloud computing or managing server farm resources, an incoming task or job has to be matched with an appropriate server in order to minimise the latency associated with the processing. Ideally the best choice would be to match a job to the fastest available server; however, when there are thousands of servers, obtaining information on the state of the servers is expensive. Pioneered in the 1990s by Vvedenskaya and Dobrushin in Russia and Mitzenmacher in the US, the idea of randomised sampling of a few servers was popularised as the "power of two" scheme: sampling two servers at random and sending the job to the "better" one (i.e. the one with the shortest queue, or most resources) provides most of the benefits of sampling all the servers.
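A quick simulation sketch of this effect, using a toy discrete-time model with deterministic service (not the general-service-time loss model analysed in the talk): in each slot a Poisson batch of jobs arrives, each job samples d queues uniformly at random and joins the shortest, and every non-empty queue then completes one job.

import numpy as np

rng = np.random.default_rng(2)

def simulate(d, n=200, lam=0.9, slots=1000):
    q = np.zeros(n, dtype=int)
    for _ in range(slots):
        for _ in range(rng.poisson(lam * n)):
            chosen = rng.choice(n, size=d, replace=False)
            q[chosen[np.argmin(q[chosen])]] += 1      # join the shortest sampled queue
        q = np.maximum(q - 1, 0)                      # one service per queue per slot
    return q.mean(), (q >= 4).mean()

for d in (1, 2):
    mean_q, tail = simulate(d)
    print(f"d={d}: mean queue length {mean_q:.2f}, fraction of queues with >= 4 jobs {tail:.3f}")

Going from d=1 (purely random assignment) to d=2 shows the drastic reduction in both the mean queue length and the tail, which is the qualitative point behind the power-of-d results.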
In the talk I will discuss multi-server loss models under the power-of-d routing scheme when service time distributions are general with finite mean. Previous works on these models assume that the service times are exponentially distributed, and insensitivity was suggested through simulations; showing insensitivity to the service time distribution has remained an open problem. We address this problem by considering Mixed-Erlang service time distributions, a class that is dense in the class of general distributions on $(0,\infty)$. We derive the mean field equations (MFE) of the empirical distributions for the system and establish the existence and uniqueness of the fixed point of the MFE. Furthermore, we show that this fixed point coincides with the fixed point obtained from the MFE of a system with exponential service times, showing that the fixed point is insensitive to the distribution. We provide numerical evidence of the global asymptotic stability of the fixed point, which would then imply that the fixed point is indeed the stationary distribution. We conclude with a brief discussion of the case of the MFE with general service times, showing that the MFE is then characterized by a PDE whose stationary point coincides with the fixed point in the case with exponential service times. The techniques developed in this work are applicable to the study of mean field limits for Markov processes on general state spaces and of insensitivity properties of other queueing models.
The speaker was educated at the Indian Institute of Technology, Bombay (B.Tech, 1977) and Imperial College, London (MSc, DIC, 1978), and obtained his PhD under A. V. Balakrishnan at UCLA in 1983. He is currently a University Research Chair Professor in the Dept. of ECE at the University of Waterloo, Ont., Canada, where he has been since September 2004. Prior to this he was Professor of ECE at Purdue University, West Lafayette, USA. He is a D.J. Gandhi Distinguished Visiting Professor at the Indian Institute of Technology, Bombay. He is a Fellow of the IEEE and the Royal Statistical Society. He is a recipient of the Best Paper Awards at INFOCOM 2006, the International Teletraffic Congress 2015 and Performance 2015, and was runner-up for the Best Paper Award at INFOCOM 1998. His research interests are in modeling, control, and performance analysis of both wireline and wireless networks, and in applied probability and stochastic analysis with applications to queueing, filtering, and optimization.
We present a strategic investment framework for mobile TV delivery by two potential operators: broadcaster (DVB) and cellular (MNO), in the presence of demand and operating cost uncertainties. We consider two settings: a cooperative one where the two operators run a convergent hybrid network, and a competitive one in which each operator builds its own network, if it decides to enter the market. We define a real option game theoretic framework and propose a new bi-level dynamic programming algorithm that solves the optimal profit maximization problem and yields the optimal investment decisions of both players.
Although symmetric ciphers may provide strong computational security, a key leakage makes the encrypted data vulnerable. In a distributed storage environment, reinforcement of data protection consists of dispersing data over multiple servers in a way that no information can be obtained from data fragments until a defined threshold of them has been collected. A secure fragmentation is usually enabled by secret sharing, information dispersal algorithms or data shredding. However, these solutions suffer from various limitations, like additional storage requirement or performance burden. Therefore, we introduce a novel flexible keyless fragmentation scheme, balancing memory use and performance with security. It could be applied in many different contexts, such as dispersal of outsourced data over one or multiple clouds or in resource-restrained environments like sensor networks.
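For context, here is a minimal sketch of the classical baseline that such keyless schemes aim to replace or complement, namely Shamir's (k, n) secret sharing over a prime field; this is not the proposed fragmentation scheme itself, only the threshold property it also targets (any k fragments reconstruct, fewer reveal nothing).

import random

P = 2**127 - 1                      # a Mersenne prime used as the field modulus

def split(secret, n, k):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, n=5, k=3)
print(reconstruct(shares[:3]))      # any 3 of the 5 shares recover the secret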
This talk will present the FUTEBOL project, a joint EU-Brazil project focused on the creation of experimental testbeds. The project goal is to provide access to advanced experimental facilities in Europe and Brazil for research and education across the wireless and optical domains. To accomplish this, we will develop a converged control framework to support optical/wireless experimentation on the federated research infrastructure of all associated partners/institutions. Further, industry-driven use cases will be demonstrated using this testbed to produce advances in research at the optical/wireless boundary. The talk will provide an overview of the project, presenting the proposed testbeds as well as the experiments that we will perform to showcase the importance of optical/wireless integration in today's networks. Those use cases involve topics such as C-RAN, radio over fiber, and low-latency networking for real-time applications in IoT and immersive services.
PEERING: An AS for Us. Italo Cunha (Universidade Federal de Minas Gerais, Brazil). LINCS Seminar Room, 1 February 2017, 14h30-15h00.
PEERING is a testbed that provides safe and easy access for researchers to the Internet’s BGP routing system, enabling and inspiring transformational research. Traditionally, the barriers to conduct Internet routing experiments hindered progress. Most research on interdomain routing is either based on passive observation of existing routes, which cannot capture how the Internet will respond to changes in protocols or policies, or based on simulations, whose fidelity is restricted by known limitations in our understanding of Internet topology and policy. To move beyond these limited experimental approaches, the PEERING testbed connects (via BGP) with real networks at universities and Internet exchange points around the world. Instead of being observers of the Internet ecosystem, researchers become participants, running experiments that announce/select routes and send/receive traffic directly with these networks. In this talk, we present an introduction to the testbed and a sample of the research it has enabled.
"Most unwritten languages today have no known grammar, and are rather governed by "unspoken rules". Similarly, we think that the young discipline of networking is still a practice that lacks a deep understanding of the rules that govern it. This situation results in a loss of time and efforts. First, since the rules are unspoken, they are not systematically reused. Second, since there is no grammar, it is impossible to assert if a sentence is correct. Comparing two networking approaches or solutions is sometimes a synonym of endless religious debates. Drawing the proper conclusion from this claim, we advocate that networking research should spend more efforts on better understanding its rules as a first step to automatically reuse them. To illustrate our claim, we focus in this work on one broad family of networking connectivity problems. We show how different instances of this problem, which were solved in parallel with no explicit knowledge reuse, can be derived from a small set of facts and rules implemented in a knowledge-based system."
The notion of performance seems crucial to many fields of science and engineering. Some scientists are concerned with the performance of algorithms or computing hardware, others evaluate the performance of communication channels, of materials, of chemicals, psychologists investigate the performances of experimental participants, etc. Yet, rather surprisingly, the performance notion is generally used without any explicit definition. I propose this: a performance is a quantitative behavioral measure that an agent deliberately tries to either minimize or maximize. The scope of the performance concept seems impressively large: the agent whose behavioral performance is being scored may be a human, a coalition of humans (e.g., an enterprise, an academic institution), or even a human product (e.g., a chemical, an algorithm, a market share).
Performance measures are random variables of a very special sort: their distributions are strongly skewed as a direct consequence of the extremization (minimization or maximization) pressure that constitutes their defining characteristic. Mainstream statistics takes it for granted that any distribution needs to be summarized by means of some representative central-tendency indicator (e.g., an arithmetic mean, a median), and so the asymmetry of performance distributions has traditionally been considered an unfortunate complication. In fact I will argue that when it comes to statistics of performance, averages become essentially irrelevant. The point is easy to make with the example of spirometry testing: practitioners of spirometry never compute an average, they retain the best measure of respiratory performance (i.e., the sample max) and flatly discard all other measures. And they are quite right to do so, as a simple model will help explain. The important general lesson to be learned from spirometry is that the better a sample measure of performance, the more valid it is as an estimate of the capacity of performance, the theoretical upper limit whose estimation is in fact the goal of most performance testing in experimental science. One likely reason why experimenters in many fields have recourse to the measurement of performances is that non-extremized behavior tends to be random, whereas performance capacities can abide by quantitative laws. This is easy to illustrate with empirical data from human experimental psychology (Hick's law, George Miller's magic number, Fitts' law). An experimenter myself, I can only conclude with a question to statisticians: don't we need a brand new sort of statistics to fully acknowledge and accommodate the rather special nature of all these performance measures that these days we encounter everywhere, not just in virtually every sector of society but also in many fields of scientific research?
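A toy numerical illustration of this point, assuming an invented performance model in which each attempt achieves the capacity C minus a skewed random shortfall: the sample mean stays biased however many attempts are observed, while the sample best approaches C as the number of attempts grows.

import numpy as np

rng = np.random.default_rng(3)
C = 10.0                                     # true capacity (upper limit)
for n in (5, 20, 100):
    samples = C - rng.gamma(shape=2.0, scale=1.0, size=(10_000, n))  # skewed shortfalls
    mean_est = samples.mean(axis=1)
    best_est = samples.max(axis=1)
    print(f"n={n:3d}: mean-estimator bias {np.mean(mean_est - C):+.2f}, "
          f"best-of-n bias {np.mean(best_est - C):+.2f}")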
The tolerance of faults and Byzantine behaviors has a long tradition in distributed computing. Now, more than ever, fault-tolerant and Byzantine-tolerant distributed computing finds applications in various emergent areas ranging from sensor networks to distributed storage and blockchains. This talk presents recent results on how distributed computing deals with various incarnations of faults and Byzantine behaviors, with case studies taken from body area networks, mobile robot networks, distributed storage and blockchains.
Connected devices, as key constituent elements of the Internet of Things (IoT), are flooding our real-world environment. This digital wave paves the way for a major technological breakthrough poised to deeply change our daily lives. Nevertheless, some strong issues remain to be addressed. The most dominant one relates to our ability to leverage the whole IoT service space and, more specifically, to our ability to compose IoT services from multiple connected devices by cleverly selecting them together with the required software functions, whatever our technical skills. In such a challenging context, we first propose a rich and flexible abstraction framework relying on Attributed Typed Graphs, which enables us to represent how known IoT services are composed from different perspectives. Then, capitalizing on this modeling tool and focusing on the way IoT services interact with the physical environment, lightweight service signatures are computed using a physical-interface-based algorithm in order to characterize IoT services. Finally, we discuss how leveraging the computed signatures can allow for autonomously recommending IoT services to end-users.
Identifying causal (rather than merely correlative) relationships in physical systems is a difficult task, particularly if it is not feasible to perform controlled experiments. Granger's notion of causality was developed first in economics beginning in the 1960s and can be used to form a network of "plausible causal relations" given only the opportunity to observe the system. This method is applied, for example, in neuro-imaging to identify relationships amongst brain regions, and in biostatistics to explore gene regulatory networks. In this talk, we provide an overview of the notion of Granger Causality, some methods for learning Granger Causality Networks in practice, and our current directions for research. (Provide anonymous feedback at https://www.surveymonkey.com/r/YVQJ99X )
In a competitive setting, we consider the problem faced by a firm that makes decisions concerning both the location and the service levels of its facilities, taking into account that users patronize the facility that maximizes their individual utility, expressed as the sum of travel time, queueing delay, and a random term. This situation can be modelled as a mathematical program with equilibrium constraints that involves discrete and continuous variables, as well as linear and nonlinear functions. This program is reformulated as a standard bilevel program that can be approximated, through the linearization of the nonlinear functions involved, as a mixed integer linear program that yields 'quasi-optimal' solutions. Since this approach does not scale well, we have in parallel developed heuristic procedures that exploit the very structure of the problem. Based on theoretical and computational results pertaining to this application, we will discuss further developments in the area of nonlinear facility location.
Temporal collective profiles generated by mobile network users can be used to predict network usage, which in turn can be used to improve the performance of the network to meet user demands. This presentation describes a prediction method for temporal collective profiles that is suitable for online network management. Using a weighted graph representation, the target sample is observed during a given period to determine a set of neighboring profiles that are considered to behave similarly enough. The prediction of the target profile is based on the weighted average of its neighbors, where the optimal number of neighbors is selected through a form of variable neighborhood search. This method is applied to two datasets, one provided by a mobile network service provider and the other by a Wi-Fi service provider. The proposed prediction method can conveniently characterize user behavior via the graph representation, while outperforming existing prediction methods. Also, unlike existing methods that rely on categorization, it has a low computational complexity, which makes it suitable for online network analysis.
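A simplified sketch of the prediction step on synthetic profiles; the plain Euclidean similarity and the fixed candidate values of k below are stand-ins for the paper's weighted graph representation and variable neighborhood search.

import numpy as np

rng = np.random.default_rng(5)
H, N = 24, 60
base = np.sin(np.linspace(0, 2 * np.pi, H))
profiles = base + 0.3 * rng.normal(size=(N, H))       # synthetic hourly profiles
target_history, target_future = profiles[0, :12], profiles[0, 12:]

def predict(k):
    d = np.linalg.norm(profiles[1:, :12] - target_history, axis=1)   # similarity on observed hours
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                         # closer neighbours weigh more
    return (w[:, None] * profiles[1:][idx, 12:]).sum(axis=0) / w.sum()

for k in (3, 5, 10):                                  # k would be tuned by the neighborhood search
    err = np.linalg.norm(predict(k) - target_future)
    print(f"k={k}: prediction error {err:.2f}")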
Understanding network health is essential to improving Internet reliability. For instance, detecting disruptions in peer and provider networks identifies fixable connectivity problems. Currently this task is time consuming as it involves a fair amount of manual observation because operators have little visibility into other networks.
Here we leverage existing public RIPE Atlas measurement data to monitor and analyze network conditions; creating no new measurements. We demonstrate a set of complementary methods to detect network disruptions using traceroute measurements. A novel method of detecting changes in delay is used to identify congested links, and a packet forwarding model is employed to predict traffic paths and to identify faulty routers and links in cases of packet loss. In addition, aggregating results from each method allows us to easily monitor a network and identify coordinated reports manifesting significant network disruptions, reducing uninteresting alarms.
Our contributions consist of a statistical approach to providing robust estimation of Internet delays and a study of hundreds of thousands of link delays. We present three cases demonstrating that the proposed methods detect real disruptions and provide valuable insights, as well as surprising findings, on the location and impact of the identified events.
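As an illustration of robust delay-change detection in this spirit (the thresholds, window sizes and median/MAD statistic below are illustrative choices, and the study's actual statistical method may differ):

import numpy as np

rng = np.random.default_rng(6)
delays = np.concatenate([rng.normal(30, 2, 300),      # normal delay differences (ms)
                         rng.normal(45, 2, 100)])     # congestion episode

def mad(x):
    return np.median(np.abs(x - np.median(x))) + 1e-9

WINDOW, THRESH = 50, 5.0
reference = delays[:200]
ref_med, ref_mad = np.median(reference), mad(reference)

for start in range(200, len(delays) - WINDOW + 1, WINDOW):
    win = delays[start:start + WINDOW]
    score = abs(np.median(win) - ref_med) / ref_mad   # deviation in robust units
    if score > THRESH:
        print(f"alarm: samples {start}-{start + WINDOW}, deviation {score:.1f} MADs")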
Community detection is a fundamental problem in the field of graph mining. The objective is to find densely connected clusters of nodes, so-called communities, possibly overlapping. While most existing algorithms work on the entire graph, it is often irrelevant in practice to cluster all nodes. A more practically interesting problem is to detect the community to which a given set of nodes, the so-called "seed nodes", belong. Moreover, the exploration of the whole network is generally computationally expensive, if not impossible, and algorithms that only take into account the local structure of the graph around seed nodes provide a big advantage. For these reasons, there is a growing interest in the problem of "local" community detection, also known as "seed set expansion". We solve this problem through a low-dimensional embedding of the graph based on random walks starting from the seed nodes.
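A simplified single-score sketch of seed-set expansion by random walks: the talk's method embeds the walks in a low-dimensional space, while here nodes are simply ranked by the t-step walk probability started from the seeds.

import numpy as np

def seed_expansion(adj, seeds, t=4, size=10):
    P = adj / adj.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    p = np.zeros(adj.shape[0])
    p[seeds] = 1.0 / len(seeds)
    for _ in range(t):                              # distribution after t random-walk steps
        p = p @ P
    return np.argsort(-p)[:size]

# toy graph: two cliques of 8 nodes joined by a single edge
n = 16
adj = np.zeros((n, n))
adj[:8, :8] = 1; adj[8:, 8:] = 1
np.fill_diagonal(adj, 0)
adj[7, 8] = adj[8, 7] = 1

print(sorted(seed_expansion(adj, seeds=[0, 1], size=8)))   # recovers the first clique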
We are currently in the middle of a structural change toward software-defined ICT infrastructure, which attempts to transform the existing silo-based infrastructure into a futuristic composable one by integrating IoT-based smart/mobile things, SDN-coordinated interconnect edges, and NFV-assisted and software-driven cloud cores. This end-user-driven infrastructure transformation can be supported by diverse open-source community projects (e.g., the Linux Foundation's OVS/OpenSwitch/ONOS/CORD/ODL/OPNFV/Open-O/..., the Facebook-initiated OCP, and others). Aligning with this upcoming transition, in this talk the prototyping experience of the OF@KOREN & OF@TEIN SmartX playgrounds will be shared, focusing on the hyper-convergent SmartX Boxes. Then, for multisite edge clouds, the ongoing design trials of an affordable SmartX K-Cluster will be explained. Finally, by leveraging DevOps-based automation, preliminary prototyping for IoT-Cloud services will be discussed, taking an example service scenario for smart energy.
We have examined maximum vertex coloring of random geometric graphs, in an arbitrary but fixed dimension, with a constant number of colors, in a recent work with S. Borst. Since this problem is neither scale-invariant nor smooth, the usual methodology to obtain limit laws cannot be applied. We therefore leverage different concepts based on subadditivity to establish convergence laws for the maximum number of vertices that can be colored. For the constants that appear in these results, we have provided the exact value in dimension one, and upper and lower bounds in higher dimensions.
In an ongoing work with B. Blaszczyszyn, we study the distributional properties of maximum vertex coloring of random geometric graphs. Moreover, we intend to generalize the study over weakly-μ-sub-Poisson processes.
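For intuition, a small Monte Carlo sketch: colour a random geometric graph greedily with a fixed palette and count the coloured vertices. This heuristic only gives a lower bound on the maximum number of colourable vertices whose scaling is characterised in the work; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(7)
N, RADIUS, COLORS = 500, 0.07, 3
pts = rng.uniform(0, 1, size=(N, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
neighbors = [np.flatnonzero((dist[i] < RADIUS) & (np.arange(N) != i)) for i in range(N)]

color = np.full(N, -1)
for v in rng.permutation(N):                       # greedy pass in random order
    used = {color[u] for u in neighbors[v] if color[u] >= 0}
    free = [c for c in range(COLORS) if c not in used]
    if free:
        color[v] = free[0]                         # colour if a colour is still available

print(f"coloured {np.count_nonzero(color >= 0)} of {N} vertices with {COLORS} colours")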
The Jupyter Notebook (http://jupyter.org) is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more. In this hands-on talk, we will learn how to use a notebook, how to use plugins (as well as a set of useful ones), and how to make online presentations (such as http://www.lincs.fr/wp-content/uploads/2013/01/04-Power-Law-Course.html and https://www.lincs.fr/wp-content/uploads/2016/10/kleinberg.html).
We consider a scenario where an Internet Service Provider (ISP) serves users that choose digital content among M Content Providers (CP). In the status quo, these users pay both access fees to the ISP and content fees to each chosen CP; however, neither the ISP nor the CPs share their profit. We revisit this model by introducing a different business model where the ISP and a CP may have motivation to collaborate in the framework of caching. The key idea is that the ISP deploys a cache for a CP provided that they share both the deployment cost and the additional profit that arises due to caching. Through the lens of coalitional games, our contributions include the application of the Shapley value for a fair splitting of the profit, the stability analysis of the coalition, and the derivation of closed-form formulas for the optimal caching policy.
Our model captures not only the case of non-overlapping contents among the CPs, but also the more challenging case of overlapping contents; for the latter case, a non-cooperative game among the CPs is introduced and analyzed to capture the negative externality on the demand of a particular CP when caches for other CPs are deployed.
Joint work with S. Elayoubi, E. Altman, and Y. Hayel to be presented at the 10th EAI International Conference on Performance Evaluation Methodologies and Tools (Valuetools 2016). The full version of the paper has been selected to be published in a special issue of the Elsevier journal of Performance Evaluation (PEVA).
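To illustrate the fair-splitting step only, here is an exact Shapley-value computation on a three-player toy example; the characteristic function below is invented and unrelated to the paper's caching model, it simply shows how marginal contributions are averaged over all arrival orders.

import numpy as np
from itertools import permutations

PLAYERS = ("ISP", "CP1", "CP2")

def v(coalition):
    profit = {frozenset(): 0, frozenset({"ISP"}): 0,
              frozenset({"CP1"}): 0, frozenset({"CP2"}): 0,
              frozenset({"CP1", "CP2"}): 0,          # no caching profit without the ISP
              frozenset({"ISP", "CP1"}): 6,
              frozenset({"ISP", "CP2"}): 4,
              frozenset({"ISP", "CP1", "CP2"}): 9}
    return profit[frozenset(coalition)]

shapley = dict.fromkeys(PLAYERS, 0.0)
perms = list(permutations(PLAYERS))
for order in perms:                                  # average marginal contributions
    seen = []
    for p in order:
        shapley[p] += (v(seen + [p]) - v(seen)) / len(perms)
        seen.append(p)

print({p: round(x, 3) for p, x in shapley.items()})  # shares sum to v(grand coalition) = 9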
Bootstrap percolation is a well-known activation process in a graph, in which a node becomes active when it has at least r active neighbors. Such a process, originally studied on regular structures, has recently been investigated also in the context of random graphs, where it can serve as a simple model for a wide variety of cascades, such as the spreading of ideas, trends and viral contents over large social networks. In particular, it has been shown that in G(n,p) the final active set can exhibit a phase transition for a sub-linear number of seeds. In this paper, we propose a unique framework to study similar sub-linear phase transitions for a much broader class of graph models and epidemic processes. Specifically, we consider i) a generalized version of bootstrap percolation in G(n,p) with random activation thresholds and random node-to-node influences; ii) different random graph models, including graphs with a given degree sequence and graphs with community structure (block model). The common thread of our work is to show the surprising sensitivity of the critical seed set size to extreme values of distributions, which makes some systems dramatically vulnerable to large-scale outbreaks. We validate our results by running simulations on both synthetic and real graphs. Joint work with M. Garetto and G. Torrisi, appeared at ACM SIGMETRICS 2016.
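A small simulation sketch of basic bootstrap percolation on G(n,p) (fixed threshold r and no random influences, unlike the generalized model above), which makes the sharp dependence of the final active set on the seed set size easy to observe; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(8)

def bootstrap_percolation(n=2000, p=0.004, r=2, n_seeds=60):
    adj = rng.random((n, n)) < p
    adj = np.triu(adj, 1); adj = adj | adj.T          # symmetric G(n, p), no self-loops
    active = np.zeros(n, dtype=bool)
    active[rng.choice(n, size=n_seeds, replace=False)] = True
    while True:
        newly = (~active) & (adj[:, active].sum(axis=1) >= r)   # nodes with >= r active neighbours
        if not newly.any():
            return active.sum()
        active |= newly

for seeds in (20, 40, 60, 80):
    print(f"{seeds} seeds -> final active set size {bootstrap_percolation(n_seeds=seeds)}")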
The Web is the largest public big data repository that humankind has created. In this overwhelming data ocean, we need to be aware of the quality and, in particular, of the biases that exist in this data. In the Web, biases also come from redundancy and spam, as well as from algorithms that we design to improve the user experience. This problem is further exacerbated by biases that are added by these algorithms, especially in the context of search and recommendation systems. They include selection and presentation bias in many forms, interaction bias, social bias, etc. We give several examples and their relation to sparsity and privacy, stressing the importance of the user context to avoid these biases.
Ricardo Baeza-Yates's areas of expertise are web search and data mining, information retrieval, data science and algorithms. He is CTO of NTENT, a semantic search technology company. Before that, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from January 2006 to February 2016. He is also a part-time Professor at the DTIC of the Universitat Pompeu Fabra in Barcelona, Spain, as well as at the DCC of the Universidad de Chile in Santiago. Until 2004 he was Professor and founding director of the Center for Web Research at the latter institution. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989. He is co-author of the best-seller Modern Information Retrieval textbook published by Addison-Wesley in 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the board of governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions.
Genomic computing is a new science focused on understanding the functioning of the genome, as a premise to fundamental discoveries in biology and medicine. Next Generation Sequencing (NGS) allows the production of the entire human genome sequence at a cost of about 1000 US$; many algorithms exist for the extraction of genome features, or "signals", including peaks (enriched regions), mutations, or gene expression (intensity of transcription activity). The missing gap is a system supporting data integration and exploration, giving a biological meaning to all the available information; such a system can be used, e.g., for better understanding cancer or how the environment influences cancer development.
The GeCo Project (Data-Driven Genomic Computing, an ERC Advanced Grant currently undergoing contract preparation) has the objective of revisiting genomic computing through the lens of basic data management, through models, languages, and instruments; the research group of DEIB is among the few which are centering their focus on genomic data integration. Starting from an abstract model, we have already developed a system that can be used to query processed data produced by several large genomic consortia, including ENCODE and TCGA; the system internally employs the Spark, Flink, and SciDB data engines, and prototypes can already be accessed from Cineca servers or downloaded from PoliMi servers. During the five years of the ERC project, the system will be enriched with data analysis tools and environments and will be made increasingly efficient.
Most diseases have a genetic component, hence a system which is capable of integrating big genomic data is of paramount importance. Among the objectives of the project is the creation of an open-source system available to biological and clinical research; while the GeCo project will provide public services which only use public data (anonymized and made available for secondary use, i.e., knowledge discovery), the use of the GeCo system within protected clinical contexts will enable personalized medicine, i.e. the adaptation of therapies to the specific genetic features of patients. The most ambitious objective is the development, during the 5-year ERC project, of an Internet for Genomics, i.e. a protocol for collecting data from consortia and individual researchers, and a Google for Genomics, supporting indexing and search over huge collections of genomic datasets.
Vaucanson-R is a software platform written essentially in C++ (and Python) for the manipulation of finite automata and transducers in a very general setting. It is the latest generation of a series of libraries started in 2001. Its philosophy comes from this long experience and is threefold: efficiency, genericity and accessibility.
The platform indeed provides different access points (generic C++, C++, Python, command-line programs) depending on one's knowledge of programming. It is easy to devise and/or execute simple algorithms on standard (boolean) automata and get a visual feedback of the result. On the other hand, it is also possible to write efficient and generic programs that work on weighted automata, for many kinds of weight semirings.
In this presentation, we will show how to use the Python and command-line layers interactively: building automata, executing standard algorithms, etc. We will then give a few hints on how to go further.
Joint work with Sylvain Lombardy (Bordeaux), Nelma Moreira (Porto), Rogério Reis (Porto), and Jacques Sakarovitch.
Since its beginning, the Vaucanson project has been supported by the InfRes department and the LTCI at Telecom ParisTech. It has also been supported by an ANR project (2011-2014). Until 2014, Vaucanson was developed as a joint project with the LRDE at EPITA (Akim Demaille, Alexandre Duret-Lutz and their students).
Victor Marsault defended his thesis in computer science at Telecom ParisTech in 2016. He will hold a post-doctoral position at the University of Liège, Belgium, starting in October.
Motivated by community detection, we characterise the spectrum of the non-backtracking matrix B in the Degree-Corrected Stochastic Block Model.
Specifically, we consider a random graph on n vertices partitioned into two equal-sized clusters. The vertices have i.i.d. weights $\{\phi_u\}_{u=1}^n$ with second moment $\Phi^{(2)}$. The intra-cluster connection probability for vertices u and v is $\phi_u \phi_v a/n$ and the inter-cluster connection probability is $\phi_u \phi_v b/n$.
We show that with high probability the following holds: the leading eigenvalue of the non-backtracking matrix B is asymptotic to $\rho = \frac{a+b}{2}\Phi^{(2)}$. The second eigenvalue is asymptotic to $\mu_2 = \frac{a-b}{2}\Phi^{(2)}$ when $\mu_2^2 > \rho$, but asymptotically bounded by $\sqrt{\rho}$ when $\mu_2^2 \le \rho$. All the remaining eigenvalues are asymptotically bounded by $\sqrt{\rho}$. As a result, a clustering positively correlated with the true communities can be obtained from the second eigenvector of B in the regime where $\mu_2^2 > \rho$.
In a previous work we showed that detection is impossible when $\mu_2^2 < \rho$, meaning that a phase transition occurs in the sparse regime of the Degree-Corrected Stochastic Block Model.
As a corollary, we obtain that Degree-Corrected Erdos-Renyi graphs asymptotically satisfy the graph Riemann hypothesis, a quasi-Ramanujan property.
A by-product of our proof is a weak law of large numbers for local-functionals on Degree-Corrected Stochastic Block Models, which could be of independent interest.
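For readers who want to experiment, here is a sketch that builds the non-backtracking matrix of a small two-community graph (indexed by directed edges, with B[(u,v),(v,w)] = 1 iff w differs from u) and clusters nodes with its second eigenvector; the planted-partition generator and the graph sizes are arbitrary choices for illustration.

import numpy as np
import networkx as nx
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigs

# two planted communities of 150 nodes each (nodes 0-149 and 150-299)
G = nx.planted_partition_graph(2, 150, p_in=0.05, p_out=0.01, seed=0)
edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(edges)}

rows, cols = [], []
for (u, v) in edges:
    for w in G.neighbors(v):
        if w != u:                                   # non-backtracking condition
            rows.append(index[(u, v)]); cols.append(index[(v, w)])
B = coo_matrix((np.ones(len(rows)), (rows, cols)),
               shape=(len(edges), len(edges))).tocsr()

vals, vecs = eigs(B, k=3, which='LR')                # eigenvalues of largest real part
order = np.argsort(-vals.real)
second = vecs[:, order[1]].real

score = np.zeros(G.number_of_nodes())
for (u, v), i in index.items():
    score[v] += second[i]                            # aggregate edge entries onto target nodes

labels = (score > 0).astype(int)
truth = np.array([0] * 150 + [1] * 150)
accuracy = max((labels == truth).mean(), ((1 - labels) == truth).mean())
print("top eigenvalues:", vals.real[order].round(2), "clustering accuracy:", round(accuracy, 2))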
Network Function Virtualization (NFV) is an emerging approach that has received attention from both academia and industry as a way to improve flexibility, efficiency, and manageability of networks. NFV enables new ways to operate networks and to provide composite network services, opening the path toward new business models. As in cloud computing with the Infrastructure as a Service model, clients will be offered the capability to provision and instantiate Virtual Network Functions (VNF) on the NFV infrastructure of the network operators. In this paper, we consider the case where leftover VNF capacities are offered for bid. This approach is particularly interesting for clients to punctually provision resources to absorb peak or unpredictable demands and for operators to increase their revenues. We propose a game theoretic approach and make use of Multi-Unit Combinatorial Auctions to select the winning clients and the price they pay. Such a formulation allows clients to express their VNF requests according to their specific objectives. We solve this problem with a greedy heuristic and prove that this approximation of economic efficiency is the closest attainable in polynomial time and provides a payment system that motivates bidders to submit their true valuations. Simulation results show that the proposed heuristic achieves a market valuation close to the optimal (less than 10 % deviation) and guarantees that an important part of this valuation is paid as revenue to the operator. Joint work with Jean-Louis Rougier, Luigi Iannone, Mathieu Bouet and Vania Conan, to appear at ITC28 https://itc28.org/
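A generic sketch of greedy winner determination for such an auction, ranking bids by value over the square root of the requested bundle size (a standard rule in this line of work); the capacities and bids are invented, and the paper's exact heuristic and payment rule are not reproduced here.

import numpy as np

CAPACITY = {"firewall": 10, "nat": 8, "dpi": 6}            # leftover VNF units (hypothetical)
BIDS = [  # (client, requested units per VNF type, offered price)
    ("A", {"firewall": 4, "dpi": 2}, 50),
    ("B", {"firewall": 6, "nat": 5}, 65),
    ("C", {"nat": 4, "dpi": 5}, 40),
    ("D", {"firewall": 3, "nat": 2, "dpi": 1}, 30),
]

def density(bid):
    _, bundle, price = bid
    return price / np.sqrt(sum(bundle.values()))           # value per "norm" of the bundle

remaining = dict(CAPACITY)
winners = []
for client, bundle, price in sorted(BIDS, key=density, reverse=True):
    if all(remaining[v] >= q for v, q in bundle.items()):  # grant only if it still fits
        winners.append(client)
        for v, q in bundle.items():
            remaining[v] -= q

print("winners:", winners, "remaining capacity:", remaining)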
In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISP) and Content Providers (CP): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious ISP-operated cache. The ISP allocates the cache storage to various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, ISPs only need to measure the aggregated miss rates of the individual CPs. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges to the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularities. Our results (i) testify to the feasibility of content-oblivious caches and (ii) show that the proposed algorithm can achieve within 15% of the global optimum in our evaluation. Joint work with Gyorgy Dan and Dario Rossi, to appear at ITC28 https://itc28.org/
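A conceptual sketch of such a control loop, assuming an invented Zipf-based stand-in for the measured per-CP hit rates and a simple renormalising projection; the paper's perturbation and projection steps, and its objective weighting, are more careful than this illustration.

import numpy as np

rng = np.random.default_rng(9)
C, K = 1000.0, 3                                  # total cache size, number of CPs
zipf = np.array([0.7, 0.9, 1.1])                  # hidden popularity skew per CP (unknown to the ISP)
catalog = np.array([5000, 20000, 10000])          # hidden catalogue sizes

def observed_hit_rates(split):
    # stand-in for measurements: CP k's hit rate is approximated by the popularity
    # mass of its split[k] most popular objects, plus small measurement noise
    rates = []
    for k in range(K):
        pop = np.arange(1, catalog[k] + 1, dtype=float) ** -zipf[k]
        pop /= pop.sum()
        rates.append(pop[: int(split[k])].sum())
    return np.array(rates) + 0.001 * rng.normal(size=K)

def project(x):
    x = np.clip(x, 1.0, None)
    return x * (C / x.sum())                      # simple renormalisation onto total size C

split, delta = project(np.full(K, C / K)), 50.0
for it in range(200):
    grad = np.zeros(K)
    for k in range(K):                            # two-point subgradient estimate per CP
        e = np.zeros(K); e[k] = delta
        grad[k] = (observed_hit_rates(project(split + e)).sum()
                   - observed_hit_rates(project(split - e)).sum()) / (2 * delta)
    step = 50.0 / np.sqrt(1.0 + it)               # diminishing step size
    split = project(split + step * grad / (np.linalg.norm(grad) + 1e-12))

print("final split:", split.round(0), "aggregate hit rate:", observed_hit_rates(split).sum().round(3))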
We present a general framework for understanding system intelligence, i.e., the level of system smartness perceived by users, and propose a novel metric for measuring intelligence levels of dynamical systems, defined to be the maximum average reward obtained by proactively serving user demands, subject to a resource constraint. We provide an explicit characterization of the system intelligence, and show that it is jointly determined by user demand volume (opportunity to impress), demand correlation (user predictability), and system resource and action costs (flexibility to pre-serve). We then propose an online learning-aided control algorithm called Learning-aided Budget-limited Intelligent System Control (LBISC). We show that LBISC achieves an intelligence that is within O(N(T)^1/2 + ) of the highest level, where N(T) represents the number of data samples collected within a learning period T and is proportional to the user population size in the system. Moreover, we show that LBISC possesses a much faster convergence time compared to non-learning based algorithms. The analysis of LBISC rigorously quantifies the impacts of data and user population, learning, and control on achievable system intelligence, and provides novel insight and guideline into designing future smart systems.
The future of social networking is in the mobile world. Future network services are expected to center around human activity and behavior. Wireless networks (including ad hoc, sensor networks and DTNs) are expected to grow significantly and accommodate higher levels of mobility and interaction. In such a highly dynamic environment, networks need to adapt efficiently (performance-wise) and gracefully (correctness and functionality-wise) to growth and dynamics in many dimensions, including behavioral and mobility patterns, on-line activity and load. Understanding and realistically modeling this multi-dimensional space is essential to the design and evaluation of efficient protocols and services of the future Internet.
This level of understanding to drive the modeling and protocol design shall be developed using a data-driven paradigm. The design philosophy for the proposed paradigm is unique in that it begins with intensive analysis of measurements from the target contexts, which then drives the modeling, protocol and service design through a systematic framework, called TRACE. The components of TRACE include: 1. Tracing and monitoring of behavior, 2. Representing and analyzing the data, 3. Characterizing behavioral profiles using data mining and clustering techniques, and finally 4. Employing the understanding and insight attained to develop realistic models of mobile user behavior, and to design efficient protocols and services for future mobile societies.
Tracing at a large scale represents the next frontier for sensor networks (sensing the human society). Our latest progress in that field (MobiLib) shall be presented, along with data mining and machine learning tools to meaningfully analyze the data. Several challenges will be presented and novel uses of clustering algorithms will be provided. Major contributions to the modeling of human mobility, namely the time-variant community model (TVC) and Community Mobility (COBRA), will also be discussed. In addition, a novel framework for measuring vehicular mobility at planet scale, using thousands of webcams around the world, shall be presented.
Insights developed through analysis, mining and modeling will be utilized to introduce and design a novel communication paradigm, called profile-cast, to support new classes of service for interest-aware routing and dissemination of information, queries and resource discovery, trust and participatory sensing (crowd sourcing) in future mobile networks. Unlike conventional (unicast, multicast or directory-based) paradigms, the proposed paradigm infers user interest using implicit behavioral profiling via self-monitoring and mining techniques. In order to capture interest, a spatio-temporal representation is introduced to capture the users' behavioral space. Users can identify similarity of interest based on their position in such a space.
The proposed profile-cast paradigm will act as an enabler of new classes of service, ranging from mobile social networking, and navigation of mobile societies and spaces, to computational health care, mHealth (mobile health), emergency management and education, among others. The idea of similarity-based support groups will be specifically highlighted for potential applications in disease self-management, collaborative education, and emergency response.
Dr. Ahmed Helmy is a Professor and Graduate Director at the Computer and Information Science and Engineering (CISE) Department at the University of Florida (UF). He received his Ph.D. in Computer Science 1999 from the University of Southern California (USC), M.Sc. in Electrical Engineering (EE) 1995 from USC, M.Sc. in Engineering Mathematics in 1994 and B.Sc. in Electronics and Communications Engineering 1992 from Cairo University, Egypt. He was a key researcher in the Network Simulator NS-2 and Protocol-Independent Multicast (PIM) projects at USC/ISI from 1995 to 1999. Before joining UF in 2006, he was on the Electrical Engineering-Systems Department faculty at USC starting Fall 1999, where he founded and directed the Wireless and Sensor Networks Labs.
In 2002, he received the NSF CAREER Award for his research on resource discovery and mobility modeling in large-scale wireless networks (MARS). In 2000 he received the Zumberge Award, and in 2002 he received the best paper award from the IEEE/IFIP MMNS Conference. In 2003 he was the Electrical Engineering nominee for the USC Engineering Jr. Faculty Research Award, and a nominee for the Sloan Fellowship. In 2004 and 2005 he obtained the best faculty merit ranking at the Electrical Engineering department at USC. He was a winner of the ACM MobiCom 2007 SRC competition, a finalist in the 2008 SRC competition, a 2nd place winner in the ACM MobiCom WiNTECH demo competition 2010, and a finalist/runner-up in the 2012 ACM MobiCom SRC competition. In 2013 he won the best paper award from ACM SIGSPATIAL IWCTS. In 2014 he won the Epilepsy Foundation award for innovation, and the ACM MobiCom Mobile App Competition (1st place) and startup pitch competition (2nd place). In 2015 he won the Internet Technical Committee (ITC) best paper award for seven IEEE ComSoc conferences/symposia of 2013. He is leading (or has led) several NSF-funded projects including MARS, STRESS, ACQUIRE, AWARE and MobiBench.
His research interests include design, analysis and measurement of wireless ad hoc, sensor and mobile social networks, mobility modeling, multicast protocols, IP mobility and network simulation. He has published over 150 journal articles, conference papers and posters, book chapters, IETF RFCs and Internet drafts. His research is (or has been) supported by grants from NSF, KACST, Aalto University, USC, Intel, Cisco, DARPA, NASA, Nortel, HP, Pratt & Whitney, Siemens and SGI. He has over 12,200 citations with H-index=48 (Google Scholar).
Dr. Helmy is an editor of the IEEE Transactions on Mobile Computing (TMC), an area editor of the Ad hoc Networks Journal - ElSevier (since 2004), and an area editor of the IEEE Computer (since 2010). He was the finance chair of ACM MobiCom ’13, co-chair of ACM MobiSys HotPlanet’12, program co-chair for ACM MSWiM 2011, and ACM MobiCom CHANTS workshop 2011, co-chair of AdhocNets 2011, honorary program chair of IEEE/ACM IWCMC 2011, general chair of IWCMC 2010, vice-chair of IEEE MASS 2010, plenary panel chair of IEEE Globecom 2010, co-chair of IEEE Infocom Global Internet (GI) workshop 2008, and IFIP/IEEE MMNS 2006, vice-chair for IEEE ICPADS 2006, IEEE HiPC 2007, and local & poster chair for IEEE ICNP 2008 and 2009. He is ACM SIGMOBILE workshop coordination chair (for MobiCom, Mobihoc, Mobisys, Sensys) (since 2006). He has served on numerous committees of IEEE and ACM conferences on networks. He is a senior member of the IEEE and an ACM Distinguished Scientist.
In this talk we will deal with problems arising in device-to-device (D2D) wireless networks, where user devices also have the ability to cache content. In such networks, users are mobile and communication links can be spontaneously activated and dropped depending on the users' relative positions. Receivers request files from transmitters, these files having a certain popularity and file-size distribution. In our work a new performance metric is introduced, namely the Service Success Probability, which captures the specificities of D2D networks. For the case of a Poisson Point Process node distribution and the SNR coverage model, explicit expressions are derived. Simulations support the analytical results and explain the influence of mobility and file-size distribution on the system performance, while providing intuition on how to appropriately cache content on mobile storage space. Of particular interest is the investigation of how different file-size distributions (Exponential, Uniform, or Heavy-Tailed) influence the performance.
Ubiquitous smart technologies gradually transform modern homes into an Intranet of Things, where a multitude of connected devices allow for novel home automation services (e.g., energy or bandwidth savings, comfort enhancement, etc.). Optimizing and enriching the Quality of Experience (QoE) of residential users emerges as a critical differentiator for Internet and Communication Service Providers (ISPs and CSPs, respectively) and heavily relies on the analysis of various kinds of data (connectivity, performance, usage) gathered from home networks. In this paper, we are interested in new Machine-to-Machine data analysis techniques that go beyond the binary association rule mining for traditional market basket analysis considered by previous works, to analyze individual device logs of home gateways. Based on a multidimensional pattern mining framework, we extract complex device co-usage patterns of 201 residential broadband users of an ISP, subscribed to a triple-play service. Such fine-grained device usage patterns provide valuable insights for emerging use cases such as adaptive usage of home devices, as well as "things" recommendation.
We consider the problem of accurately estimating the reliability of workers based on noisy labels they provide, which is a fundamental question in crowdsourcing. We propose a novel lower bound on the minimax estimation error which applies to any estimation procedure. We further propose Triangular Estimation (TE), an algorithm for estimating the reliability of workers. TE has low complexity, may be implemented in a streaming setting when labels are provided by workers in real time, and does not rely on an iterative procedure. We further prove that TE is minimax optimal and matches our lower bound. We conclude by assessing the performance of TE and other state-of-the-art algorithms on both synthetic and real-world data sets.
Joint work with Thomas Bonald (Telecom ParisTech)
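To give a flavour of the kind of estimator involved, here is a hedged sketch of the "triangular" idea under a simplified one-coin model with labels in {-1, +1}: the cross-moment E[x_i x_j] factorizes as s_i s_j with s_i = 2 p_i - 1, so any triple (i, j, k) yields an estimate of s_i^2. This illustrates the principle only; it is not the exact TE algorithm or its analysis.

```python
# Hedged sketch of the "triangular" idea behind worker-reliability estimation,
# under a simplified one-coin model with labels in {-1, +1}: E[x_i x_j] = s_i s_j
# with s_i = 2 p_i - 1, so for a triple (i, j, k), s_i^2 = M_ij * M_ik / M_jk.
# This is NOT the paper's exact TE algorithm, only an illustration of the principle.
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_tasks = 5, 5000
p_true = rng.uniform(0.55, 0.95, n_workers)      # hidden worker accuracies
truth = rng.choice([-1, 1], n_tasks)             # hidden true labels

# Workers answer correctly with probability p_i, independently.
correct = rng.random((n_workers, n_tasks)) < p_true[:, None]
labels = np.where(correct, truth, -truth)

M = labels @ labels.T / n_tasks                  # empirical cross-moments

def estimate_skill(i):
    """Estimate |2 p_i - 1| by averaging the triangular identity over pairs (j, k)."""
    est = []
    for j in range(n_workers):
        for k in range(n_workers):
            if len({i, j, k}) == 3 and abs(M[j, k]) > 1e-3:
                est.append(M[i, j] * M[i, k] / M[j, k])
    return np.sqrt(max(np.mean(est), 0.0))

for i in range(n_workers):
    print(f"worker {i}: true skill {2*p_true[i]-1:.2f}, estimated {estimate_skill(i):.2f}")
```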
After two days of prolific and interesting technical talks at the LINCS yearly workshop, few could fancy yet another technical presentation. In this talk we illustrate the strong relationship between teaching and research through three illustrative examples. The first [1] is a body of research work prompted by student questions during a course. The second [2] is a body of work prompted by the need to explain to digital natives the basic properties of their interconnected world, namely the Internet. The third [3,4] is a body of work where undergraduate students are actively involved during courses. The talk covers the material that usually stays behind the scenes, but is as important as the results of the research itself.
[1] http://www.enst.fr/ drossi/ledbat
[2] http://www.enst.fr/ drossi/anycast
[3] check the INF570 page at http://www.enst.fr/ drossi/
[4] this will be covered but there are no links yet :)
It is known that given a CM sextic field, there exists a non-empty finite set of abelian varieties of dimension 3 that have complex multiplication by this field. Under certain conditions on the field and the CM-type, this abelian variety can be guaranteed to be principally polarizable and simple. This ensures that the abelian variety is the Jacobian of a hyperelliptic curve or a plane quartic curve.
In this talk, we begin by showing how to generate a full set of period matrices for each isomorphism class of simple, principally polarized abelian variety with CM by a sextic field K. We then show how to determine whether the abelian variety is the Jacobian of a hyperelliptic curve or of a plane quartic curve. Finally, in the hyperelliptic case, we show how to compute a model for the curve. (Joint work with J. Balakrishnan, S. Ionica, and K. Lauter.)
Many of the most costly security compromises that enterprises suffer manifest
as tiny trickles of behavior hidden within an ocean of other site activity.
This talk examines design patterns applicable to developing robust detectors
for particular forms of such activity. The themes include research pitfalls,
the crucial need to leverage domain knowledge in an apt fashion, and why
machine learning is very difficult to effectively apply for such detection.
Vern Paxson is a Professor of Electrical Engineering and Computer Sciences
at UC Berkeley. He also leads the Networking and Security Group at the
International Computer Science Institute in Berkeley, and has an appointment
as a Staff Scientist at the Lawrence Berkeley National Laboratory. His
research focuses heavily on measurement-based analysis of network activity
and Internet attacks. He works extensively on high performance network
monitoring, detection algorithms, cybercrime, and countering censorship.
In 2006 he was inducted as a Fellow of the Association for Computing
Machinery (ACM). In 2011 he received ACM’s SIGCOMM Award, which recognizes
lifetime contribution to the field of communication networks, "for his
seminal contributions to the fields of Internet measurement and Internet
security, and for distinguished leadership and service to the Internet
community." His measurement work has also been recognized by ACM’s Grace
Murray Hopper Award and by the 2015 IEEE Internet Award. In 2013 he
co-founded Broala, a startup that provides commercial-grade support and
products for the "Bro" network monitoring system that he created and has
advanced through his research for many years.
We consider a network of multi-server queues wherein each job can be processed in parallel by any subset of servers within a pre-defined set that depends on its class. Each server is allocated in FCFS order at each queue. Jobs arrive according to Poisson processes, have independent exponential service requirements and are routed independently at random. We prove that the network state has a product-form stationary distribution, in the open, closed and mixed cases. From a practical perspective, we propose an algorithm on this basis to allocate the resources of a computer cluster.
I present our ACM UIST’15 Best Paper Award-winning research on Webstrates: Shareable Dynamic Media. In this work, we revisit Alan Kay’s early vision of dynamic media, which blur the distinction between document and application. We introduce shareable dynamic media, which are malleable by users, who may appropriate them in idiosyncratic ways; shareable among users, who can collaborate on multiple aspects of the media; and distributable across diverse devices and platforms. We present Webstrates, an environment for exploring shareable dynamic media. Webstrates augment web technology with real-time sharing. They turn web pages into substrates, i.e. software entities that act as applications or documents depending upon use. We illustrate Webstrates with two implemented case studies: users collaboratively author an article with functionally and visually different editors that they can personalize and extend at run-time; and they orchestrate its presentation and audience participation with multiple devices.
Use of anycast IP addresses has increased in the last few years: once relegated to DNS root and top-level domain servers, anycast is now commonly used to assist distribution of general purpose content by CDN providers. Yet, most anycast discovery methodologies rely so far on DNS, which limits their usefulness to this particular service. This raises the need for protocol agnostic methodologies, that should additionally be as lightweight as possible in order to scale up anycast service discovery.
Our anycast discovery method allows for exhaustive and accurate enumeration and city-level geolocation of anycast replicas, under the constraint of leveraging only a handful of latency measurements from a set of known probes. The method exploits an iterative workflow to enumerate (an optimization problem) and geolocate (a classification problem) anycast instances. The method is so lightweight and protocol agnostic that we were able to perform several censuses of the whole IPv4 Internet, enumerating all anycast deployments. Finally, we carry out a passive study of anycast traffic to refine the picture of services currently served over IP anycast. All our code, datasets and further information are available at http://www.enst.fr/ drossi/anycast
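As a rough illustration of the latency-based reasoning (not the paper's actual workflow), the sketch below uses the classic speed-of-light constraint: a probe measuring RTT r to the anycast address confines its replica to a disk of radius proportional to r, and probes whose disks are pairwise disjoint must be served by distinct replicas, which yields a lower bound on the enumeration. Probe locations and RTTs are made up.

```python
# Hedged sketch of the latency-based reasoning used for anycast enumeration:
# a probe with RTT r constrains "its" replica to a disk of radius
# ~ (propagation speed in fiber) * r / 2 around the probe; pairwise-disjoint
# disks therefore lower-bound the number of replicas.  Vantage points below are
# hypothetical.
from math import radians, sin, cos, asin, sqrt

C_FIBER_KM_PER_MS = 100.0  # roughly 2/3 of the speed of light, a common rule of thumb

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
    return 2 * 6371.0 * asin(sqrt(a))

# (name, lat, lon, RTT in ms): hypothetical vantage points
probes = [("paris", 48.85, 2.35, 6.0), ("tokyo", 35.68, 139.69, 8.0),
          ("nyc", 40.71, -74.01, 5.0), ("london", 51.51, -0.13, 12.0)]

def disjoint(p, q):
    """True if the two probes' latency disks cannot overlap."""
    _, lat1, lon1, rtt1 = p
    _, lat2, lon2, rtt2 = q
    r1, r2 = C_FIBER_KM_PER_MS * rtt1 / 2, C_FIBER_KM_PER_MS * rtt2 / 2
    return haversine_km(lat1, lon1, lat2, lon2) > r1 + r2

# Greedy selection of pairwise-disjoint disks = lower bound on the number of replicas.
selected = []
for p in sorted(probes, key=lambda x: x[3]):        # smallest disks first
    if all(disjoint(p, q) for q in selected):
        selected.append(p)
print("at least", len(selected), "replicas:", [p[0] for p in selected])
```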
In the "big data" era, data is often dirty in nature because of several reasons, such as typos, missing values, and duplicates. The intrinsic problem with dirty data is that it can lead to poor results in analytic tasks. Therefore, data cleaning is an unavoidable task in data preparation to have reliable data for final applications, such as querying and mining. Unfortunately, data cleaning is hard in practice and it requires a great amount of manual work. Several systems have been proposed to increase automation and scalability in the process. They rely on a formal, declarative approach based on first order logic: users provide high-level specifications of their tasks, and the systems compute optimal solutions without human intervention on the generated code. However, traditional "top-down" cleaning approaches quickly become unpractical when dealing with the complexity and variety found in big data.
In this talk, we first describe recent results in tackling data cleaning with a declarative approach. We then discuss how this experience has pushed several groups to propose new systems that recognize the central role of the users in cleaning big data.
Paolo Papotti is an Assistant Professor of Computer Science in the School of Computing, Informatics, and Decision Systems Engineering (CIDSE) at Arizona State University. He got his Ph.D. in Computer Science at Universita’ degli Studi Roma Tre (2007, Italy) and before joining ASU he had been a senior scientist at Qatar Computing Research Institute.
His research is focused on systems that assist users in complex, necessary tasks and that scale to large datasets with efficient algorithms and distributed platforms. His work has been recognized with two "Best of the Conference" citations (SIGMOD 2009, VLDB 2015) and with a best demo award at SIGMOD 2015. He is group leader for SIGMOD 2016 and associate editor for the ACM Journal of Data and Information Quality (JDIQ).
Resources such as Web pages or videos that are published in the Internet are referred to by their Uniform Resource Locator (URL). If a user accesses a resource via its URL, the host name part of the URL needs to be translated into a routable IP address. This translation is performed by the Domain Name System service (DNS). DNS also plays an important role when Content Distribution Networks (CDNs) are used to host replicas of popular objects on multiple servers that are located in geographically different areas.
A CDN makes use of the DNS service to infer client location and direct the client request to an optimal server. While most Internet Service Providers (ISPs) offer a DNS service to their customers, clients may instead use a public DNS service. The choice of the DNS service can impact the performance of clients when retrieving a resource from a given CDN. In this paper we study the impact on download performance for clients using either the DNS service of their ISP or the public DNS service provided by Google DNS. We adopt a causal approach that exposes the structural dependencies of the different parameters impacted by the DNS service used and we show how to model these dependencies with a Bayesian network. The Bayesian network allows us to explain and quantify the performance benefits seen by clients when using the DNS service of their ISP. We also discuss how to further improve client performance. Joint work with
Hadrien Hours, Patrick Loiseau, Alessandro Finamore and Marco Mellia
Wi-Fi is the preferred way of accessing the internet for many devices at home, but it is vulnerable to performance problems due to sharing an unlicensed medium. In this work, we propose a method to estimate the link capacity of a Wi-Fi link, using physical layer metrics that can be passively sampled on commodity access points. We build a model that predicts the maximum UDP throughput a device can sustain, extending previous models that do not consider IEEE 802.11n optimizations such as frame aggregation. We validate our method through controlled experiments in an anechoic chamber, where we artificially create different link quality conditions. We estimate the link capacity using our method and compare it to the state of the art. Over 95% of the link capacity predictions present errors below 5% when using our method with reference data. We show how the link capacity estimation enables Wi-Fi diagnosis in two case studies where we predict the available bandwidth under microwave interference and in an office environment.
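For intuition on why frame aggregation matters for capacity estimation, here is a back-of-envelope airtime sketch (our own simplification, not the paper's model): the achievable UDP throughput is roughly the aggregated payload divided by its airtime, and A-MPDU aggregation amortizes the fixed per-transmission overhead over many MPDUs. All constants below are illustrative assumptions.

```python
# Back-of-envelope airtime sketch (not the paper's exact model): maximum UDP
# throughput of an 802.11n link as aggregated payload / time on the air, where
# A-MPDU aggregation amortizes the fixed per-transmission overhead.
def max_udp_throughput_mbps(phy_rate_mbps, n_aggregated=16, mpdu_bytes=1500,
                            fixed_overhead_us=100.0, ack_us=32.0):
    payload_bits = n_aggregated * mpdu_bytes * 8
    airtime_us = payload_bits / phy_rate_mbps + fixed_overhead_us + ack_us
    return payload_bits / airtime_us  # bits per microsecond == Mbit/s

for phy in (65, 130, 300):   # typical 802.11n PHY rates in Mbit/s
    print(f"PHY {phy} Mb/s -> ~{max_udp_throughput_mbps(phy):.0f} Mb/s UDP with aggregation, "
          f"~{max_udp_throughput_mbps(phy, n_aggregated=1):.0f} Mb/s without")
```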
Cloud-Radio Access Network (C-RAN) is a new emerging technology that holds alluring promises for Mobile network operators regarding capital and operation cost savings. However, many challenges still remain before full commercial deployment of C-RAN solutions. Dynamic resource allocation algorithms are needed to cope with significantly fluctuating traffic loads. Those algorithms must target not only a better quality of service delivery for users, but also less power consumption and better interference management, with the possibility to turn off RRHs that are not transmitting. To this end, we propose a dynamic two-stage design for downlink OFDMA resource allocation and BBU-RRH assignment in C-RAN. Simulation results show that our proposal achieves not only a high satisfaction rate for mobile users, but also minimal power consumption and significant BBUs savings, compared to state-of-the-art schemes.
Yazid Lyazidi received his Diploma in Computer Science and Telecommunications Engineering from INPT, Rabat, Morocco, in 2014 and the M.S. degree in Advanced Wireless Systems from SUPELEC, Gif sur Yvette, France the same year. He is now a PhD candidate in LIP6, University Pierre and Marie Curie, Paris, under the advisory of Prof. Rami Langar and Dr. Nadjib AITSAADI. His research topics include energy minimization and resource management in Cloud RAN.
Dynamic adaptive HTTP (DASH) based streaming is steadily becoming the most popular online video streaming technique. DASH streaming provides seamless playback by adapting the video quality to the network conditions during the video playback. A DASH server supports adaptive streaming by hosting multiple representations of the video and each representation is divided into small segments of equal playback duration. At the client end, the video player uses an adaptive bitrate selection (ABR) algorithm to decide the bitrate to be selected for each segment depending on the current network conditions. Currently proposed ABR algorithms ignore the fact that the segment sizes significantly vary for a given video bitrate. Due to this, even though an ABR algorithm is able to measure the network bandwidth, it may fail to predict the time to download the next segment. In this work, we propose a segment-aware rate adaptation (SARA) algorithm that considers the segment size variation in addition to the estimated path bandwidth and the current buffer occupancy to accurately predict the time required to download the next segment. Our results show that SARA provides a significant gain over the basic algorithm in the video quality delivered, without noticeably impacting the video switching rates.
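The following sketch illustrates the segment-aware idea in isolation (it is not the full SARA decision logic): rather than assuming that a segment at bitrate b costs b times the segment duration, the client uses the actual size of the next segment at each quality to predict its download time and picks the highest quality that the current buffer can absorb. Segment sizes and the safety margin below are hypothetical.

```python
# Hedged sketch of the segment-aware idea behind SARA: use the actual size of the
# next segment at each quality (not the nominal bitrate) to predict its download
# time, and pick the highest quality whose predicted download fits in the buffer.
# The real SARA decision thresholds are more elaborate; sizes below are made up.
def pick_bitrate(next_segment_sizes_bits, est_bandwidth_bps, buffer_s, safety=0.8):
    """next_segment_sizes_bits: dict {bitrate_bps: actual size of the next segment}."""
    best = min(next_segment_sizes_bits)                      # lowest quality as fallback
    for bitrate in sorted(next_segment_sizes_bits):
        download_time = next_segment_sizes_bits[bitrate] / (est_bandwidth_bps * safety)
        if download_time <= buffer_s:                        # will not stall the playout buffer
            best = bitrate
    return best

# Example: 4 s segments; the 2 Mb/s segment is unusually large (a complex scene).
sizes = {500_000: 1_800_000, 1_000_000: 4_200_000, 2_000_000: 12_000_000}
print(pick_bitrate(sizes, est_bandwidth_bps=2_500_000, buffer_s=5.0))
```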
Deep Medhi is Curators’ Professor in the Department of Computer Science and Electrical Engineering at the University of Missouri-Kansas City, USA. He received a B.Sc. in Mathematics from Cotton College, Gauhati University, India, an M.Sc. in Mathematics from the University of Delhi, India, and his Ph.D. in Computer Sciences from the University of Wisconsin-Madison, USA. Prior to joining UMKC in 1989, he was a member of the technical staff at AT&T Bell Laboratories. He was an invited visiting professor at the Technical University of Denmark, a visiting research fellow at Lund Institute of Technology, Sweden, a research visitor at the University of Campinas, Brazil, under the Brazilian Science Mobility Program, and served as a Fulbright Senior Specialist. He is the Editor-in-Chief of Springer's Journal of Network and Systems Management, and is on the editorial board of IEEE/ACM Transactions on Networking, IEEE Transactions on Network and Service Management, and IEEE Communications Surveys & Tutorials. He is co-author of the books Routing, Flow, and Capacity Design in Communication and Computer Networks (2004) and Network Routing: Algorithms, Protocols, and Architectures (2007), both published by Morgan Kaufmann/Elsevier.
Non-orthogonal multiple access (NOMA) is a promising candidate for wireless access in 5G cellular systems, and has recently received considerable attention from both academia and industry. In a NOMA system, multiple users share the same carrier frequency at the same time, and their messages are decoded via successive interference cancellation (SIC). While the idea of SIC was proposed a long time ago, it is now regarded as a practical solution for wireless networks. For SIC to work properly, the signal-to-interference ratio of each user needs to be high enough for successful decoding and subsequent cancellation. To meet such a requirement, transmitter power control becomes indispensable. This talk focuses on power control for inter-cell interference management in NOMA systems. The classical power control results based on the Perron-Frobenius theory of non-negative matrices will be reviewed, and how they can be applied to multi-cell NOMA systems will be presented. For practical application to 5G systems, it is desirable to have distributed algorithms for power control, which will also be discussed.
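As background for the power-control part of the talk, the snippet below sketches the classical distributed iteration (in the spirit of Foschini-Miljanic and standard interference functions) that the Perron-Frobenius framework underpins: each link simply scales its power by the ratio of its target SINR to its current SINR, and the iteration converges whenever the target vector is feasible. Gains, noise and targets are arbitrary illustrative values, not a NOMA-specific model.

```python
# Minimal sketch of classical distributed power control (Foschini-Miljanic style),
# which the talk's Perron-Frobenius framework generalizes.  Gains and targets are arbitrary.
import numpy as np

# Channel gains: G[i, j] is the gain from transmitter j to receiver i.
G = np.array([[1.0, 0.1, 0.2],
              [0.2, 1.0, 0.1],
              [0.1, 0.3, 1.0]])
noise = 0.01
target_sinr = np.array([2.0, 2.0, 2.0])

p = np.ones(3)
for _ in range(100):
    interference = G @ p - np.diag(G) * p + noise   # interference plus noise at each receiver
    sinr = np.diag(G) * p / interference
    p = p * target_sinr / sinr                      # each link updates using only its own SINR

interference = G @ p - np.diag(G) * p + noise
print("powers:", np.round(p, 3))
print("achieved SINRs:", np.round(np.diag(G) * p / interference, 2))
```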
Dr. Chi Wan Sung is an Associate Professor in the Department of Electronic Engineering at City University of Hong Kong. He received the BEng, MPhil, and PhD degrees in information engineering from the Chinese University of Hong Kong in 1993, 1995, and 1998, respectively. He has served as editor for the ETRI journal and for the Transactions on Emerging Telecommunications Technologies. His research interests include power control and resource allocation, cooperative communications, network coding, and distributed storage systems.
The Internet is made of almost 50,000 ASes exchanging routing information
thanks to BGP. Inside each AS, information is redistributed via iBGP sessions.
This allows each router to map a destination exterior to the AS with a given
egress point. The main redistribution mechanisms used today (iBGP full mesh, Route Reflectors and BGP confederations) either guarantee selection of the best egress point or enhance scalability, but not both. In this paper, we propose a new way to perform iBGP redistribution in an AS based on its IGP topology, reconciling optimality in route selection with scalability. Our contribution is threefold.
First, we demonstrate the tractability of our approach and its benefits.
Second, we provide an open-source implementation of our mechanism based on
Quagga. Third, we illustrate the feasibility of our approach through
simulations performed under ns-3 and compare its performance with full mesh
and Route Reflection. Joint work with Anthony Lambert (Orange Labs) and Steve Uhlig (Queen Mary University of London), to appear at IEEE INFOCOM 2016.
Software Defined Networking paves the way for new services that enable better utilization of network resources. Bandwidth Calendaring (BWC) is one such typical example that exploits knowledge about future traffic to optimally pack the arising demands over the network. We consider a generic BWC instance, where a network operator has to accommodate, at minimum cost, demands of predetermined but time-varying bandwidth requirements that can be scheduled within a specific time window. By exploiting the structure of the problem, we propose low complexity methods for the decomposition of the original mixed-integer problem into simpler ones. Our numerical results reveal that the proposed solution approach is near-optimal and outperforms standard methods based on linear programming relaxations and randomized rounding by more than 20%.
Lazaros Gkatzikis (S'09, M'13) obtained the Ph.D. degree in computer engineering and communications from the University of Thessaly, Volos, Greece. Currently, he is a Research Staff Member at the Huawei France Research Center, Paris, France. In the fall of 2011, he was a Research Intern at the Technicolor Paris Research Laboratory. He has been a Postdoctoral Researcher in the team of Prof. L. Tassiulas in Volos, Greece (2013) and at the KTH Royal Institute of Technology, Stockholm, Sweden (2014), working with Associate Prof. C. Fischione. His research interests include network optimization, mechanism design, and network performance analysis.
Telco operators view network densification as a viable solution for the challenging goals set for the next generation of cellular networks. Among other goals, network densification would help accommodate the ever increasing mobile demand and would allow connection latency to be considerably reduced. Nevertheless, along with network densification, many issues arise. For example, severe inter-cell interference (ICI) may considerably limit network capacity if coordination among base stations is not used. To tackle such a problem, we focus on the well-known Almost Blank Sub-Frame (ABS or ABSF) solution. With ABSF, in order to reduce interference, not all the base stations are allowed to transmit at the same time. In this talk, we question the ability of ABSF to improve both aggregate system throughput and transmission efficiency, and we show how ABSF may pursue other objectives instead. In contrast, we show that a better way of improving system throughput is to enable Device-to-Device communications among UEs and opportunistic forwarding. Therefore, we propose a novel mechanism (OBS) that exploits and coordinates simultaneously ABSF and opportunistic forwarding to improve system throughput and user fairness. Our approach has been validated against state-of-the-art approaches through an extensive simulation campaign, including real data from a network operator.
We consider a restless multi-armed bandit in which each arm
can be in one of two states. When an arm is sampled, the state of the arm
is not available to the sampler. Instead, a binary signal with a known
randomness that depends on the state of the arm is made available. No
signal is displayed if the arm is not sampled. An arm-dependent reward is
accrued from each sampling. In each time step, each arm changes state
according to known transition probabilities which in turn depend on
whether the arm is sampled or not sampled. Since the state of the arm is
never visible and has to be inferred from the current belief and a
possible binary signal, we call this the hidden Markov bandit. Our
interest is in a policy to select the arm(s) in each time step that
maximises the infinite horizon discounted reward. Specifically, we seek
the use of Whittle's index in selecting the arms.
We first analyze the single-armed bandit and show that it admits an
approximate threshold-type optimal policy when the no-sample action
is subsidised. Next, we show that this also satisfies an approximate
indexability property. Numerical examples support the analytical results.
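To make the belief dynamics concrete, the sketch below tracks the posterior probability of the hidden "good" state for a single arm: a Bayes correction on the binary signal when the arm is sampled, followed by the (active or passive) Markov transition. A threshold policy of the kind analysed in the talk would then sample the arm whenever this belief exceeds a threshold depending on the subsidy. All probabilities are placeholder values, not those of the talk.

```python
# Minimal belief filter for one arm of the hidden Markov bandit; transition and
# signal probabilities are arbitrary placeholders.  Sampling yields a binary
# signal that is filtered with Bayes' rule; not sampling only applies the
# (passive) state transition.
P_ACTIVE = [[0.9, 0.1],   # P(next state | current state) when the arm is sampled
            [0.3, 0.7]]
P_PASSIVE = [[0.95, 0.05],
             [0.05, 0.95]]
P_SIGNAL1 = [0.2, 0.8]    # P(signal = 1 | state)

def update_belief(b, sampled, signal=None):
    """b = P(state = 1). Bayes correction on the signal, then Markov prediction."""
    if sampled:
        lik1 = P_SIGNAL1[1] if signal == 1 else 1 - P_SIGNAL1[1]
        lik0 = P_SIGNAL1[0] if signal == 1 else 1 - P_SIGNAL1[0]
        b = b * lik1 / (b * lik1 + (1 - b) * lik0)
        trans = P_ACTIVE
    else:
        trans = P_PASSIVE
    return (1 - b) * trans[0][1] + b * trans[1][1]

b = 0.5
for sampled, signal in [(True, 1), (False, None), (True, 0), (True, 1)]:
    b = update_belief(b, sampled, signal)
    print(f"sampled={sampled}, signal={signal} -> belief={b:.3f}")
```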
Computational Causal Discovery aims to induce causal models,
causal networks, and causal relations from observational data without
performing or by performing only a few interventions (perturbations,
manipulations) of a system. While predictive analytics create models that
predict customer behavior for example, causal analytics create models that
dictate how to affect customer behavior. A recent approach to causal
discovery, which we call logic-based integrative causal discovery, will be
presented. This approach is more robust to statistical errors, makes more
realistic and less restrictive assumptions (e.g., admits latent confounding
factors and selection bias in the data) and accepts and reasons with
multiple heterogeneous datasets that are obtained under different sampling
criteria, different experimental conditions (perturbations, interventions),
and measuring different quantities (variables). The approach significantly
extends causal discovery based on Bayesian Networks, the simplest causal
model available, and is much more suitable for real business or scientific
data analysis.
Prof. Ioannis Tsamardinos is Associate Professor at the Computer
Science Department of University of Crete and co-founder of Gnosis Data
Analysis IKE. Prof. Tsamardinos has over 70 publications in international
journals, conferences, and books. He has participated in several national,
EU, and US funded research projects. Distinctions with colleagues and
students include the best performance in one of the four tasks in the
recent First Causality Challenge Competition, ISMB 2005 Best Poster Winner,
a Gold Medal in the Student Paper Competition in MEDINFO 2004, the
Outstanding Student Paper Award in AIPS 2000, the NASA Group Achievement
Award for participation in the Remote Agent team and others. He is a
regular reviewer for some of the leading Machine Learning journals and
conferences. Statistics on the recognition of his work include more than 4500
citations and an h-index of 28 (as estimated by the Publish or Perish tool).
Prof. Tsamardinos has recently been awarded the European and Greek national
grants of excellence, the ERC Consolidator and the ARISTEIA II grants
respectively (the equivalent of the NSF Young Investigator Award). Prof.
Tsamardinos has pioneered the integrative causal analysis and the
logic-based approach to causal discovery. The ERC grant in particular
regards the development of novel integrative causal discovery methods and
their application to biological mass cytometry data.
Lattice network coding is employed in physical-layer network coding, with applications to two-way relay networks and multiple-access channels. The receiver wants to compute a linear combination of the source symbols, which are drawn from a finite alphabet equipped with some algebraic structure. In this talk, we construct lattice network codes from lattices such as the E8 lattice and the Barnes-Wall lattice, which have the best known packing density and shaping gain in dimension 8 and 16, respectively. Simulation results demonstrate that there is a significant performance gain in comparison to the baseline lattice network code with hyper-cube shaping.
Kenneth Shum received the B.Eng. degree in Information Engineering from the Chinese University of Hong Kong in 1993, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California in 1995 and 2000, respectively. He is now a research fellow in the Institute of Network Coding, CUHK. His research interests include information theory, coding theory and cooperative communication in wireless networks. He is also a member of the Composers and Authors Society of Hong Kong (CASH) and has Erdos number 2.
HTTP is a crucial element of today's Internet, almost to the point of being the “thin waist” that used to be identified with the IP protocol not so long ago. Yet HTTP is changing, with recent proposals such as SPDY, HTTP2 and QUIC that aim at addressing some of the long-standing shortcomings of the HTTP/1 protocol family.
In this talk, we will present ongoing work to assess the impact that HTTP evolution is expected to have on the quality of user experience (QoE), based on experiments where we collect feedback (i.e., Mean Opinion Scores) from a panel of users on real webpages, as well as define and automatically collect QoE metrics on the same experiments.
We consider a centralized content delivery infrastructure where a large number of storage-intensive files are replicated across several collocated servers. To achieve scalable delays in file downloads under stochastic loads, we allow multiple servers to work together as a pooled resource to meet individual download requests. In such systems important questions include: How and where to replicate files; How significant are the gains of resource pooling over policies which use single server per request; What are the tradeoffs among conflicting metrics such as delays, reliability and recovery costs, and power; How robust is performance to heterogeneity and choice of fairness criterion; etc.
In this talk we provide a simple performance model for large systems towards addressing these basic questions. For large systems where the overall system load is proportional to the number of servers, we establish scaling laws among delays, system load, number of file replicas, demand heterogeneity, power, and network capacity.
We approach the problem of computing geometric centralities, such as closeness and harmonic centrality, on very large graphs;
traditionally this task requires an all-pairs shortest-path computation in the exact case, or a number of breadth-first traversals
for approximated computations, but these techniques yield very weak statistical guarantees on highly disconnected graphs. We rather
assume that the graph is accessed in a semi-streaming fashion, that is, that adjacency lists are scanned almost sequentially, and that
a very small amount of memory (in the order of a dozen bytes) per node is available in core memory. We leverage the newly discovered
algorithms based on HyperLogLog counters, making it possible to approximate a number of geometric centralities at a very high speed
and with high accuracy. While the application of similar algorithms for the approximation of closeness was attempted in the MapReduce
framework, our exploitation of HyperLogLog counters reduces exponentially the memory footprint, paving the way for in-core processing
of networks with a hundred billion nodes using just 2TiB of RAM. Moreover, the computations we describe are inherently parallelizable,
and scale linearly with the number of available cores. Another application of the same framework is the computation of the distance distribution, and indeed we were able to use our algorithms to compute that Facebook has only four degrees of separation.
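The sketch below illustrates the iterative, HyperBall-style framework on which this approach rests: every node keeps a counter for the set of nodes within distance t, obtained at each step by merging the counters of its out-neighbours, and the harmonic contribution 1/t is credited for nodes that enter the ball at step t. For brevity, exact Python sets stand in for the HyperLogLog counters of the real algorithm (the merge logic is identical; only the memory footprint differs), and the example graph is tiny.

```python
# Sketch of the HyperBall-style iterative framework for geometric centralities.
# Exact Python sets stand in for HyperLogLog counters; distances are measured
# along outgoing edges (run on the transposed graph for the usual incoming
# definition of harmonic centrality).
def harmonic_centrality(adj, max_dist=10):
    """adj: dict node -> iterable of successors. Returns an approximate harmonic centrality."""
    ball = {u: {u} for u in adj}              # nodes reachable from u within distance t
    harmonic = {u: 0.0 for u in adj}
    for t in range(1, max_dist + 1):
        new_ball, changed = {}, False
        for u in adj:
            merged = set(ball[u])
            for v in adj[u]:                  # merge the counters of the out-neighbours
                merged |= ball[v]
            new_ball[u] = merged
            # nodes that entered the ball at step t are at distance exactly t
            harmonic[u] += len(merged - ball[u]) / t
            changed |= len(merged) > len(ball[u])
        ball = new_ball
        if not changed:
            break
    return harmonic

g = {0: [1], 1: [2], 2: [0, 3], 3: []}
print(harmonic_centrality(g))
```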
Sebastiano Vigna’s research focuses on the interaction between theory and practice. He has worked on highly
theoretical topics such as computability on the reals, distributed computability, self-stabilization, minimal
perfect hashing, succinct data structures, query recommendation, algorithms for large graphs, pseudorandom
number generation, theoretical/experimental analysis of spectral rankings such as PageRank, and axiomatization
of centrality measures, but he is also (co)author of several widely used software tools ranging from high-performance
Java libraries to a model-driven software generator, a search engine, a crawler, a text editor and a graph
compression framework. In 2011 he collaborated on the computation of the distance distribution of the whole Facebook graph, from which it was possible to deduce that on Facebook there are just 3.74 degrees of separation. Recently, he participated in the analysis of the largest available public web crawl (Common Crawl 2012), which led to the publication of the first
open ranking of web sites (http://wwwranking.webdatacommons.org/). His work on Elias-Fano coding and quasi-succinct
indices is at the basis of the code of Facebook’s "folly" library (https://github.com/facebook/folly/blob/master/folly/experimental/EliasFanoCoding.h).
He also contributed to the first open ranking of Wikipedia pages (http://wikirank.di.unimi.it/), which is based on his body of work on centrality in networks. His pseudorandom number generator xorshift128+ is currently used by the JavaScript engine V8 of Chrome, as well as by Safari and Firefox,
and it is the stock generator of the Erlang language. Sebastiano Vigna obtained his PhD in Computer Science from the Universita’ degli
Studi di Milano, where he is currently an Associate Professor.
Large-scale deployments of general cache networks, such as Content Delivery Networks or Information-Centric Networking architectures, raise new challenges regarding their performance prediction and network planning. Analytical models and Monte Carlo approaches are already available to the scientific community. However, complex interactions between replacement, replication, and routing on arbitrary topologies make these approaches hard to configure. Additionally, huge content catalogs and large network sizes add non-trivial scalability problems, making their solution computationally demanding.
We propose a new technique for the performance evaluation of large-scale caching systems that intelligently integrates elements of stochastic analysis within a Monte Carlo approach. Our method leverages the intuition that the behavior of realistic networks of caches, be they LRU or even more complex caches, can be well represented by means of much simpler Time-To-Live (TTL)-based caches. This TTL can either be set with the guidance of a simple yet accurate stochastic model (e.g., the characteristic time of the Che approximation), or be provided as a very rough guess that is iteratively corrected by a feedback loop to ensure convergence.
Through a thorough validation campaign, we show that the synergy between modeling and Monte Carlo approaches has noticeable potential: it accurately predicts steady-state performance metrics within 2% accuracy, while significantly scaling down the simulation time and memory requirements of large-scale scenarios by up to two orders of magnitude. Furthermore, we demonstrate the flexibility and efficiency of our hybrid approach in simplifying fine-grained analyses of dynamic scenarios.
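For reference, the characteristic-time idea mentioned above can be illustrated in a few lines: under the Che approximation, an LRU cache of size C behaves like a TTL cache whose timer T_C solves sum_i (1 - exp(-lambda_i T_C)) = C, and the hit probability of object i is then approximately 1 - exp(-lambda_i T_C). The Zipf popularity and cache size below are arbitrary.

```python
# Minimal illustration of the Che approximation: solve for the characteristic
# time T_C of an LRU cache and derive per-object hit probabilities.
import numpy as np
from scipy.optimize import brentq

def che_characteristic_time(rates, cache_size):
    f = lambda t: np.sum(1.0 - np.exp(-rates * t)) - cache_size
    return brentq(f, 1e-12, 1e12)   # the LHS is increasing in t, so a root exists in the bracket

n, alpha, cache_size = 10_000, 0.8, 100
rates = 1.0 / np.arange(1, n + 1)**alpha
rates /= rates.sum()                            # Zipf(0.8) popularity, total request rate 1

t_c = che_characteristic_time(rates, cache_size)
hit_prob = 1.0 - np.exp(-rates * t_c)
print(f"T_C = {t_c:.1f}, overall hit ratio = {np.dot(rates, hit_prob):.3f}")
```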
Graphs are used to represent a plethora of phenomena, from the Web and social networks, to biological pathways, to semantic knowledge bases. Arguably the most interesting and important questions one can ask about graphs have to do with their evolution, such as which Web pages are showing an increasing popularity trend, how influence propagates in social networks, or how knowledge evolves. In this talk I will present Portal, a declarative language for efficient querying and exploratory analysis of evolving graphs. I will describe an implementation of Portal on top of Apache Spark, an open-source distributed data processing framework, and will demonstrate that careful engineering can lead to good performance. Finally, I will describe our work on a visual query composer for Portal.
We describe in this talk the new possibilities offered by virtualization techniques in the design of 5G networks. More precisely, we introduce a convergent gateway realizing fixed/mobile convergence. Such a functional element is based on modules instantiated on virtual machines (or Docker containers), each module implementing specific tasks for convergence. A convergent gateway can be instantiated by a network operating system (GlobalOS) in charge of managing the network. One important component of GlobalOS is the orchestration of resources. We introduce some algorithms for resource orchestration in the framework of GlobalOS. We finally focus on the specific case of virtualized base band unit (BBU) functions.
Connected devices, as key constituent elements of the Internet of Things (IoT), are flooding our real-world environment. This digital wave paves the way for a major technological breakthrough called to deeply change our daily lives. Nevertheless, some strong issues remain to be addressed. The most dominant one relates to our ability to leverage the whole IoT service space and, more specifically, to our ability to compose IoT services from multiple connected devices by cleverly selecting them together with the required software functions, whatever our technical skills. In such a challenging context, we first propose a rich and flexible abstraction framework relying on Attributed Typed Graphs, which makes it possible to represent how known IoT services are composed from different perspectives. Then, capitalizing on this modeling tool and focusing on the way IoT services interact with the physical environment, lightweight service signatures are computed using a physical-interface-based algorithm in order to characterize IoT services. Finally, we discuss how leveraging the computed signatures can allow for autonomously recommending IoT services to end-users.
Identifying causal (rather than merely correlative)
relationships in physical systems is a difficult task, particularly if
it is not feasible to perform controlled experiments. Granger’s
notion of causality was developed first in economics beginning in the
1960s and can be used to form a network of "plausible causal
relations" given only the opportunity to observe the system. This
method is applied, for example, in neuro-imaging to identify
relationships amongst brain regions, and in biostatistics to explore
gene regulatory networks. In this talk, we provide an overview of the
notion of Granger Causality, some methods for learning Granger
Causality Networks in practice, and our current directions for
research. (Provide anonymous feedback at https://www.surveymonkey.com/r/YVQJ99X )
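For readers unfamiliar with the test itself, here is a minimal, self-contained example of a pairwise Granger-causality check on synthetic data (x drives y with a one-step lag by construction); the statsmodels routine compares an autoregression of y with and without lagged values of x. This is a generic illustration, not the speakers' own method.

```python
# Small illustration of a pairwise Granger-causality test on synthetic data:
# x "Granger-causes" y if past values of x improve the prediction of y beyond
# y's own past.  Here x -> y by construction, so the test should reject
# non-causality in that direction only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

# statsmodels convention: tests whether the SECOND column Granger-causes the first.
p_xy = grangercausalitytests(np.column_stack([y, x]), maxlag=2)[1][0]["ssr_ftest"][1]
p_yx = grangercausalitytests(np.column_stack([x, y]), maxlag=2)[1][0]["ssr_ftest"][1]
print("p-value x -> y:", p_xy)   # expected to be tiny
print("p-value y -> x:", p_yx)   # expected to be large
```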
In a competitive setting, we consider the problem faced by a firm that makes decisions concerning both the location and service levels of its facilities, taking into account that users patronize the facility that maximizes their individual utility, expressed as the sum of travel time, queueing delay, and a random term. This situation can be modelled as a mathematical program with equilibrium constraints that involves discrete and continuous variables, as well as linear and nonlinear functions. This program is reformulated as a standard bilevel program that can be approximated, through the linearization of the nonlinear functions involved, as a mixed integer linear program that yields ‘quasi-optimal’ solutions. Since this approach does not scale well, we have in parallel developed heuristic procedures that exploit the very structure of the problem. Based on theoretical and computational results pertaining to this application, we will discuss further developments in the area of nonlinear facility location.
Temporal collective profiles generated by mobile network users can be used to predict network usage, which in turn can be used to improve the performance of the network to meet user demands. This presentation will describe a prediction method for temporal collective profiles which is suitable for online network management. Using a weighted graph representation, the target sample is observed during a given period to determine a set of neighboring profiles that are considered to behave similarly enough. The prediction of the target profile is based on the weighted average of its neighbors, where the optimal number of neighbors is selected through a form of variable neighborhood search. This method is applied to two datasets, one provided by a mobile network service provider and the other from a Wi-Fi service provider. The proposed prediction method can conveniently characterize user behavior via graph representation, while outperforming existing prediction methods. Also, unlike existing methods that utilize categorization, it has a low computational complexity, which makes it suitable for online network analysis.
Understanding network health is essential to improving Internet reliability. For instance, detecting disruptions in peer and provider networks identifies fixable connectivity problems. Currently this task is time consuming as it involves a fair amount of manual observation because operators have little visibility into other networks.
Here we leverage existing public RIPE Atlas measurement data to monitor and analyze network conditions; creating no new measurements. We demonstrate a set of complementary methods to detect network disruptions using traceroute measurements. A novel method of detecting changes in delay is used to identify congested links, and a packet forwarding model is employed to predict traffic paths and to identify faulty routers and links in cases of packet loss. In addition, aggregating results from each method allows us to easily monitor a network and identify coordinated reports manifesting significant network disruptions, reducing uninteresting alarms.
Our contributions consist of a statistical approach providing robust estimation of Internet delays and the study of hundreds of thousands of link delays. We present three cases demonstrating that the proposed methods detect real disruptions and provide valuable insights, as well as surprising findings, on the location and impact of the identified events.
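The snippet below is a hedged sketch of this kind of robust change detection (not the authors' exact estimator): per-link RTTs are summarized by a sliding-window median, dispersion by the median absolute deviation, and an alarm is raised when the recent median drifts by several MADs from the reference window. The synthetic delay series and thresholds are illustrative.

```python
# Hedged sketch of robust delay-change detection: compare the median of a recent
# window against the median of a reference window, using the MAD as a robust
# yardstick for the heavy-tailed noise of traceroute RTTs.
import numpy as np

def detect_shift(rtts, window=20, k=5.0):
    """Return indices where the recent median deviates from the reference by > k MADs."""
    rtts = np.asarray(rtts, dtype=float)
    alarms = []
    for t in range(2 * window, len(rtts)):
        ref = rtts[t - 2 * window:t - window]
        cur = rtts[t - window:t]
        mad = np.median(np.abs(ref - np.median(ref))) + 1e-9
        if abs(np.median(cur) - np.median(ref)) > k * mad:
            alarms.append(t)
    return alarms

# Synthetic link delay: ~20 ms with spikes, then a congestion episode adds ~15 ms.
rng = np.random.default_rng(3)
rtt = np.concatenate([20 + rng.exponential(1.0, 200), 35 + rng.exponential(1.0, 100)])
print("first alarm at sample", detect_shift(rtt)[0])
```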
Community detection is a fundamental problem in the field of graph mining. The objective is to find densely connected clusters of nodes, so-called communities, possibly overlapping. While most existing algorithms work on the entire graph, it is often irrelevant in practice to cluster all nodes. A more practically interesting problem is to detect the community to which a given set of nodes, the so-called "seed nodes", belong. Moreover, the exploration of the whole network is generally computationally expensive, if not impossible, and algorithms that only take into account the local structure of the graph around seed nodes provide a big advantage. For these reasons, there is a growing interest in the problem of "local" community detection, also known as "seed set expansion". We solve this problem through a low-dimensional embedding of the graph based on random walks starting from the seed nodes.
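As a minimal illustration of seed set expansion (using plain random-walk diffusion rather than the embedding of the talk), the sketch below spreads probability mass from the seeds for a few lazy-walk steps, normalizes by degree, and returns the top-ranked nodes as the local community. The toy graph is two cliques joined by a single edge.

```python
# Minimal seed-set-expansion sketch: diffuse mass from the seeds with a lazy
# random walk, rank nodes by degree-normalized score, return the top of the
# ranking.  This illustrates the general principle, not the talk's embedding.
import numpy as np

def local_community(adj, seeds, steps=4, size=4):
    """adj: dict node -> list of neighbours (undirected); seeds: list of nodes."""
    nodes = sorted(adj)
    idx = {u: i for i, u in enumerate(nodes)}
    p = np.zeros(len(nodes))
    p[[idx[s] for s in seeds]] = 1.0 / len(seeds)
    for _ in range(steps):                      # lazy random walk from the seeds
        q = np.zeros_like(p)
        for u in nodes:
            q[idx[u]] += p[idx[u]] / 2          # stay with probability 1/2
            share = p[idx[u]] / (2 * len(adj[u]))
            for v in adj[u]:
                q[idx[v]] += share
        p = q
    score = {u: p[idx[u]] / len(adj[u]) for u in nodes}   # degree-normalized score
    return sorted(score, key=score.get, reverse=True)[:size]

# Two 4-cliques joined by a single edge; the seed lies in the first clique.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
print(local_community(adj, seeds=[0]))
```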
We are currently in the middle of structural changes toward software-defined ICT infrastructure, which attempts to transform the existing silo-based infrastructure into a futuristic composable one by integrating IoT-based smart/mobile things, SDN-coordinated interconnect edges, and an NFV-assisted and software-driven cloud core. This end-user-driven infrastructure transformation can be supported by diverse open-source community projects (e.g., Linux Foundation’s OVS/OpenSwitch/ONOS/CORD/ODL/OPNFV/Open-O/..., Facebook-initiated OCP, and others). Aligning with this upcoming transition, in this talk, the prototyping experience of the OF@KOREN & OF@TEIN SmartX playgrounds will be shared by focusing on the hyper-convergent SmartX Boxes. Then, for multisite edge clouds, the ongoing design trials of the affordable SmartX K-Cluster will be explained. Finally, by leveraging DevOps-based automation, preliminary prototyping for IoT-Cloud services will be discussed by taking an example service scenario for smart energy.
We have examined maximum vertex coloring of random geometric graphs, in an arbitrary but fixed dimension, with a constant number of colors, in a recent work with S. Borst. Since this problem is neither scale-invariant nor smooth, the usual methodology to obtain limit laws cannot be applied. We therefore leverage different concepts based on subadditivity to establish convergence laws for the maximum number of vertices that can be colored. For the constants that appear in these results, we have provided the exact value in dimension one, and upper and lower bounds in higher dimensions.
In an ongoing work with B. Blaszczyszyn, we study the distributional properties of maximum vertex coloring of random geometric graphs. Moreover, we intend to generalize the study over weakly-μ-sub-Poisson processes.
The Jupyter Notebook (http://jupyter.org) is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more. In this hands-on talk, we will learn how to use a notebook, how to use plugins (as well as a set of useful ones), and how to make online-presentations (such as http://www.lincs.fr/wp-content/uploads/2013/01/04-Power-Law-Course.html and
https://www.lincs.fr/wp-content/uploads/2016/10/kleinberg.html)
We consider a scenario where an Internet Service Provider (ISP) serves users that choose digital content among M Content Providers (CP). In the status quo, these users pay both access fees to the ISP and content fees to each chosen CP; however, neither the ISP nor the CPs share their profit. We revisit this model by introducing a different business model where the ISP and the CP may have motivation to collaborate in the framework of caching. The key idea is that the ISP deploys a cache for a CP provided that they share both the deployment cost and the additional profit that arises due to caching. Under the prism of coalitional games, our contributions include the application of the Shapley value for a fair splitting of the profit, the stability analysis of the coalition and the derivation of closed-form formulas for the optimal caching policy.
Our model captures not only the case of non-overlapping contents among the CPs, but also the more challenging case of overlapping contents; for the latter case, a non-cooperative game among the CPs is introduced and analyzed to capture the negative externality on the demand of a particular CP when caches for other CPs are deployed.
Joint work with S. Elayoubi, E. Altman, and Y. Hayel to be presented at the 10th EAI International Conference on Performance Evaluation Methodologies and Tools (Valuetools 2016). The full version of the paper has been selected to be published in a special issue of the Elsevier journal of Performance Evaluation (PEVA).
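For concreteness, the toy computation below shows how the Shapley value splits a coalition's profit by averaging marginal contributions over all arrival orders; the players and the characteristic function v are invented for illustration, whereas the talk derives v in closed form from deployment costs and demand.

```python
# Toy Shapley-value computation for splitting a caching profit between an ISP and
# two CPs; the characteristic function below is made up for illustration.
from itertools import permutations

def shapley(players, v):
    """Average marginal contribution of each player over all arrival orders."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

# Hypothetical profits: nobody gains alone; ISP+CP1 caching is worth 10,
# ISP+CP2 is worth 4, and the grand coalition is worth 12.
value = {frozenset(): 0, frozenset({"ISP"}): 0, frozenset({"CP1"}): 0, frozenset({"CP2"}): 0,
         frozenset({"ISP", "CP1"}): 10, frozenset({"ISP", "CP2"}): 4, frozenset({"CP1", "CP2"}): 0,
         frozenset({"ISP", "CP1", "CP2"}): 12}
print(shapley(["ISP", "CP1", "CP2"], lambda s: value[frozenset(s)]))
```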
Bootstrap percolation is a well-known activation process in a graph,
in which a node becomes active when it has at least r active neighbors.
Such a process, originally studied on regular structures, has recently been
investigated also in the context of random graphs, where it can serve as a simple
model for a wide variety of cascades, such as the
spreading of ideas, trends, viral contents, etc. over large social networks.
In particular, it has been shown that in G(n,p) the final active set
can exhibit a phase transition for a sub-linear number of seeds.
In this paper, we propose a unique framework to study similar
sub-linear phase transitions for a much broader class of graph models
and epidemic processes. Specifically, we consider i) a generalized version
of bootstrap percolation in G(n,p) with random activation thresholds
and random node-to-node influences; ii) different random graph models,
including graphs with given degree sequence and graphs with
community structure (block model). The common thread of our work is to
show the surprising sensitivity of the critical seed set size
to extreme values of distributions, which makes some systems dramatically
vulnerable to large-scale outbreaks. We validate our results by running simulations on both synthetic and real graphs. Joint work with M. Garetto and G. Torrisi, appeared at ACM SIGMETRICS 2016.
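A compact simulation of classical bootstrap percolation on G(n,p) (threshold r, random seeds) illustrates the sharp dependence of the final active set on the seed size discussed above; the parameters are arbitrary, and the generalized model of the paper (random thresholds, random influences, other graph models) is not implemented here.

```python
# Classical bootstrap percolation on G(n, p): seeds are activated at random, then
# any node with at least r active neighbours becomes active.  Around the critical
# seed size the outcome flips abruptly from a small to a near-complete active set.
import random

def bootstrap_percolation(n, p, r, n_seeds, rng):
    adj = [[] for _ in range(n)]
    for u in range(n):                      # sample G(n, p)
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    active = set(rng.sample(range(n), n_seeds))
    count = [0] * n                         # active neighbours seen so far
    frontier = list(active)
    while frontier:
        u = frontier.pop()
        for v in adj[u]:
            if v not in active:
                count[v] += 1
                if count[v] >= r:
                    active.add(v)
                    frontier.append(v)
    return len(active)

rng = random.Random(0)
for seeds in (5, 10, 20, 40):
    sizes = [bootstrap_percolation(n=1000, p=0.01, r=2, n_seeds=seeds, rng=rng) for _ in range(3)]
    print(f"{seeds} seeds -> final active set sizes {sizes}")
```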
The Web is the largest public big data repository that humankind has
created. In this overwhelming data ocean, we need to be aware of the
quality and, in particular, of the biases that exist in this data. In
the Web, biases also come from redundancy and spam, as well as from
algorithms that we design to improve the user experience. This problem
is further exacerbated by biases that are added by these algorithms,
especially in the context of search and recommendation systems. They
include selection and presentation bias in many forms, interaction bias,
social bias, etc. We give several examples and their relation to sparsity
and privacy, stressing the importance of the user context to avoid these
biases.
Ricardo Baeza-Yates' areas of expertise are web search and data mining, information retrieval, data science and algorithms. He is CTO of NTENT, a semantic search technology company. Before that, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from January 2006 to February 2016. He is also a part-time Professor at the DTIC of Universitat Pompeu Fabra in Barcelona, Spain, as well as at the DCC of Universidad de Chile in Santiago. Until 2004 he was Professor and founding director of the Center for Web Research at the latter institution. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989. He is co-author of the best-seller Modern Information Retrieval textbook published by Addison-Wesley in 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the board of governors of the IEEE Computer Society and in 2012 he was elected to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named an ACM Fellow and in 2011 an IEEE Fellow, among other awards and distinctions.
Genomic computing is a new science focused on understanding the
functioning of the genome, as a premise to fundamental discoveries in
biology and medicine. Next Generation Sequencing (NGS) allows the
production of the entire human genome sequence at a cost of about US $1,000; many algorithms exist for the extraction of genome features, or
"signals", including peaks (enriched regions), mutations, or gene
expression (intensity of transcription activity). The missing gap is a
system supporting data integration and exploration, giving a biological
meaning to all the available information; such a system can be used,
e.g., for better understanding cancer or how environment influences
cancer development.
The GeCo Project (Data-Driven Genomic Computing, an ERC Advanced Grant
currently undergoing contract preparation) has the objective of
revisiting genomic computing through the lens of basic data management,
through models, languages, and instruments; the research group of DEIB
is among the few which are centering their focus on genomic data
integration. Starting from an abstract model, we already developed a
system that can be used to query processed data produced by several
large Genomic Consortia, including Encode and TCGA; the system employs
internally the Spark, Flink, and SciDB data engines, and prototypes can
already be accessed from Cineca servers or be downloaded from PoliMi
servers. During the five years of the ERC project, the system will be
enriched with data analysis tools and environments and will be made
increasingly efficient.
Most diseases have a genetic component, hence a system which is capable of integrating genomic big data is of paramount importance. Among the objectives of the project is the creation of an open-source system available to biological and clinical research; while the GeCo project
will provide public services which only use public data (anonymized and
made available for secondary use, i.e., knowledge discovery), the use of
the GeCo system within protected clinical contexts will enable
personalized medicine, i.e. the adaptation of therapies to specific
genetic features of patients. The most ambitious objective is the
development, during the 5-year ERC project, of an Internet for
Genomics, i.e. a protocol for collecting data from Consortia and
individual researchers, and a Google for Genomics, supporting indexing
and search over huge collections of genomic datasets.
Vaucanson-R is a software platform written essentially in
C++ (and python) for the manipulation of finite automata and
transducers in a very general setting. It is the last generation of a
series of libraries started in 2001. Its philosophy comes from this
long experience and is threefold: efficiency, genericity and
accessibility.
The platform indeed provides different access points (generic C++, C++,
python, command-line program) depending on one's programming
knowledge. It is thus easy to devise and/or execute simple
algorithms on standard (boolean) automata and get a visual feedback of
the result. On the other hand, it is also possible to write efficient
and generic programs that will work on weighted automata, for many
kinds of weighted semirings.
In this presentation, we will show how to use the python and
command-line layers interactively: building automata, executing
standard algorithms, etc. We will then give a few hints on how to go
further.
Joint work with Sylvain Lombardy (Bordeaux), Nelma Moreira (Porto),
Rogério Reis (Porto), and Jacques Sakarovitch.
Since the beginning, the Vaucanson project has been supported by the
InfRes department and LTCI at Telecom ParisTech. It has also been
supported by an ANR Project (2011-2014). Until 2014, Vaucanson
has been developed as a joint project with the LRDE at EPITA (Akim
Demaille, Alexandre Duret-Lutz and their students).
Victor Marsault defended his thesis in computer science at Télécom ParisTech in 2016. He will hold a post-doctoral position at the University of Liège, Belgium, starting in October.
Motivated by community detection, we characterise the spectrum of the non-backtracking matrix B in the Degree-Corrected Stochastic Block Model.
Specifically, we consider a random graph on n vertices partitioned into two equal-sized clusters. The vertices have i.i.d. weights \{\phi_u\}_{u=1}^{n} with second moment \Phi^{(2)}. The intra-cluster connection probability for vertices u and v is \phi_u \phi_v a/n and the inter-cluster connection probability is \phi_u \phi_v b/n.
We show that with high probability, the following holds: the leading eigenvalue of the non-backtracking matrix B is asymptotic to \rho = \Phi^{(2)} (a+b)/2. The second eigenvalue is asymptotic to \mu_2 = \Phi^{(2)} (a-b)/2 when \mu_2^2 > \rho, but asymptotically bounded by \sqrt{\rho} when \mu_2^2 \le \rho. All the remaining eigenvalues are asymptotically bounded by \sqrt{\rho}. As a result, a clustering positively correlated with the true communities can be obtained based on the second eigenvector of B in the regime where \mu_2^2 > \rho.
In previous work we showed that detection is impossible when \mu_2^2 < \rho, meaning that a phase transition occurs in the sparse regime of the Degree-Corrected Stochastic Block Model.
As a corollary, we obtain that Degree-Corrected Erdos-Renyi graphs asymptotically satisfy the graph Riemann hypothesis, a quasi-Ramanujan property.
A by-product of our proof is a weak law of large numbers for local-functionals on Degree-Corrected Stochastic Block Models, which could be of independent interest.
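The spectral picture above can be checked numerically on small instances. The sketch below builds the non-backtracking matrix B of a two-community degree-corrected graph, indexed by directed edges with B[(u,v),(v,w)] = 1 iff w != u, and compares its two leading eigenvalues with \Phi^{(2)}(a+b)/2 and \Phi^{(2)}(a-b)/2; the parameters are small and arbitrary, so finite-size fluctuations are expected.

```python
# Numerical check of the non-backtracking spectrum of a small degree-corrected
# two-community graph; parameters are arbitrary, finite-size effects expected.
import numpy as np
import networkx as nx
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigs

rng = np.random.default_rng(0)
n, a, b = 400, 14.0, 2.0
phi = rng.uniform(0.5, 1.5, n)                      # degree-correction weights
sigma = np.array([1] * (n // 2) + [-1] * (n // 2))  # two equal-sized communities

g = nx.Graph()
g.add_nodes_from(range(n))
for u in range(n):
    for v in range(u + 1, n):
        rate = a if sigma[u] == sigma[v] else b
        if rng.random() < phi[u] * phi[v] * rate / n:
            g.add_edge(u, v)

# Non-backtracking matrix: rows/columns indexed by directed edges.
edges = [(u, v) for u, v in g.edges()] + [(v, u) for u, v in g.edges()]
index = {e: i for i, e in enumerate(edges)}
B = lil_matrix((len(edges), len(edges)))
for (u, v) in edges:
    for w in g.neighbors(v):
        if w != u:                                  # forbid immediate backtracking
            B[index[(u, v)], index[(v, w)]] = 1.0

vals = eigs(B.tocsr(), k=2, which='LM', return_eigenvectors=False)
vals = vals[np.argsort(-np.abs(vals))]
phi2 = np.mean(phi**2)
print("two leading eigenvalues:", np.round(vals.real, 2))
print("predicted rho and mu_2 :", round(phi2 * (a + b) / 2, 2), round(phi2 * (a - b) / 2, 2))
```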
Network Function Virtualization (NFV) is an emerging approach that has received attention from both academia and industry as a way to improve flexibility, efficiency, and manageability of networks. NFV enables new ways to operate networks and to provide composite network services, opening the path toward new business models. As in cloud computing with the Infrastructure as a Service model, clients will be offered the capability to provision and instantiate Virtual Network Functions (VNF) on the NFV infrastructure of the network operators. In this paper, we consider the case where leftover VNF capacities are offered for bid. This approach is particularly interesting for clients to punctually provision resources to absorb peak or unpredictable demands and for operators to increase their revenues. We propose a game theoretic approach and make use of Multi-Unit Combinatorial Auctions to select the winning clients and the price they pay. Such a formulation allows clients to express their VNF requests according to their specific objectives. We solve this problem with a greedy heuristic and prove that this approximation of economic efficiency is the closest attainable in polynomial time and provides a payment system that motivates bidders to submit their true valuations. Simulation results show that the proposed heuristic achieves a market valuation close to the optimal (less than 10 % deviation) and guarantees that an important part of this valuation is paid as revenue to the operator. Joint work with Jean-Louis Rougier, Luigi Iannone, Mathieu Bouet and Vania Conan, to appear at ITC28 https://itc28.org/
In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISP) and Content Providers (CP): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious ISP-operated cache. The ISP allocates the cache storage to various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, ISPs only need to measure the aggregated miss rates of the individual CPs. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges to the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularities. Our results (i) testify to the feasibility of content-oblivious caches and (ii) show that the proposed algorithm achieves within 15% of the global optimum in our evaluation. Joint work with Gyorgy Dan and Dario Rossi, to appear at ITC28 https://itc28.org/
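A hedged sketch of the underlying idea (not the paper's exact algorithm) is given below: the ISP only observes each CP's aggregate miss rate, perturbs the current storage partition to estimate a subgradient of the total hit rate, takes a diminishing step, and projects back onto the set of feasible allocations. The miss-rate "oracle" is a synthetic stand-in for measurements on a real cache.

```python
# Hedged sketch of content-oblivious cache partitioning via a perturbed
# (sub)gradient estimate built only from aggregate per-CP miss rates.
import numpy as np

rng = np.random.default_rng(0)
C = 100.0                                   # total cache capacity
demand = np.array([5.0, 3.0, 1.0])          # per-CP request rates (arbitrary)

def miss_rate(alloc):
    """Toy decreasing miss-rate curves, standing in for measured per-CP miss rates."""
    return demand * np.exp(-alloc / 40.0)

def project_to_simplex(x, total):
    """Euclidean projection of x onto {y >= 0, sum(y) = total}."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / (np.arange(len(x)) + 1) > 0)[0][-1]
    return np.maximum(x - css[rho] / (rho + 1), 0.0)

alloc = np.full(3, C / 3)
for t in range(1, 500):
    delta = rng.choice([-1.0, 1.0], size=3)                  # random perturbation direction
    eps, step = 2.0, 50.0 / t
    # Two-point estimate of the gradient of the total hit rate (= minus total miss rate).
    grad_est = -(miss_rate(alloc + eps * delta).sum()
                 - miss_rate(alloc - eps * delta).sum()) / (2 * eps) * delta
    alloc = project_to_simplex(alloc + step * grad_est, C)

print("allocation:", np.round(alloc, 1), "total miss rate:", round(miss_rate(alloc).sum(), 3))
```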
We present a general framework for understanding system intelligence, i.e., the level of system smartness perceived by users, and propose a novel metric for measuring intelligence levels of dynamical systems, defined to be the maximum average reward obtained by proactively serving user demands, subject to a resource constraint. We provide an explicit characterization of the system intelligence, and show that it is jointly determined by user demand volume (opportunity to impress), demand correlation (user predictability), and system resource and action costs (flexibility to pre-serve). We then propose an online learning-aided control algorithm called Learning-aided Budget-limited Intelligent System Control (LBISC). We show that LBISC achieves an intelligence that is within O(N(T)^{-1/2} + ε) of the highest level, where N(T) represents the number of data samples collected within a learning period T and is proportional to the user population size in the system. Moreover, we show that LBISC possesses a much faster convergence time compared to non-learning based algorithms. The analysis of LBISC rigorously quantifies the impacts of data and user population, learning, and control on achievable system intelligence, and provides novel insights and guidelines for designing future smart systems.
The future of social networking is in the mobile world. Future network services are expected to center around human activity and behavior. Wireless networks (including ad hoc, sensor networks and DTNs) are expected to grow significantly and accommodate higher levels of mobility and interaction. In such a highly dynamic environment, networks need to adapt efficiently (performance-wise) and gracefully (correctness and functionality-wise) to growth and dynamics in many dimensions, including behavioral and mobility patterns, on-line activity and load. Understanding and realistically modeling this multi-dimensional space is essential to the design and evaluation of efficient protocols and services of the future Internet.
This level of understanding, needed to drive the modeling and protocol design, shall be developed using a data-driven paradigm. The design philosophy for the proposed paradigm is unique in that it begins with intensive analysis of measurements from the target contexts, which then drives the modeling, protocol and service design through a systematic framework, called TRACE. Components of TRACE include: 1. Tracing and monitoring of behavior, 2. Representing and Analyzing the data, 3. Characterizing behavioral profiles using data mining and clustering techniques, and finally 4. Employing the understanding and insight attained into developing realistic models of mobile user behavior, and designing efficient protocols and services for future mobile societies.
Tracing at a large scale represents the next frontier for sensor networks (sensing the human society). Our latest progress in that field (MobiLib) shall be presented, along with data mining and machine learning tools to meaningfully analyze the data. Several challenges will be presented and novel uses of clustering algorithms will be provided. Major contributions to the modeling of human mobility, namely the time-variant community model (TVC) and Community Mobility (COBRA), will also be discussed. In addition, a novel framework for measuring vehicular mobility at planet scale, using thousands of webcams around the world, shall be presented.
Insights developed through analysis, mining and modeling will be utilized to introduce and design a novel communication paradigm, called profile-cast, to support new classes of service for interest-aware routing and dissemination of information, queries and resource discovery, trust and participatory sensing (crowd sourcing) in future mobile networks. Unlike conventional - unicast, multicast or directory based - paradigms, the proposed paradigm infers user interest using implicit behavioral profiling via self-monitoring and mining techniques. To represent interest, a spatio-temporal representation is introduced to capture the users’ behavioral space. Users can identify similarity of interest based on their position in such a space.
The proposed profile-cast paradigm will act as an enabler of new classes of service, ranging from mobile social networking, and navigation of mobile societies and spaces, to computational health care, mHealth (mobile health), emergency management and education, among others. The ideas of similarity-based support groups will be specifically highlighted for potential applications in disease self-management, collaborative education, and emergency response.
Dr. Ahmed Helmy is a Professor and Graduate Director at the Computer and Information Science and Engineering (CISE) Department at the University of Florida (UF). He received his Ph.D. in Computer Science in 1999 from the University of Southern California (USC), his M.Sc. in Electrical Engineering (EE) in 1995 from USC, his M.Sc. in Engineering Mathematics in 1994 and his B.Sc. in Electronics and Communications Engineering in 1992 from Cairo University, Egypt. He was a key researcher in the Network Simulator NS-2 and Protocol-Independent Multicast (PIM) projects at USC/ISI from 1995 to 1999. Before joining UF in 2006, he was on the Electrical Engineering-Systems Department faculty at USC starting Fall 1999, where he founded and directed the Wireless and Sensor Networks Labs.
In 2002, he received the NSF CAREER Award for his research on resource discovery and mobility modeling in large-scale wireless networks (MARS). In 2000 he received the Zumberge Award, and in 2002 he received the best paper award from the IEEE/IFIP MMNS Conference. In 2003 he was the Electrical Engineering nominee for the USC Engineering Jr. Faculty Research Award, and a nominee for the Sloan Fellowship. In 2004 and 2005 he got the best faculty merit ranking at the Electrical Engineering department at USC. He was a winner in the ACM MobiCom SRC competition in 2007 and a finalist in 2008, a 2nd place winner in the ACM MobiCom WiNTECH demo competition in 2010, and a finalist/runner-up in the 2012 ACM MobiCom SRC competition. In 2013 he won the best paper award from ACM SIGSPATIAL IWCTS. In 2014 he won the Epilepsy Foundation award for innovation, and the ACM MobiCom Mobile App Competition (1st place) and startup pitch competition (2nd place). In 2015 he won the Internet Technical Committee (ITC) best paper award for seven IEEE ComSoc conferences/symposia of 2013. He is leading (or has led) several NSF funded projects including MARS, STRESS, ACQUIRE, AWARE and MobiBench.
His research interests include design, analysis and measurement of wireless ad hoc, sensor and mobile social networks, mobility modeling, multicast protocols, IP mobility and network simulation. He has published over 150 journal articles, conference papers and posters, book chapters, IETF RFCs and Internet drafts. His research is (or has been) supported by grants from NSF, KACST, Aalto University, USC, Intel, Cisco, DARPA, NASA, Nortel, HP, Pratt & Whitney, Siemens and SGI. He has over 12,200 citations with H-index=48 (Google Scholar).
Dr. Helmy is an editor of the IEEE Transactions on Mobile Computing (TMC), an area editor of the Ad hoc Networks Journal - Elsevier (since 2004), and an area editor of the IEEE Computer (since 2010). He was the finance chair of ACM MobiCom ’13, co-chair of ACM MobiSys HotPlanet’12, program co-chair for ACM MSWiM 2011, and ACM MobiCom CHANTS workshop 2011, co-chair of AdhocNets 2011, honorary program chair of IEEE/ACM IWCMC 2011, general chair of IWCMC 2010, vice-chair of IEEE MASS 2010, plenary panel chair of IEEE Globecom 2010, co-chair of IEEE Infocom Global Internet (GI) workshop 2008, and IFIP/IEEE MMNS 2006, vice-chair for IEEE ICPADS 2006, IEEE HiPC 2007, and local & poster chair for IEEE ICNP 2008 and 2009. He is ACM SIGMOBILE workshop coordination chair (for MobiCom, Mobihoc, Mobisys, Sensys) (since 2006). He has served on numerous committees of IEEE and ACM conferences on networks. He is a senior member of the IEEE and an ACM Distinguished Scientist.
In this talk we will deal with problems arising in device-to-device (D2D) wireless networks, where user devices also have the ability to cache content. In such networks, users are mobile and communication links can be spontaneously activated and dropped depending on the users’ relative position. Receivers request files from transmitters, these files having a certain popularity and file-size distribution. In our work a new performance metric is introduced, namely the Service Success Probability, which captures the specificities of D2D networks. For the case of Poisson Point Process node distribution and the SNR coverage model, explicit expressions are derived. Simulations support the analytical results and explain the influence of mobility and file-size distribution on the system performance, while providing intuition on how to appropriately cache content on mobile storage space. Of particular interest is the investigation of how different file-size distributions (Exponential, Uniform, or Heavy-Tailed) influence the performance.
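To make the setting concrete, here is a toy Monte Carlo estimate of a success probability under strong simplifying assumptions: transmitters drawn from a Poisson Point Process in a disk, a typical receiver at the origin, and success when some node caching the requested file is received above an SNR threshold. The Service Success Probability defined in the talk additionally accounts for file popularity, file sizes and mobility, which this sketch ignores; all parameter values below are illustrative.

```python
# Toy Monte Carlo sketch (simplifying assumptions, not the paper's exact
# model): transmitters form a Poisson Point Process in a disk, the typical
# receiver sits at the origin, and a request succeeds if some transmitter
# caching the requested file is received with SNR above a threshold.

import numpy as np

def success_probability(intensity, cache_prob, snr_thresh, radius=10.0,
                        power=1.0, noise=1e-4, alpha=4.0,
                        n_runs=5000, seed=0):
    """Fraction of runs in which the receiver at the origin finds a node
    caching the requested file whose SNR exceeds snr_thresh."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    successes = 0
    for _ in range(n_runs):
        n = rng.poisson(intensity * area)          # number of D2D transmitters
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))        # uniform positions in the disk
        has_file = rng.random(n) < cache_prob      # independent caching decisions
        if not has_file.any():
            continue
        d = max(r[has_file].min(), 1e-9)           # nearest node holding the file
        if power * d ** (-alpha) / noise >= snr_thresh:
            successes += 1
    return successes / n_runs

if __name__ == "__main__":
    for p in (0.1, 0.3, 0.6):
        print(p, round(success_probability(intensity=0.05, cache_prob=p,
                                           snr_thresh=10.0), 3))
```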
Ubiquitous smart technologies gradually transform modern homes into an Intranet of Things, where a multitude of connected devices allow for novel home automation services (e.g., energy or bandwidth savings, comfort enhancement, etc.). Optimizing and enriching the Quality of Experience (QoE) of residential users emerges as a critical differentiator for Internet and Communication Service providers (ISPs and CSPs, respectively) and heavily relies on the analysis of various kinds of data (connectivity, performance,
usage) gathered from home networks. In this paper, we are interested in new Machine-to-Machine data analysis techniques that go beyond binary association rule mining for traditional market basket analysis considered by previous works, to analyze individual device logs of home gateways. Based on a multidimensional pattern mining framework, we extract complex device co-usage patterns of 201 residential broadband users of an ISP, subscribed to a triple-play service. Such fine-grained device usage patterns
provide valuable insights for emerging use cases such as an adaptive usage of home devices, and also “things” recommendation.
We consider the problem of accurately estimating the reliability of workers based on noisy labels they provide, which is a fundamental question in crowdsourcing. We propose a novel lower bound on the minimax estimation error which applies to any estimation procedure. We further propose Triangular Estimation (TE), an algorithm for estimating the reliability of workers. TE has low complexity, may be implemented in a streaming setting when labels are provided by workers in real time, and does not rely on an iterative procedure. We further prove that TE is minimax optimal and matches our lower bound. We conclude by assessing the performance of TE and other state-of-the-art algorithms on both synthetic and real-world data sets.
Joint work with Thomas Bonald (Telecom ParisTech)
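The triangular idea can be illustrated in a few lines: for ±1 labels with symmetric worker noise, the cross-correlation between two workers factors as the product of their reliabilities, so any triple of workers yields each reliability up to sign. The sketch below is a simplification (the actual TE algorithm selects informative triples, handles signs, and runs in a streaming fashion); the simulated accuracies are arbitrary.

```python
# Minimal illustration (a simplification, not the exact TE algorithm):
# for +/-1 labels with symmetric worker noise, E[X_i X_j] = w_i * w_j where
# w_i = 2*accuracy_i - 1, so for any triple (i, j, k)
#     |w_i| = sqrt( C_ij * C_ik / C_jk ).

import numpy as np

def triangular_reliabilities(labels):
    """labels: (n_workers, n_tasks) array of +/-1 answers."""
    n = labels.shape[0]
    C = (labels @ labels.T) / labels.shape[1]   # empirical cross-correlations
    w = np.zeros(n)
    for i in range(n):
        # Any two other workers form the triple in this toy version;
        # TE carefully selects informative triples.
        j, k = [x for x in range(n) if x != i][:2]
        w[i] = np.sqrt(max(C[i, j] * C[i, k] / C[j, k], 0.0))
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.choice([-1, 1], size=5000)
    accuracies = np.array([0.9, 0.8, 0.7, 0.55])
    labels = np.array([np.where(rng.random(truth.size) < a, truth, -truth)
                       for a in accuracies])
    print(np.round(triangular_reliabilities(labels), 2),
          "true:", 2 * accuracies - 1)
```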
After two very prolific and interesting days of technical talks at
the LINCS yearly workshop, few could fancy yet another technical presentation.
In this talk we illustrate the strong relationship between teaching and research
through three illustrative examples. The first [1] is a body of research work prompted
by student questions during a course. The second [2] is a body of work prompted by the need to
explain to digital natives the basic properties of their interconnected world,
namely the Internet. The third [3,4] is a body of work where undergraduate students are
actively involved during courses. The talk covers the material that usually stays behind the scenes,
but is as important as the results of the research itself.
[1] http://www.enst.fr/~drossi/ledbat
[2] http://www.enst.fr/~drossi/anycast
[3] check the INF570 page at http://www.enst.fr/~drossi/
[4] this will be covered but there are no links yet :)
It is known that given a CM sextic field, there exists a non-empty finite set of abelian varieties of dimension 3 that have complex multiplication by this field. Under certain conditions on the field and the CM-type, this abelian variety can be guaranteed to be principally polarizable and simple. This ensures that the abelian variety is the Jacobian of a hyperelliptic curve or a plane quartic curve.
In this talk, we begin by showing how to generate a full set of period matrices for each isomorphism class of simple, principally polarized abelian variety with CM by a sextic field K. We then show how to determine whether the abelian variety is a hyperelliptic or plane quartic curve. Finally, in the hyperelliptic case, we show how to compute a model for the curve. (Joint work with J. Balakrishnan, S. Ionica, and K. Lauter.)
Many of the most costly security compromises that enterprises suffer manifest
as tiny trickles of behavior hidden within an ocean of other site activity.
This talk examines design patterns applicable to developing robust detectors
for particular forms of such activity. The themes include research pitfalls,
the crucial need to leverage domain knowledge in an apt fashion, and why
machine learning is very difficult to effectively apply for such detection.
Vern Paxson is a Professor of Electrical Engineering and Computer Sciences
at UC Berkeley. He also leads the Networking and Security Group at the
International Computer Science Institute in Berkeley, and has an appointment
as a Staff Scientist at the Lawrence Berkeley National Laboratory. His
research focuses heavily on measurement-based analysis of network activity
and Internet attacks. He works extensively on high performance network
monitoring, detection algorithms, cybercrime, and countering censorship.
In 2006 he was inducted as a Fellow of the Association for Computing
Machinery (ACM). In 2011 he received ACM’s SIGCOMM Award, which recognizes
lifetime contribution to the field of communication networks, "for his
seminal contributions to the fields of Internet measurement and Internet
security, and for distinguished leadership and service to the Internet
community." His measurement work has also been recognized by ACM’s Grace
Murray Hopper Award and by the 2015 IEEE Internet Award. In 2013 he
co-founded Broala, a startup that provides commercial-grade support and
products for the "Bro" network monitoring system that he created and has
advanced through his research for many years.
We consider a network of multi-server queues wherein each job can be processed in parallel by any subset of servers within a pre-defined set that depends on its class. Each server is allocated in FCFS order at each queue. Jobs arrive according to Poisson processes, have independent exponential service requirements and are routed independently at random. We prove that the network state has a product-form stationary distribution, in the open, closed and mixed cases. From a practical perspective, we propose an algorithm on this basis to allocate the resources of a computer cluster.
I present our ACM UIST’15 Best Paper Award-winning research on Webstrates: Shareable Dynamic Media. In this work, we revisit Alan Kay’s early vision of dynamic media, which blur the distinction between document and application. We introduce shareable dynamic media, which are malleable by users, who may appropriate them in idiosyncratic ways; shareable among users, who can collaborate on multiple aspects of the media; and distributable across diverse devices and platforms. We present Webstrates, an environment for exploring shareable dynamic media. Webstrates augment web technology with real-time sharing. They turn web pages into substrates, i.e. software entities that act as applications or documents depending upon use. We illustrate Webstrates with two implemented case studies: users collaboratively author an article with functionally and visually different editors that they can personalize and extend at run-time; and they orchestrate its presentation and audience participation with multiple devices.
Use of anycast IP addresses has increased in the last few years: once relegated to DNS root and top-level domain servers, anycast is now commonly used to assist distribution of general purpose content by CDN providers. Yet, most anycast discovery methodologies rely so far on DNS, which limits their usefulness to this particular service. This raises the need for protocol agnostic methodologies, that should additionally be as lightweight as possible in order to scale up anycast service discovery.
Our anycast discovery method allows for exhaustive and accurate enumeration and city-level geolocation of anycast replicas, under the constraint of leveraging only a handful of latency measurements from a set of known probes. The method exploits an iterative workflow to enumerate (optimization problem) and geolocate (classification problem) anycast instances. The method is so lightweight and protocol agnostic that we were able to perform several censuses of the whole IPv4 Internet, cataloguing all anycast deployments. Finally, we carry out a passive study of anycast traffic to refine the picture of services currently served over IP anycast. All our code, datasets and further information are available at http://www.enst.fr/~drossi/anycast
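The geometric intuition behind latency-based anycast detection can be made concrete with a toy check: if the distance between two vantage points exceeds what light in fiber could cover within their combined RTT budgets, the two probes must be reaching distinct replicas. The probe coordinates, RTT values and propagation-speed constant below are illustrative assumptions, and the full enumeration/geolocation workflow of the paper is not reproduced.

```python
# Toy check of the geometric idea behind latency-based anycast detection
# (an illustration, not the full enumeration/geolocation workflow): if the
# latency disks of two vantage points measuring the same IP cannot
# intersect, the two probes must be reaching distinct anycast replicas.

import math

KM_PER_MS_RTT = 100.0   # ~200,000 km/s in fiber, halved because RTT is a round trip

def great_circle_km(p, q):
    """Haversine distance between two (lat, lon) points, in km."""
    (lat1, lon1), (lat2, lon2) = p, q
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def certifies_two_replicas(probe_a, probe_b):
    """Each probe is ((lat, lon), rtt_ms) towards the same anycast address.
    True if the two latency disks are disjoint, which certifies that the
    probes reach at least two distinct instances."""
    (pos_a, rtt_a), (pos_b, rtt_b) = probe_a, probe_b
    return great_circle_km(pos_a, pos_b) > (rtt_a + rtt_b) * KM_PER_MS_RTT

if __name__ == "__main__":
    paris = ((48.85, 2.35), 4.0)       # 4 ms RTT measured from a Paris probe
    tokyo = ((35.68, 139.69), 6.0)     # 6 ms RTT measured from a Tokyo probe
    print(certifies_two_replicas(paris, tokyo))   # True: ~9,700 km apart
```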
In the "big data" era, data is often dirty in nature for several reasons, such as typos, missing values, and duplicates. The intrinsic problem with dirty data is that it can lead to poor results in analytic tasks. Therefore, data cleaning is an unavoidable task in data preparation to have reliable data for final applications, such as querying and mining. Unfortunately, data cleaning is hard in practice and it requires a great amount of manual work. Several systems have been proposed to increase automation and scalability in the process. They rely on a formal, declarative approach based on first order logic: users provide high-level specifications of their tasks, and the systems compute optimal solutions without human intervention on the generated code. However, traditional "top-down" cleaning approaches quickly become impractical when dealing with the complexity and variety found in big data.
In this talk, we first describe recent results in tackling data cleaning with a declarative approach. We then discuss how this experience has pushed several groups to propose new systems that recognize the central role of the users in cleaning big data.
Paolo Papotti is an Assistant Professor of Computer Science in the School of Computing, Informatics, and Decision Systems Engineering (CIDSE) at Arizona State University. He got his Ph.D. in Computer Science at Universita’ degli Studi Roma Tre (2007, Italy) and before joining ASU he had been a senior scientist at Qatar Computing Research Institute.
His research is focused on systems that assist users in complex, necessary tasks and that scale to large datasets with efficient algorithms and distributed platforms. His work has been recognized with two "Best of the Conference" citations (SIGMOD 2009, VLDB 2015) and with a best demo award at SIGMOD 2015. He is group leader for SIGMOD 2016 and associate editor for the ACM Journal of Data and Information Quality (JDIQ).
Resources such as Web pages or videos that are published in the Internet are referred to by their Uniform Resource Locator (URL). If a user accesses a resource via its URL, the host name part of the URL needs to be translated into a routable IP address. This translation is performed by the Domain Name System service (DNS). DNS also plays an important role when Content Distribution Networks (CDNs) are used to host replicas of popular objects on multiple servers that are located in geographically different areas.
A CDN makes use of the DNS service to infer client location and direct the client request to the optimal server. While most Internet Service Providers (ISPs) offer a DNS service to their customers, clients may instead use a public DNS service. The choice of the DNS service can impact the performance of clients when retrieving a resource from a given CDN. In this paper we study the impact on download performance for clients using either the DNS service of their ISP or the public DNS service provided by Google DNS. We adopt a causal approach that exposes the structural dependencies of the different parameters impacted by the DNS service used and we show how to model these dependencies with a Bayesian network. The Bayesian network allows us to explain and quantify the performance benefits seen by clients when using the DNS service of their ISP. We also discuss how to further improve client performance. Joint work with
Hadrien Hours, Patrick Loiseau, Alessandro Finamore and Marco Mellia
Wi-Fi is the preferred way of accessing the internet for many devices at home, but it is vulnerable to performance problems due to sharing an unlicensed medium. In this work, we propose a method to estimate the link capacity of a Wi-Fi link, using physical layer metrics that can be passively sampled on commodity access points. We build a model that predicts the maximum UDP throughput a device can sustain, extending previous models that do not consider IEEE 802.11n optimizations such as frame aggregation. We validate our method through controlled experiments in an anechoic chamber and artificially create different link quality conditions. We estimate the link capacity using our method and compare it to the state of the art. Over 95% of the link capacity predictions present errors below 5% when using our method with reference data. We show how the link capacity estimation enables Wi-Fi diagnosis in two case studies where we predict the available bandwidth under microwave interference and in an office environment.
Cloud-Radio Access Network (C-RAN) is an emerging technology that holds alluring promises for mobile network operators regarding capital and operation cost savings. However, many challenges still remain before full commercial deployment of C-RAN solutions. Dynamic resource allocation algorithms are needed to cope with significantly fluctuating traffic loads. Those algorithms must target not only a better quality of service delivery for users, but also less power consumption and better interference management, with the possibility to turn off RRHs that are not transmitting. To this end, we propose a dynamic two-stage design for downlink OFDMA resource allocation and BBU-RRH assignment in C-RAN. Simulation results show that our proposal achieves not only a high satisfaction rate for mobile users, but also minimal power consumption and significant BBU savings, compared to state-of-the-art schemes.
Yazid Lyazidi received his Diploma in Computer Science and Telecommunications Engineering from INPT, Rabat, Morocco, in 2014 and the M.S. degree in Advanced Wireless Systems from SUPELEC, Gif-sur-Yvette, France, the same year. He is now a PhD candidate at LIP6, University Pierre and Marie Curie, Paris, under the supervision of Prof. Rami Langar and Dr. Nadjib AITSAADI. His research topics include energy minimization and resource management in Cloud RAN.
Dynamic adaptive HTTP (DASH) based streaming is steadily becoming the most popular online video streaming technique. DASH streaming provides seamless playback by adapting the video quality to the network conditions during the video playback. A DASH server supports adaptive streaming by hosting multiple representations of the video and each representation is divided into small segments of equal playback duration. At the client end, the video player uses an adaptive bitrate selection (ABR) algorithm to decide the bitrate to be selected for each segment depending on the current network conditions. Currently proposed ABR algorithms ignore the fact that the segment sizes significantly vary for a given video bitrate. Due to this, even though an ABR algorithm is able to measure the network bandwidth, it may fail to predict the time to download the next segment. In this work, we propose a segment-aware rate adaptation (SARA) algorithm that considers the segment size variation in addition to the estimated path bandwidth and the current buffer occupancy to accurately predict the time required to download the next segment. Our results show that SARA provides a significant gain over the basic algorithm in the video quality delivered, without noticeably impacting the video switching rates.
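A minimal sketch of the segment-aware decision follows, under simplifying assumptions (no buffer-threshold state machine and no weighted bandwidth estimator, both of which SARA uses): pick the highest representation whose actual next-segment size can be downloaded before the playout buffer drains. The segment sizes and parameter values in the demo are made up.

```python
# Minimal sketch of a segment-aware bitrate decision (a simplification of
# SARA, not the full algorithm): pick the highest representation whose
# *actual* next-segment size can be fetched before the buffer runs dry.

def pick_bitrate(next_segment_sizes, est_bandwidth, buffer_s, safety=0.8):
    """next_segment_sizes: {bitrate_kbps: size in bits of the next segment},
    est_bandwidth: estimated throughput in bits/s,
    buffer_s: current playout buffer occupancy in seconds."""
    for bitrate in sorted(next_segment_sizes, reverse=True):
        download_time = next_segment_sizes[bitrate] / est_bandwidth
        if download_time <= safety * buffer_s:
            return bitrate
    return min(next_segment_sizes)        # fall back to the lowest level

if __name__ == "__main__":
    # Sizes vary per segment even at a fixed nominal bitrate, which is
    # exactly what a purely rate-based controller ignores.
    sizes = {4500: 22_000_000, 3000: 9_000_000, 1500: 4_000_000, 750: 2_000_000}
    print(pick_bitrate(sizes, est_bandwidth=4_000_000, buffer_s=6.0))  # -> 3000
```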
Deep Medhi is Curators’ Professor in the Department of Computer Science and Electrical Engineering at the University of Missouri-Kansas City, USA. He received his B.Sc. in Mathematics from Cotton College, Gauhati University, India, his M.Sc. in Mathematics from the University of Delhi, India, and his Ph.D. in Computer Sciences from the University of Wisconsin-Madison, USA. Prior to joining UMKC in 1989, he was a member of the technical staff at AT&T Bell Laboratories. He was an invited visiting professor at the Technical University of Denmark, a visiting research fellow at Lund Institute of Technology, Sweden, a research visitor at the University of Campinas, Brazil under the Brazilian Science Mobility Program, and served as a Fulbright Senior Specialist. He is the Editor-in-Chief of Springer's Journal of Network and Systems Management, and is on the editorial board of IEEE/ACM Transactions on Networking, IEEE Transactions on Network and Service Management, and IEEE Communications Surveys & Tutorials. He is co-author of the books Routing, Flow, and Capacity Design in Communication and Computer Networks (2004) and Network Routing: Algorithms, Protocols, and Architectures (2007), both published by Morgan Kaufmann/Elsevier.
Non-orthogonal multiple access (NOMA) is a promising candidate for wireless access in 5G cellular systems, and has recently received considerable attention from both academia and industry. In a NOMA system, multiple users share the same carrier frequency at the same time, and their messages are decoded via successive interference cancellation (SIC). While the idea of SIC was proposed a long time ago, it is now regarded as a practical solution for wireless networks. For SIC to work properly, the signal-to-interference ratio of each user needs to be high enough for successful decoding and subsequent cancellation. To meet such a requirement, transmitter power control becomes indispensable. This talk focuses on power control for inter-cell interference management in NOMA systems. The classical power control results based on the Perron-Frobenius theory of non-negative matrices will be reviewed, and how they can be applied to multi-cell NOMA systems will be presented. For practical application to 5G systems, it is desirable to have distributed algorithms for power control, which will also be discussed.
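As background for the Perron-Frobenius framework mentioned above, the classical distributed power-control iteration (Foschini-Miljanic style) can be sketched as follows: each link scales its power by the ratio of its SINR target to its measured SINR, and the iteration converges whenever the targets are jointly feasible (a spectral radius condition). The channel gains and targets below are illustrative, and the SIC-ordering constraints specific to multi-cell NOMA are not modeled.

```python
# Classical distributed power control (Foschini-Miljanic style), the kind
# of fixed-point iteration that Perron-Frobenius theory analyses: each
# link scales its power by target_SINR / measured_SINR. It converges when
# the targets are jointly feasible (spectral radius condition).

import numpy as np

def distributed_power_control(G, targets, noise=1e-3, n_iter=100):
    """G[i, j]: channel gain from transmitter j to receiver i."""
    n = len(targets)
    p = np.full(n, 1e-2)
    for _ in range(n_iter):
        for i in range(n):
            interference = sum(G[i, j] * p[j] for j in range(n) if j != i) + noise
            sinr = G[i, i] * p[i] / interference
            p[i] = p[i] * targets[i] / sinr     # scale toward the target
    return p

if __name__ == "__main__":
    G = np.array([[1.0, 0.1, 0.05],
                  [0.08, 0.9, 0.1],
                  [0.05, 0.12, 1.1]])
    targets = np.array([2.0, 2.0, 2.0])          # SINR targets (linear scale)
    p = distributed_power_control(G, targets)
    # Check that the achieved SINRs sit (approximately) at their targets.
    for i in range(3):
        interf = sum(G[i, j] * p[j] for j in range(3) if j != i) + 1e-3
        print(round(G[i, i] * p[i] / interf, 3))
```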
Dr. Chi Wan Sung is an Associate Professor in the Department of Electronic Engineering at City University of Hong Kong. He received the BEng, MPhil, and PhD degrees in information engineering from the Chinese University of Hong Kong in 1993, 1995, and 1998, respectively. He has served as editor for the ETRI journal and for the Transactions on Emerging Telecommunications Technologies. His research interests include power control and resource allocation, cooperative communications, network coding, and distributed storage systems.
The Internet is made of almost 50,000 ASes exchanging routing information
thanks to BGP. Inside each AS, information is redistributed via iBGP sessions.
This allows each router to map a destination exterior to the AS with a given
egress point. The main redistribution mechanisms used today (iBGP full mesh, Route
Reflectors and BGP confederations) either guarantee selection of the best
egress point or enhance scalability, but not both. In this paper, we propose a new way
to perform iBGP redistribution in an AS based on its IGP topology, conciliating
optimality in route selection and scalability. Our contribution is threefold.
First, we demonstrate the tractability of our approach and its benefits.
Second, we provide an open-source implementation of our mechanism based on
Quagga. Third, we illustrate the feasibility of our approach through
simulations performed under ns-3 and compare its performance with full mesh
and Route Reflection. Joint work with Anthony Lambert (Orange Labs) and Steve Uhlig (Queen Mary University of London), to appear at IEEE INFOCOM 2016.
Software Defined Networking paves the way for new services that enable better utilization of network resources. Bandwidth Calendaring (BWC) is a typical example that exploits knowledge about future traffic to optimally pack the arising demands over the network. We consider a generic BWC instance, where a network operator has to accommodate, at minimum cost, demands with predetermined but time-varying bandwidth requirements that can be scheduled within a specific time window. By exploiting the structure of the problem, we propose low complexity methods for the decomposition of the original mixed-integer problem into simpler ones. Our numerical results reveal that the proposed solution approach is near-optimal and outperforms standard methods based on linear programming relaxations and randomized rounding by more than 20%.
Lazaros Gkatzikis (S'09-M'13) obtained the Ph.D. degree in computer engineering and communications from the University of Thessaly, Volos, Greece. Currently, he is a Research Staff Member at Huawei France Research Center, Paris, France. In the fall of 2011, he was a Research Intern at the Technicolor Paris Research Laboratory. He has been a Postdoctoral Researcher in the team of Prof. L. Tassiulas in Volos, Greece (2013) and at the KTH Royal Institute of Technology, Stockholm, Sweden (2014) working with Associate Prof. C. Fischione. His research interests include network optimization, mechanism design, and network performance analysis.
Telco operators view network densification as a viable solution for the challenging goals set for the next generation cellular networks. Among other goals, network densification would help accommodate the ever-increasing mobile demand or would allow a considerable reduction in connection latency. Nevertheless, along with network densification, many issues arise. For example, severe inter-cell interference (ICI) may considerably limit network capacity if coordination among base stations is not used. To tackle such a problem, we focus on the well-known Almost Blank Sub-Frame (ABS or ABSF) solution. With ABSF, in order to reduce interference, not all the base stations are allowed to transmit at the same time. In this talk, we question the ability of ABSF to improve both aggregate system throughput and transmission efficiency, and we show how ABSF may pursue other objectives. In contrast, we show that a better way of improving system throughput is enabling Device-to-Device communications among UEs and opportunistic forwarding. Therefore, we propose a novel mechanism (OBS) that simultaneously exploits and coordinates ABSF and opportunistic forwarding to improve system throughput and user fairness. Our approach has been validated against state-of-the-art approaches through an extensive simulation campaign, even using real data from a network operator.
We consider a restless multi-armed bandit in which each arm
can be in one of two states. When an arm is sampled, the state of the arm
is not available to the sampler. Instead, a binary signal with a known
randomness that depends on the state of the arm is made available. No
signal is displayed if the arm is not sampled. An arm-dependent reward is
accrued from each sampling. In each time step, each arm changes state
according to known transition probabilities which in turn depend on
whether the arm is sampled or not sampled. Since the state of the arm is
never visible and has to be inferred from the current belief and a
possible binary signal, we call this the hidden Markov bandit. Our
interest is in a policy to select the arm(s) in each time step that
maximises the infinite horizon discounted reward. Specifically, we seek
the use of Whittle's index in selecting the arms.
We first analyze the single-armed bandit and show that it admits an
approximate threshold-type optimal policy when the no-sample action
is subsidised. Next, we show that this also satisfies an approximate
indexability property. Numerical examples support the analytical results.
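A minimal sketch of the belief (filtering) update that the hidden Markov bandit builds on is given below, together with an illustrative threshold rule on the belief; the probabilities in the demo model are arbitrary and the Whittle index computation itself is not shown.

```python
# Minimal sketch of the belief update for one two-state hidden arm (an
# illustration of the filtering step, not the index computation itself).
# b = P(arm is in state 1). When the arm is sampled we observe a binary
# signal with state-dependent probabilities and Bayes-update b; in both
# cases b is then propagated through the action-dependent transition matrix.

def bayes_update(b, signal, p_signal_given_state):
    """p_signal_given_state[s] = P(signal = 1 | state = s)."""
    like1 = p_signal_given_state[1] if signal else 1 - p_signal_given_state[1]
    like0 = p_signal_given_state[0] if signal else 1 - p_signal_given_state[0]
    return b * like1 / (b * like1 + (1 - b) * like0)

def propagate(b, transition):
    """transition[s] = P(next state = 1 | current state = s)."""
    return b * transition[1] + (1 - b) * transition[0]

def step(b, sampled, signal, model):
    if sampled:
        b = bayes_update(b, signal, model["signal"])
        return propagate(b, model["trans_sampled"])
    return propagate(b, model["trans_rest"])

def threshold_policy(b, threshold=0.5):
    """Illustration of a threshold-type rule on the belief."""
    return b >= threshold       # True = sample the arm

if __name__ == "__main__":
    model = {"signal": {0: 0.2, 1: 0.8},          # P(signal=1 | state)
             "trans_sampled": {0: 0.3, 1: 0.7},   # P(next=1 | state) if sampled
             "trans_rest": {0: 0.1, 1: 0.9}}      # P(next=1 | state) if rested
    b = 0.5
    for sampled, signal in [(True, 1), (True, 0), (False, None), (True, 1)]:
        b = step(b, sampled, signal, model)
        print(round(b, 3), "-> sample" if threshold_policy(b) else "-> rest")
```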
Computational Causal Discovery aims to induce causal models,
causal networks, and causal relations from observational data without
performing or by performing only a few interventions (perturbations,
manipulations) of a system. While predictive analytics create models that
predict customer behavior for example, causal analytics create models that
dictate how to affect customer behavior. A recent approach to causal
discovery, which we call logic-based integrative causal discovery, will be
presented. This approach is more robust to statistical errors, makes more
realistic and less restrictive assumptions (e.g., admits latent confounding
factors and selection bias in the data) and accepts and reasons with
multiple heterogeneous datasets that are obtained under different sampling
criteria, different experimental conditions (perturbations, interventions),
and measuring different quantities (variables). The approach significantly
extends causal discovery based on Bayesian Networks, the simplest causal
model available, and is much more suitable for real business or scientific
data analysis.
Prof. Ioannis Tsamardinos is Associate Professor at the Computer
Science Department of University of Crete and co-founder of Gnosis Data
Analysis IKE. Prof. Tsamardinos has over 70 publications in international
journals, conferences, and books. He has participated in several national,
EU, and US funded research projects. Distinctions with colleagues and
students include the best performance in one of the four tasks in the
recent First Causality Challenge Competition, ISMB 2005 Best Poster Winner,
a Gold Medal in the Student Paper Competition in MEDINFO 2004, the
Outstanding Student Paper Award in AIPS 2000, the NASA Group Achievement
Award for participation in the Remote Agent team and others. He is a
regular reviewer for some of the leading Machine Learning journals and
conferences. Statistics on recognition of work include more than 4500
citations and an h-index of 28 (as estimated by the Publish or Perish tool).
Prof. Tsamardinos has recently been awarded the European and Greek national
grants of excellence, the ERC Consolidator and the ARISTEIA II grants
respectively (the equivalent of the NSF Young Investigator Award). Prof.
Tsamardinos has pioneered the integrative causal analysis and the
logic-based approach to causal discovery. The ERC grant in particular
regards the development of novel integrative causal discovery methods and
their application to biological mass cytometry data.
Lattice network coding is employed in physical-layer network coding, with applications to two-way relay networks and multiple-access channels. The receiver wants to compute a linear combination of the source symbols, which are drawn from a finite alphabet equipped with some algebraic structure. In this talk, we construct lattice network codes from lattices such as the E8 lattice and the Barnes-Wall lattice, which have the best known packing density and shaping gain in dimension 8 and 16, respectively. Simulation results demonstrate that there is a significant performance gain in comparison to the baseline lattice network code with hyper-cube shaping.
Kenneth Shum received the B.Eng. degree in Information Engineering from the Chinese University of Hong Kong in 1993, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California in 1995 and 2000, respectively. He is now a research fellow in the Institute of Network Coding, CUHK. His research interests include information theory, coding theory and cooperative communication in wireless networks. He is also a member of the Composers and Authors Society of Hong Kong (CASH) and has an Erdos number of 2.
HTTP is one crucial element of today's Internet, almost to the point of being the “thin waist” that used to be identified with the IP protocol not so long ago. Yet HTTP is changing, with recent proposals such as SPDY, HTTP2 and QUIC that aim at addressing some of the long-standing shortcomings of the HTTP/1 protocol family.
In this talk, we will present ongoing work to assess the impact that HTTP evolution is expected to have on
the quality of user experience (QoE), based on experiments where we collect feedback (i.e., Mean Opinion Scores) from a panel of users on real webpages, as well as define and automatically collect QoE metrics on the same experiments.
We consider a centralized content delivery infrastructure where a large number of storage-intensive files are replicated across several collocated servers. To achieve scalable delays in file downloads under stochastic loads, we allow multiple servers to work together as a pooled resource to meet individual download requests. In such systems important questions include: How and where to replicate files; How significant are the gains of resource pooling over policies which use single server per request; What are the tradeoffs among conflicting metrics such as delays, reliability and recovery costs, and power; How robust is performance to heterogeneity and choice of fairness criterion; etc.
In this talk we provide a simple performance model for large systems towards addressing these basic questions. For large systems where the overall system load is proportional to the number of servers, we establish scaling laws among delays, system load, number of file replicas, demand heterogeneity, power, and network capacity.
We approach the problem of computing geometric centralities, such as closeness and harmonic centrality, on very large graphs;
traditionally this task requires an all-pairs shortest-path computation in the exact case, or a number of breadth-first traversals
for approximated computations, but these techniques yield very weak statistical guarantees on highly disconnected graphs. We rather
assume that the graph is accessed in a semi-streaming fashion, that is, that adjacency lists are scanned almost sequentially, and that
a very small amount of memory (in the order of a dozen bytes) per node is available in core memory. We leverage the newly discovered
algorithms based on HyperLogLog counters, making it possible to approximate a number of geometric centralities at a very high speed
and with high accuracy. While the application of similar algorithms for the approximation of closeness was attempted in the MapReduce
framework, our exploitation of HyperLogLog counters reduces exponentially the memory footprint, paving the way for in-core processing
of networks with a hundred billion nodes using just 2TiB of RAM. Moreover, the computations we describe are inherently parallelizable,
and scale linearly with the number of available cores. Another application of the same framework is the computation of the distance distribution, and indeed we were able to use our algorithms to compute that Facebook has only four degrees of separation.
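The ball-growing iteration underlying this approach can be sketched with exact Python sets standing in for the HyperLogLog counters: the update structure is the same, and the real algorithm simply replaces each set with a probabilistic counter of a dozen bytes, which is what makes graphs with a hundred billion nodes feasible. The toy graph below is illustrative.

```python
# Sketch of the ball-growing iteration behind HyperBall-style harmonic
# centrality; exact sets stand in here for the HyperLogLog counters used
# by the real algorithm.

def harmonic_centrality(adj):
    """adj: dict node -> list of successors. Returns, for each node x, the
    sum over reachable y != x of 1 / d(x, y), accumulated from the number
    of new nodes entering the ball of radius t at each iteration (run on
    the transposed graph for the usual incoming-distance definition)."""
    balls = {v: {v} for v in adj}                # balls of radius 0
    sizes = {v: 1 for v in adj}
    harmonic = {v: 0.0 for v in adj}
    t = 0
    while True:
        t += 1
        # Ball of radius t = ball of radius t-1 union the successors' balls;
        # HyperBall performs the same unions on HyperLogLog counters.
        new_balls = {v: set(balls[v]) for v in adj}
        for v in adj:
            for w in adj[v]:
                new_balls[v] |= balls[w]
        changed = False
        for v in adj:
            gained = len(new_balls[v]) - sizes[v]    # nodes at distance exactly t
            if gained:
                harmonic[v] += gained / t
                sizes[v] = len(new_balls[v])
                changed = True
        balls = new_balls
        if not changed:
            return harmonic

if __name__ == "__main__":
    graph = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}
    print({v: round(c, 2) for v, c in harmonic_centrality(graph).items()})
```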
Sebastiano Vigna’s research focuses on the interaction between theory and practice. He has worked on highly
theoretical topics such as computability on the reals, distributed computability, self-stabilization, minimal
perfect hashing, succinct data structures, query recommendation, algorithms for large graphs, pseudorandom
number generation, theoretical/experimental analysis of spectral rankings such as PageRank, and axiomatization
of centrality measures, but he is also (co)author of several widely used software tools ranging from high-performance
Java libraries to a model-driven software generator, a search engine, a crawler, a text editor and a graph
compression framework. In 2011 he collaborated on the computation of the distance distribution of the whole Facebook graph,
from which it was possible to evince that on Facebook there are just 3.74 degrees of separation. Recently, he participated
in the analysis of the largest available public web crawl (Common Crawl 2012), which led to the publication of the first
open ranking of web sites (http://wwwranking.webdatacommons.org/). His work on Elias-Fano coding and quasi-succinct
indices is at the basis of the code of Facebook’s "folly" library (https://github.com/facebook/folly/blob/master/folly/experimental/EliasFanoCoding.h).
He also collaborated on the first open ranking of Wikipedia pages (http://wikirank.di.unimi.it/), which is based on his body of work on centrality
in networks. His pseudorandom number generator xorshift128+ is currently used by the JavaScript engine V8 of Chrome, as well by Safari and Firefox,
and it is the stock generator of the Erlang language. Sebastiano Vigna obtained his PhD in Computer Science from the Universita’ degli
Studi di Milano, where he is currently an Associate Professor.
Large scale deployments of general cache networks,
such as Content Delivery Networks or Information Centric
Networking architectures, raise new challenges regarding their
performance prediction and network planning. Analytical models
and Monte Carlo approaches are already available to the scientific
community. However, complex interactions between replacement,
replication, and routing on arbitrary topologies make these approaches
hardly configurable. Additionally, huge content catalogs
and large network sizes add non-trivial scalability problems,
making their solution computationally demanding.
We propose a new technique for the performance evaluation of
large scale caching systems that intelligently integrates elements
of stochastic analysis within a Monte Carlo approach. Our method
leverages the intuition that the behavior of realistic networks of
caches, be they LRU or even more complex caches, can be
well represented by means of much simpler Time-To-Live (TTL)-
based caches. This TTL can be either set with the guidance of a simple
yet accurate stochastic model (e.g., the characteristic time of the Che approximation), or can be provided as very rough guesses that are iteratively corrected by a feedback loop to ensure convergence.
Through a thorough validation campaign, we show that the
synergy between modeling and Monte Carlo approaches has
noticeable potential, both in accurately predicting steady-state
performance metrics within 2% accuracy and in significantly
scaling down simulation time and memory requirements of large
scale scenarios by up to two orders of magnitude. Furthermore, we
demonstrate the flexibility and efficiency of our hybrid approach
in simplifying fine-grained analyses of dynamic scenarios.
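The feedback idea can be illustrated with a toy calibration loop (an illustration of the principle, not the authors' exact procedure): emulate a cache of capacity C with a single TTL, measure the average number of non-expired objects under synthetic Poisson requests, and correct the TTL until that average matches C; the fixed point is the characteristic time of the Che approximation. The popularity law, catalog size and parameters below are arbitrary.

```python
# Toy feedback calibration of a TTL that emulates a cache of capacity C
# (an illustration of the hybrid idea, not the authors' exact procedure).

import math, random

def measure_occupancy(ttl, rates, horizon=4000, seed=1):
    """Average number of objects requested less than `ttl` ago, under
    independent Poisson request processes (rate rates[i] for object i)."""
    rng = random.Random(seed)
    total = sum(rates)
    last_seen, occ_sum, samples, t = {}, 0.0, 0, 0.0
    for k in range(horizon):
        t += rng.expovariate(total)
        obj = rng.choices(range(len(rates)), weights=rates)[0]
        last_seen[obj] = t
        if k >= horizon // 4:                  # skip the warm-up phase
            occ_sum += sum(1 for s in last_seen.values() if t - s < ttl)
            samples += 1
    return occ_sum / samples

def calibrate_ttl(capacity, rates, ttl=1.0, n_rounds=12):
    """Feedback loop: correct the TTL until the emulated cache holds, on
    average, `capacity` objects; the fixed point is the characteristic
    time of the Che approximation."""
    for _ in range(n_rounds):
        ttl *= capacity / max(measure_occupancy(ttl, rates), 1e-9)
    return ttl

if __name__ == "__main__":
    rates = [1.0 / k for k in range(1, 101)]   # Zipf(1) popularity, 100 objects
    t_c = calibrate_ttl(capacity=30, rates=rates)
    # Cross-check against the Che fixed point: sum(1 - exp(-r * T_C)) = C.
    print(round(t_c, 2), round(sum(1 - math.exp(-r * t_c) for r in rates), 1))
```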
Graphs are used to represent a plethora of phenomena, from the Web and social networks, to biological pathways, to semantic knowledge bases. Arguably the most interesting and important questions one can ask about graphs have to do with their evolution, such as which Web pages are showing an increasing popularity trend; or how does influence propagate in social networks, how does knowledge evolve, etc. In this talk I will present Portal, a declarative language for efficient querying and exploratory analysis of evolving graphs. I will describe an implementation of Portal in scope of Apache Spark, an open-source distributed data processing framework, and will demonstrate that careful engineering can lead to good performance. Finally, I will describe our work on a visual query composer for Portal.
We describe in this talk the new possibilities offered by virtualization techniques in the design of 5G networks. More precisely, we introduce a convergent gateway realizing fixed/mobile convergence. Such a functional element is based on modules instantiated on virtual machines (or containers), each module implementing specific tasks for convergence. A convergent gateway can be instantiated by a network operating system (GlobalOS) in charge of managing the network. One important component of GlobalOS is the orchestration of resources. We introduce some algorithms for resource orchestration in the framework of GlobalOS. We finally focus on the specific case of virtualized base band unit (BBU) functions.
Resource poverty is a fundamental constraint that severely limits the types of applications that can be run on mobile devices. This constraint is not just a temporary limitation of current technology but is intrinsic to mobility. In this talk, I will put forth a vision of the cloud that breaks free of this fundamental constraint. In this vision, mobile users seamlessly utilize nearby computers to obtain the resource benefits of cloud computing without incurring WAN delays and jitter. Rather than relying on a distant cloud, the client connects to and uses a nearby micro datacenter (mDC). Crisp interactive response for immersive applications that augment human cognition is then easier to achieve. While much engineering and research remains, the concepts and ideas introduced open the door to a new world of disaggregated computing in which seamless cognitive assistance for users can be delivered at any time and any place.
Victor Bahl is a Principal Researcher and the Director of the Mobility & Networking Research (MNR) group in Microsoft Research. MNR’s mission is to invent & research technologies that make Microsoft’s networks, services, and devices indispensable to the world. In addition to shepherding brilliant researchers, Victor helps shape Microsoft’s long-term vision related to networking technologies by advising the CEO and senior executive team, and by executing on this vision through research, technology transfers, and associated policy engagements with governments and industries around the world. He and his group have had far-reaching impact on the research community, government policy, and Microsoft products through numerous significant publications and technology transfers. His personal research spans a variety of topics in mobile & cloud computing, wireless systems & services, and datacenter & enterprise networking. Over his career he has built many highly cited seminal systems, published prolifically in top conferences and journals, authored 115 patents, given over three dozen keynote talks, won numerous prestigious awards and honors including ACM SIGMOBILE’s Lifetime Achievement Award and IEEE Outstanding Leadership and Professional Service Award. Victor received his PhD from the University of Massachusetts Amherst in 1997. He is a Fellow of the ACM, IEEE and AAAS.
Content caching is a fundamental building block of the Internet. Caches
are widely deployed at network edges to improve performance for
end-users, and to reduce load on web servers and the backbone network.
Considering mobile 3G/4G networks, however, the bottleneck is at the
access link, where bandwidth is shared among all mobile terminals. As
such, per-user capacity cannot grow to cope with the traffic demand.
Unfortunately, caching policies would not reduce the load on the
wireless link which would have to carry multiple copies of the same
object that is being downloaded by multiple mobile terminals sharing the
same access link.
In this paper we investigate whether it is worth pushing the caching paradigm
even further. We hypothesize a system in which mobile terminals
implement a local cache, where popular content can be pushed/pre-staged.
This exploits the peculiar broadcast capability of the wireless channels
to replicate content "for free" on all terminals, saving the cost of
transmitting multiple copies of those popular objects. Relying on a
large data set collected from a European mobile carrier, we analyse the
content popularity characteristics of mobile traffic, and quantify the
benefit that the push-to-mobile system would produce. We found that
content pre-staging, by proactively and periodically broadcasting
"bundles" of popular objects to devices, allows us both to i) greatly
improve users’ performance and ii) reduce by up to 20% (40%) the downloaded
volume (number of requests) in optimistic scenarios with a bundle of 100
MB. However, some technical constraints and content characteristics
could question the actual gain such a system would reach in practice.
We evaluate the performance perceived
by end-users connected to a backhaul link that
aggregates the traffic of multiple access areas. We model, at flow
level, the way a finite population of users with heterogeneous
access rates and traffic demands shares the capacity of this
common backhaul link. We then evaluate several practically
interesting use cases, focusing particularly on the performance of
users subscribing to recent FTTH offers in which the user access
rates are in the same order of magnitude as the backhaul link
capacity. We show that, despite such high access rates, reasonable
performance can be achieved as long as the total offered traffic is
well below the backhaul link capacity. The obtained performance
results are used to derive simple dimensioning guidelines.
Many open computing systems, for example grid and cloud computing, and ad hoc networks,
such as sensor or vehicular networks, face a similar problem: how to collectivise resources,
and distribute them fairly, in the absence of a centralized component. In this talk, we apply the
methodology of sociologically-inspired computing, in which the study of human (social) models
is formalised as the basis of engineering solutions to technical problems. In this case, we
present formal models of Ostrom’s design principles for self-governing institutions and
Rescher’s theory of distributive justice, for defining executable specifications of electronic
institutions which support fair and sustainable resource allocation in open computing systems.
We will also discuss some ramifications of this research: in particular the implications of
unrestricted self-modification of mutable rules for the design of adaptive systems, and
the potential of such systems for reasoning about resource allocation in socio-technical
systems, where computational intelligence operates on behalf of (or in consort with)
human intelligence. This is the basis for a programme of research we call
computational justice: capturing some notions of ‘correctness’ in the outcomes
of algorithmic decision-making as a basis for self-governance in socio-technical systems.
Jeremy Pitt is Reader in Intelligent Systems in the Department of Electrical & Electronic Engineering at Imperial College London, where he is also Deputy Head of the Intelligent Systems & Networks Group. His research interests focus on developing formal models of social processes using computational logic, and their application to multi-agent systems, for example in agent societies, agent communication languages, and self-organising electronic institutions. He also has a strong interest in the social impact of technology, and has edited two recent books, This Pervasive Day (IC Press, 2012) and The Computer After Me (IC Press, 2014). He has been an investigator on more than 30 national and European research projects and has published more than 150 articles in journals and conferences. He is a Senior Member of the ACM, a Fellow of the BCS, and a Fellow of the IET; he is also an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems and an Associate Editor of IEEE Technology and Society Magazine.
The Internet heavily relies on Content Distribution Networks and
transparent caches to cope with the ever-increasing traffic demand of
users. Content, however, is essentially versatile: once published at a
given time, its popularity vanishes over time. All requests for a given
document are then concentrated between the publishing time and an
effective perishing time.
In this paper, we propose a new model for the arrival of content
requests, which takes into account the dynamical nature of the content
catalog. Based on two large traffic traces collected on the Orange
network, we use the semi-experimental method and determine invariants of
the content request process. This allows us to define a simple
mathematical model for content requests; by extending the so-called
Che approximation, we then compute the performance of an LRU cache fed
with such a request process, expressed by its hit ratio. We numerically
validate the good accuracy of our model by comparison to trace-based
simulation.
Joint work with Felipe Olmos, Alain Simonian and Yannick Carlinet
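For reference, the classical static-catalog Che approximation that the paper extends works as follows: find the characteristic time T_C solving sum_i (1 - exp(-lambda_i T_C)) = C, then each object's hit probability is 1 - exp(-lambda_i T_C). The sketch below uses an assumed Zipf popularity and does not capture the dynamic-catalog extension that is the paper's contribution.

```python
# Baseline (static-catalog) Che approximation for an LRU cache of size C
# under independent Poisson requests: solve for the characteristic time,
# then average the per-object hit probabilities weighted by request rate.

import math

def characteristic_time(rates, capacity, tol=1e-9):
    """Solve sum_i (1 - exp(-rates[i] * T)) = capacity for T by bisection."""
    occupancy = lambda t: sum(1 - math.exp(-r * t) for r in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < capacity:            # bracket the root
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if occupancy(mid) < capacity:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def lru_hit_ratio(rates, capacity):
    """Overall hit ratio of an LRU cache of size `capacity` under the
    independent reference model with a static catalog."""
    t_c = characteristic_time(rates, capacity)
    total = sum(rates)
    return sum(r * (1 - math.exp(-r * t_c)) for r in rates) / total

if __name__ == "__main__":
    # Assumed Zipf(0.8) popularity over a static catalog of 10,000 objects.
    rates = [1.0 / k ** 0.8 for k in range(1, 10001)]
    for c in (10, 100, 1000):
        print(c, round(lru_hit_ratio(rates, c), 3))
```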
The video part of Internet traffic is booming due to the ever increasing
availability of video content from sites
such as YouTube (video sharing), Netflix (movie on demand), Livestream
(live streaming).
Adaptive video streaming systems dynamically change the video content
bitrate and resolution to match the
network available bandwidth and user screen resolution.
The mainstream approach of adaptive streaming systems, employed by
Netflix and YouTube, is to
use two controllers: one is the stream-switching algorithm that selects the
video level, the other regulates the
playout buffer length to a set-point.
In this talk we show that such an approach is affected by a fundamental
drawback: the video flows are not able to get the
best possible video quality when competing with TCP flows. We then design an
adaptive video streaming system which fixes
this issue by only using one controller. An experimental evaluation shows
the benefits of the proposed control system.
In this talk we present new results on stochastic bandit problems with a continuum set of arms and where the expected reward is a continuous and unimodal function of the arm. Our setting for instance includes the problem considered in (Cope, 2009) and (Yu, 2011). No assumption beyond unimodality is made regarding the smoothness and the structure of the expected reward function. Our first result is an impossibility result: without knowledge of the smoothness of the reward function, there exists no stochastic equivalent to Kiefer's golden section search (Kiefer, 1953). Further, we propose Stochastic Pentachotomy (SP), an algorithm for which we derive finite-time regret upper bounds. In particular, we show that, for any expected reward function μ that behaves as μ(x) = μ(x⋆) - C|x - x⋆|^ξ locally around its maximizer x⋆ for some ξ, C > 0, the SP algorithm is order-optimal, i.e., its regret scales as O(√(T log T)) when the time horizon T grows large. This regret scaling is achieved without knowledge of ξ and C. Our algorithm is based on asymptotically optimal sequential statistical tests used to successively prune an interval that contains the best arm with high probability. To our knowledge, the SP algorithm constitutes the first sequential arm selection rule that achieves a regret scaling as O(√T) up to a logarithmic factor for non-smooth expected reward functions, as well as for smooth functions with unknown smoothness.
This is joint work with Alexandre Proutiere, available at: http://arxiv.org/abs/1406.7447
Bio: Richard Combes received the Engineering degree from Telecom ParisTech (2008), the Master's degree in mathematics from the University of Paris VII (2009) and the Ph.D. degree in mathematics from the University of Paris VI (2012). He was a visiting scientist at INRIA (2012) and a post-doc at KTH (2013). He is currently an Assistant Professor at Supelec. He received the best paper award at CNSM 2011. His current research interests include communication networks, stochastic systems and their control, and machine learning.
In this work we analyze the estimation of the spatial reuse of a wireless
network equipped with
a Medium Access Control (MAC) mechanism based on 802.11’s Distributed
Coordination Function (DCF). Moreover we assume that the RTS/CTS handshake
is enabled in order to avoid the hidden node problem.
Our analysis is based on the definition of a Parking Process over a random
graph (the interference graph of the network), for which only mild
assumptions on the distribution of the nodes’ degrees are made. We then prove a
large-graph limit as the number of nodes goes to infinity, which
results in a system of differential equations representing the evolution
of the nodes’ degrees. From the solution of this system we can calculate the
spatial reuse (or equivalently the jamming constant associated to the
parking process).
Joint work with Paola Bermolen, Matthieu Jonckheere, Federico Larroca and
Pascal Moyal
Simplicial complex representation gives a mathematical description of
the topology of a wireless sensor network, i.e., its connectivity and
coverage. In these networks, sensors are randomly deployed in bulk in
order to ensure perfect connectivity and coverage. We propose an
algorithm to discover which sensors are to be switched off, without
modification of the topology, in order to reduce energy consumption.
Our reduction algorithm can be applied to any type of simplicial
complex and reaches an optimal solution.
In a second part, we consider a damaged wireless network, presenting coverage holes or disconnected components, and propose a disaster recovery algorithm that repairs the network. It provides the list of locations where new nodes should be placed to patch the coverage holes and mend the disconnected components. To do this, we first consider the simplicial complex representation of the network; the algorithm then adds supplementary nodes in excessive number, and afterwards runs the reduction algorithm in order to reach an unimprovable result. We use a determinantal point process for the addition of nodes: the Ginibre point process, which has inherent repulsion between vertices.
Today, Internet users are mostly interested in consuming content, information and services independently of the servers where these are located.
Information Centric Networking (ICN) is a recent networking paradigm which proposes to enrich the network layer with name-based forwarding, a novel communication primitive centered around content identifiers rather than their location.
Although several name-based forwarding strategies have been proposed, few works have attempted to build a content router. Our work fills this gap by designing and prototyping Caesar, a content router for high-speed forwarding on content names.
Caesar has several innovative features: (i) a longest prefix matching algorithm that efficiently supports content names, (ii) an incremental design which allows for easy integration with existing network equipment, (iii) support for packet processing offload to graphics processing units (GPUs), and (iv) a forwarding engine distributed across multiple line cards. We build Caesar as a small-scale router, and show that it sustains up to 10 Gbps links and over 10 million content prefixes.
In addition, GPU offload further speeds up the forwarding rate by an order of magnitude, while distributed forwarding augments the amount of content prefixes served linearly with the number of line cards, with a small penalty in terms of packet processing latency.
The full design of a content router also includes a Pending Interest Table and a Content Store. The former is a data structure used to store pending requests that have not yet been served, while the latter is a packet-level cache used to temporarily store forwarded data to serve future requests.
We will discuss current and future work on the integration of Pending Interest Table and Content Store in our content router.
The Web has become a virtual place where people and software interact in hybrid communities. These large-scale interactions create many problems, in particular that of reconciling the formal semantics of computer science (e.g. logics, ontologies, typing systems, etc.), on which the Web architecture is built, with the soft semantics of people (e.g. posts, tags, status, etc.), on which the Web content is built. The Wimmics research lab studies methods, models and algorithms to bridge formal semantics and social semantics on the Web. We address this problem by focusing on typed graph formalisms to model and capture these different pieces of knowledge, and on hybrid operators to process them jointly. This talk will present some of our results and their applications in several projects.
Cellular operators count on the potential of offloading techniques to relieve their overloaded data channels. Beyond standard access point-based offloading strategies, a promising alternative is to exploit opportunistic direct communication links between mobile devices. Nevertheless, achieving efficient device-to-device offloading is challenging, as communication opportunities are, by nature, dependent on individual mobility patterns. We propose, design, and evaluate DROiD (Derivative Re-injection to Offload Data), an original method to finely control the distribution of popular contents throughout a mobile network. The idea is to use the infrastructure resources as seldom as possible. To this end, DROiD injects copies through the infrastructure only when needed: (i) at the beginning, in order to trigger the dissemination, (ii) if the evolution of the opportunistic dissemination is below some expected pace, and (iii) when the delivery delay is about to expire, in order to guarantee 100% diffusion. Our strategy is particularly effective in highly dynamic scenarios, where the sudden creation and dissolution of clusters of mobile nodes prevent content from diffusing properly. We assess the performance of DROiD by simulating a traffic information service on a realistic large-scale vehicular dataset composed of more than 10,000 nodes.
DROiD substantially outperforms other offloading strategies, saving more than 50% of the infrastructure traffic even in the case of tight delivery delay constraints. DROiD allows terminal-to-terminal offloading of data with very short maximum reception delay, in the order of minutes, which is a realistic bound for cellular user acceptance.
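The three re-injection triggers described above can be summarised in a few lines of decision logic. The sketch below is a hypothetical illustration: the function name, the linear "expected pace" model and the numeric margins are assumptions of mine, not DROiD's actual implementation.

```python
def should_reinject(t, deadline, coverage, target_coverage=1.0,
                    dissemination_started=False, pace_margin=0.1,
                    panic_window=60.0):
    """Decide whether the infrastructure should inject new copies.

    t, deadline and panic_window are in seconds; coverage is the
    fraction of interested nodes already reached.
    """
    # (i) bootstrap: inject to trigger the opportunistic dissemination
    if not dissemination_started:
        return True
    # (ii) dissemination slower than the expected (here: linear) pace
    expected = target_coverage * min(t / deadline, 1.0)
    if coverage + pace_margin < expected:
        return True
    # (iii) deadline about to expire: guarantee full diffusion
    if deadline - t < panic_window and coverage < target_coverage:
        return True
    return False
```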
Web content coming from outside the ISP is today skyrocketing, causing
significant additional infrastructure costs to network operators. The
reduced marginal revenues left to ISPs, whose business is almost
entirely based on declining flat rate subscriptions, call for
significant innovation within the network infrastructure, to support new
service delivery.
In this paper, we suggest the use of micro CDNs in ISP access and back-haul networks to reduce redundant web traffic within the ISP infrastructure while improving users' QoS.
By micro CDN we refer to a content delivery system composed of (i) a high-speed caching substrate, (ii) a content-based routing protocol and (iii) a set of data transfer mechanisms made available by content-centric networking.
The contribution of this paper is twofold. First, we extensively analyze
more than one month of web traffic via continuous monitoring between the
access and back-haul network of Orange in France. Second, we
characterize key properties of monitored traffic, such as content
popularity and request cacheability, to infer potential traffic
reduction enabled by the introduction of micro CDNs.
Based on these findings, we then perform micro CDN dimensioning in terms
of memory requirements and provide guidelines on design choices.
We propose a unified methodology to analyse the
performance of caches (both isolated and interconnected), by
extending and generalizing a decoupling technique originally known
as Che’s approximation, which provides very accurate results at low
computational cost. We consider several caching policies, taking into
account the effects of temporal locality. In the case of interconnected
caches, our approach allows us to do better than the Poisson
approximation commonly adopted in prior work. Our results,
validated against simulations and trace-driven experiments, provide
interesting insights into the performance of caching systems. Joint work with Valentina Martina and Michele Garetto, appeared at IEEE INFOCOM’14
The adoption of Service Oriented Architecture (SOA) and semantic Web technologies in the Internet of Things (IoT) makes it possible to enhance the interoperability of devices by abstracting their capabilities as services, and to enrich their descriptions with machine-interpretable semantics. This facilitates the discovery and composition of IoT services. The increasing number of IoT services, their dynamicity and their geographical distribution require mechanisms that enable scalable and efficient discovery. We propose, in this paper, a semantic-based IoT service discovery system that supports and adapts to the dynamicity of IoT services. The discovery is distributed over a hierarchy of semantic gateways. Within a semantic gateway, we implement mechanisms to dynamically organize its content over time, in order to minimize the discovery cost. Results show that our approach maintains scalable and efficient discovery and limits the number of updates sent to neighboring gateways.
We formulate optimization problems to study how data centers might modulate their power demands for cost-effective operation, taking into account various complexities exhibited by real-world electricity pricing schemes. For computational tractability reasons, we work with a fluid model for power demands, which we imagine can be modulated using two abstract knobs of demand dropping and demand delaying (each with its associated penalties or costs). We consider both stochastically known and completely unknown inputs, which are likely to capture different data center scenarios. Using empirical evaluation with both real-world and synthetic power demands and real-world prices, we demonstrate the efficacy of our techniques. Work in collaboration with B. Urgaonkar, G. Kesidis, U. Shanbhag, Q. Wang, A. Sivasubramaniam
Organic Computing emerged almost 10 years ago as a challenging vision for future information processing systems, based on the insight that already in the near future we will be surrounded by large collections of autonomous systems equipped with sensors and actuators to be aware of their environment, to communicate freely, and to organize themselves. The presence of networks of intelligent systems in our environment opens fascinating application areas but, at the same time, bears the problem of their controllability. Hence, we have to construct these systems - which we increasingly depend on - as robust, safe, flexible, and trustworthy as possible. In order to achieve these goals, our technical systems will have to act more independently, flexibly, and autonomously, i.e. they will have to exhibit life-like properties. We call those systems organic. Hence, an Organic Computing System is a technical system which adapts dynamically to the current conditions of its environment. It will be self-organizing, self-configuring, self-healing, self-protecting, self-explaining, and context-aware.
First steps towards adaptive and self-organizing computer systems have already been undertaken. Adaptivity, reconfigurability, emergence of new properties, and self-organisation are topics in a variety of research projects. From 2005 until 2011 the German Science Foundation (DFG) has funded a priority research program on Organic Computing. It has addressed fundamental challenges in the design of complex computing systems; its objective was a deeper understanding of emergent global behaviour in self-organising systems and the design of specific concepts and tools to support the construction of Organic Computing systems for technical applications.
This presentation will briefly recapitulate the basic motivation for Organic Computing, explain key concepts, and illustrate these concepts with some project examples. We will then look into possible future directions of OC research concentrating on (1) Online optimization and (2) Social Organic Computing.
Christian Muller-Schloer studied EE at the Technical University of Munich and received the Diploma degree in 1975, the Ph. D. in semiconductor physics in 1977. In the same year he joined Siemens Corporate Technology where he worked in a variety of research fields, among them CAD for communication systems, cryptography, simulation accelerators and RISC architectures.
From 1980 until 1982 he was a member of the Siemens research labs in Princeton, NJ, U.S.A. In 1991 he was appointed full professor of computer architecture and operating systems at the University of Hannover. His institute, later renamed Institute of Systems Engineering - System and Computer Architecture, engaged in systems-level research such as system design and simulation, embedded systems, virtual prototyping, educational technology and, since 2001, adaptive and self-organizing systems.
He is one of the founders of the German Organic Computing initiative, which was launched in 2003 with support of GI and itg, the two key professional societies for computer science in Germany. In 2005 he co-initiated the Special Priority Programme on Organic Computing of the German Research Foundation (DFG).
He is author of more than 170 papers and several books.
Present projects, predominantly in the area of Organic Computing, deal, among others, with topics like quantitative emergence and self-organization, organic traffic control, self-organizing trusted communities, and online optimization for multi-arm robots.
Applications hosted within the datacenter often rely on distributed
services such as Zookeeper, Chubby, and Spanner for fault-tolerant
storage, distributed coordination, and transaction support. These
systems provide consistency and availability in the presence of
limited failures by relying on sophisticated distributed algorithms
such as state machine replication. Unfortunately, these distributed
algorithms are expensive, accrue additional latency, suffer from
bottlenecks, and are difficult to optimize. This state of affairs is
due to the fact that distributed systems are traditionally designed
independently from the underlying network and supporting protocols,
making worst-case assumptions (e.g., complete asynchrony) about its
behavior.
While this is reasonable for wide-area networks, many distributed
applications are however deployed in datacenters, where the network is
more reliable, predictable, and extensible. Our position is that
codesigning networks and distributed systems in order to operate under
an "approximately synchronous" execution model can have substantial
benefits in datacenter settings. We will illustrate this using two
case studies in this talk: Speculative Paxos – a distributed
coordination service for datacenters that relies on the network to
exhibit approximately synchronous behavior in the normal case, while
still remaining correct if the network exhibits weaker properties, and
Optimistic Replicated Two-Phase Commit (OR-2PC) – a new distributed
transaction protocol that uses a new optimistic ordering technique,
based on loosely synchronized clocks in order to improve both
throughput and latency.
Security and privacy are fundamental concerns in today's world. These concerns have become particularly prominent with Snowden's revelations of the presence of the NSA in our daily lives. These revelations have shown that traditional cryptographic techniques do not provide the protection that was expected. This has called into question how security and privacy can be provided. In this talk we investigate how randomness in the environment can be used to provide everlasting security and undetectability (privacy) in wireless communications. In the first part of the talk we describe a practical way to harness this randomness to provide and improve the security of wireless communications. We introduce the notion of "dynamic secrets", information shared by two parties, Alice and Bob, engaged in communication and not available to an adversary, Eve. The basic idea is to dynamically generate a series of secrets from randomness present in the wireless environment. These dynamic secrets exhibit interesting security properties and offer a robust alternative to cryptographic security protocols. We present a simple algorithm for generating these secrets and using them to ensure secrecy.
In some situations, Alice and Bob may want not only to secure their communications but also to keep them private. In the second part of our talk we focus on the use of randomness to conceal the communications. Here the challenge is for Alice to communicate with Bob without an adversary, Willie the warden, ever realizing that the communication is taking place. Specifically, we establish that Alice can send O(t) bits (and no more) to Bob in time t over a variety of wireless and optical channels. Moreover, we report experimental results that corroborate the theory.
For many applications in computer vision, mobile robotics, cognitive systems, graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. This is a challenging task given that no one fully understands how the human visual system works. Modeling visual attention, particularly stimulus-driven, saliency-based attention from ground truth eye tracking data, has been a very active research area over the past 25 years. Many different models of attention and large eye tracking data sets are now available and should help future research. This presentation aims to provide a global view of the current state-of-the-art in visual attention modeling as well as some applications.
Nicolas Riche holds an Electrical Engineering degree from the University of Mons, Engineering Faculty (June 2010). His master's thesis was carried out at the University of Montreal (UdM) and dealt with the automatic analysis of articulatory parameters for the production of piano timbre. In 2011, he obtained an FRIA grant to pursue a PhD thesis on the implementation of a multimodal model of attention for real-time applications.
The increasing mobile data demand and the proliferation of advanced handheld devices place the user-provided networks (UPNs) at a conspicuous position in next-generation network architectures. There has been growing consensus that UPNs can play a crucial role both in self-organizing and in operator-controlled wireless networks, as they enable the exploitation of the diverse communication needs and resources of different users. Today, many innovative startups such as Open Garden, M-87, and Karma, as well as major network operators such as Deutsche Telekom, Telefonica, Comcast, and China Mobile Hong Kong, propose or even implement such models. However, in UPNs both the availability and the demand for Internet access rely on user-owned network equipment. Therefore, the success of this type of networks depends on the participation of users. In this talk, we analyze the design challenges of incentive mechanisms for encouraging user engagement in user-provided networks. Motivated by recently launched business models, we focus on mobile UPNs where the energy consumption and data usage costs are critical, and have a large impact on users’ decisions both for requesting and offering UPN services. We outline two novel incentive schemes that have been recently proposed for such UPNs, and discuss the open issues that must be further addressed.
George Iosifidis holds an Engineering Degree in Telecommunications (Greek Air Force Academy, 2000), a M.Sc. and a Ph.D. degree in communication networks (University of Thessaly, 2007 and 2012). Currently, he is a post-doc researcher at CERTH/ITI and University of Thessaly, Volos. His research interests lie at the nexus of network optimization and network economics, with emphasis on spectrum economics, autonomous networks, small cell networks, user-centric networks and mobile data offloading. More information can be found at www.georgeiosifidis.net
We assume a space-time Poisson process of call arrivals on the infinite plane, independently marked by data volumes and served by a cellular network modeled by an infinite ergodic point process of base stations. Each point of this point process represents the location of a base station that applies a processor sharing policy to serve users arriving in its vicinity, modeled by the Voronoi cell, possibly perturbed by some random signal propagation effects.
Using ergodic arguments and the Palm theoretic formalism, we define a global mean user throughput in the cellular network and prove that it is equal to the ratio of mean traffic demand to the mean number of users in the steady state of the "typical cell" of the network. Here, both means account for double averaging: over time and network geometry, and can be related to the per-surface traffic demand, base-station density and the spatial distribution of the signal-to-interference-and-noise ratio. This latter accounts for network irregularities, shadowing and cell dependence via some cell-load equations.
We validate our approach comparing analytical and simulation results for Poisson network model to real-network measurements. Little’s law allows expressing the mean user throughput in any region of the network as the ratio of the mean traffic demand to the steady-state mean number of users in this region. Corresponding statistics are usually collected in operational networks for each cell.
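In symbols, the definition above reduces to the following ratio, where ρ̄ denotes the mean traffic demand and N̄ the steady-state mean number of users in the typical cell, both averaged over time and over the network geometry (the notation below is mine, not the authors'):

```latex
% Global mean user throughput via Little's law (space-time averages)
\[
  \bar{r} \;=\; \frac{\bar{\rho}}{\bar{N}}
  \qquad\text{(mean traffic demand over mean number of users in steady state).}
\]
```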
Two "ALOHA-type" models are known: (a) with a finite number of queues, where the messages at the head of each queue are potentially "active", and (b) without queues, where all messages are "active". I'll talk about model (b), introduce a number of new protocols and conditions for their stability, and discuss some open problems.
This talk asks how we can better design urban environments, leveraging networked technologies to facilitate rich engagement by their inhabitants. It discusses the recent history of Smart City design as a top-down application, the emergence of Civic Hackers, and the effect of ubiquitous computing on Information Communication Technologies.
Dr. Beth Coleman is director of the City as Platform Lab (Games Institute) and co-director of the Critical Media Lab at the University of Waterloo, Ontario, Canada (http://www.cityasplatform.org). She is a Faculty Fellow at the Berkman Center for Internet & Society, Harvard University. Coleman works with new technology and art to create transmedia forms of engagement. Her research addresses issues of network society, subjectivity, and the contemporary city, as well as philosophy of technology, critical race studies, and media design. She is the co-founder of SoundLab Cultural Alchemy, an internationally acclaimed multimedia performance platform. As an artist, she has a history of international exhibition, including venues such as the Whitney Museum of American Art, the New Museum of Contemporary Art, and the Musee d'Art moderne Paris. She has worked with research consortiums that involve academic, industry and arts collaborators. She is a founding member of the Microsoft Research Fellow Social Media Collective and an expert consultant for the European Union Digital Futures Initiative. Her work has been presented as part of the TEDx East conference, on National Public Radio, and reviewed in The Guardian (UK) and Washington Post newspapers. She works internationally with collaborators in Africa, Europe, and Asia. Her book Hello Avatar is published by the MIT Press and was recently translated into Turkish.
Howard Goldkrand is an artist and innovations director who works on transmedia design, immersive engagement, and networked media strategy across industries. He is Head of Innovation Design for the global digital media creative agency SapientNitro (http://www.sapient.com/en-us/sapientnitro.html). In his recent work history, Goldkrand has been executive producer of a future-of-storytelling project for Conde Nast Ideactive. He was Innovations Director and Cultural Engineer at Modernista!, where he led the team for the Alternate Reality Game for the Dexter TV show (Showtime), for which he won a Webby award and best marketing piece at ComicCon, and was nominated for an Emmy Award. He also led the work on TOMS with the launch of their eyewear product and continues to do innovation work for the company. From the perspective of strategy, creative, media, production and innovations, he has worked with clients that include GM, TIAA-CREF, Palm, Napster, Food Should Taste Good, Product(RED), TOMS, Nickelodeon, HTC, NIKE, Starbucks, SCION, and Samsung. He continues to work with Neverstop, a group out of Seattle founded by Alex Calderwood of Ace Hotel fame. As an artist, he began an experimental studio practice at Wesleyan University, creating conceptual work that spanned sculpture and performance. He is the co-director of SoundLab Cultural Alchemy, an experimental electronic music and art platform. He has had a long international art career, exhibiting at venues such as PS1 MOMA, the Chinati Foundation (Marfa, Texas), the Recollets (Paris), the Art Museum of Vancouver, Castello di Rivoli (Turin, Italy), and the WAAG Society Amsterdam. Goldkrand speaks internationally at workshops, panels, and conferences on art and culture.
Consider an irreducible continuous time Markov chain with a finite or a countably infinite number of states and admitting a unique stationary probability distribution. The relative entropy of the distribution of the chain at any time with respect to the stationary distribution is a monotonically decreasing function of time. It is interesting to ask if this function is convex. We discuss this question for finite Markov chains and for Jackson networks, which are a class of countable state Markov chains of interest in modeling networks of queues. (Joint work with Varun Jog.)
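As a quick numerical companion to this question, the sketch below builds a random irreducible continuous-time generator, computes the relative entropy D(p_t || π) along the trajectory, and inspects its first and second differences (monotonicity is guaranteed; convexity is exactly what the talk asks about). The chain and all numbers are illustrative assumptions, not examples from the talk.

```python
import numpy as np
from scipy.linalg import expm, null_space

def random_generator(n, seed=0):
    """Random irreducible CTMC generator matrix (rows sum to zero)."""
    rng = np.random.default_rng(seed)
    Q = rng.uniform(0.1, 1.0, size=(n, n))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def stationary(Q):
    pi = null_space(Q.T)[:, 0]
    return pi / pi.sum()

def relative_entropy(p, pi):
    return float(np.sum(p * np.log(p / pi)))

if __name__ == "__main__":
    n = 5
    Q = random_generator(n)
    pi = stationary(Q)
    p0 = np.zeros(n); p0[0] = 1.0          # start in state 0
    ts = np.linspace(0.0, 5.0, 200)
    H = []
    for t in ts:
        p = p0 @ expm(Q * t)
        p = np.clip(p, 1e-300, None)        # avoid log(0) at t = 0
        H.append(relative_entropy(p, pi))
    dH = np.diff(H)
    d2H = np.diff(dH)
    print("monotone decreasing:", bool(np.all(dH <= 1e-9)))
    print("second differences all >= 0 (convex on this run)?",
          bool(np.all(d2H >= -1e-9)))
```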
We consider a smart grid based power-line communications network (PCN) which serves stations that represent subscriber devices and/or power company employed sensor nodes. The network is managed by a control (gateway) station that is attached to a power line and acts to supervise and control the sharing of the PCN medium by stations attached to the line. The gateway receives periodic updates from the stations, and collects data flows that are triggered by those stations that become active due to occurrence of exception events.
We design and study a cross-layer adaptive-rate networking mechanism that enables active stations to efficiently transport their message flows to the gateway. Our approach is based on the selection of certain stations to act as relay nodes (RNs), during certain transmission time phases. A scheduling protocol is derived, and its parameters optimized, in a manner that properly regulates transmissions by active stations and by relay nodes, in aiming to achieve high spatial reuse levels that induce high throughput rates while maintaining low message delays and low nodal energy consumption levels.
As a second topic, we will outline our latest results involving our new Vehicular Backbone Network (VBN) method for the design of a Vehicular Ad hoc Network (VANET). We derive an effective configuration of the network to attain high throughput and low delay performance behavior for multicasting message flows issued by a road side unit. The integration of such operation with cellular and WiFi systems is also invoked.
Izhak Rubin received the B.Sc. and M.Sc. from the Technion - Israel Institute of Technology, Haifa, Israel, and the Ph.D. degree from Princeton University, Princeton, NJ, all in Electrical Engineering. Since 1970, he has been on the faculty of the UCLA School of Engineering and Applied Science where he is currently a Distinguished Professor in the Electrical Engineering Department.
Dr. Rubin has had extensive research, publications, consulting, and industrial experience in the design and analysis of commercial and military computer communications and telecommunications systems and networks. Such design and analysis projects include network systems employed by the FAA for air traffic control, terrestrial and satellite based mobile wireless networks, high speed multimedia telecommunications networks, advanced cellular cross-layer operations, mobile backbone ad hoc wireless networks, mechanisms to assure network resiliency and automatic failover operations. At UCLA, he is leading a research group in the areas of telecommunications and computer communications networks. He serves as co-director of the UCLA Public Safety Network Systems Laboratory.
During 1979-1980, he served as Acting Chief Scientist of the Xerox Telecommunications Network. He served as co-chairman of the 1981 IEEE International Symposium on Information Theory; as program chairman of the 1984 NSF-UCLA workshop on Personal Communications; as program chairman for the 1987 IEEE INFOCOM conference; and as program co-chair of the IEEE 1993 workshop on Local and Metropolitan Area networks, as program co-chair of the 2002 first UCLA/ONR Symposium on Autonomous Intelligent Networked Systems (AINS), and has organized many other conferences and workshops. He has served as an editor of the IEEE Transactions on Communications, Wireless Networks journal, Optical Networks magazine, IEEE JSAC issue on MAC techniques, Communications Systems journal, Photonic Networks Communications journal, and has contributed chapters to texts and encyclopedia on telecommunications systems and networks. Dr. Rubin is a Life Fellow of IEEE.
Traffic engineering refers to the set of techniques used to dimension links and route traffic in IP networks. It relies most often on simplistic traffic models where packets arrive according to a Poisson process at each router, independently of the experienced delays and losses. In this talk, we revisit traffic engineering methods in the light of more realistic traffic models where data transfers are viewed as fluid flows sharing links in an elastic way, mimicking the congestion control algorithms of TCP. The corresponding queuing system is no longer a set of independent FIFO queues but a set of coupled processor-sharing queues. We show that minimizing the mean delay in this system tends to balance load more equally in the network.
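As a minimal numerical illustration of why minimizing mean delay favours balanced load, the sketch below assumes two parallel M/M/1 processor-sharing links fed by a split Poisson stream of flows (mean sojourn time 1/(μ − λ) per link) and sweeps the split to find the delay-minimizing point; the two-link setting and the numbers are mine, not the talk's model.

```python
def mean_delay(split, total_load=1.5, mu1=1.0, mu2=1.0):
    """Mean flow delay when a fraction `split` of the flow arrivals is
    routed to link 1 and the rest to link 2, each link behaving as an
    M/M/1 processor-sharing queue with mean sojourn time 1/(mu - lambda)."""
    l1, l2 = split * total_load, (1.0 - split) * total_load
    if l1 >= mu1 or l2 >= mu2:
        return float("inf")  # unstable split
    return split / (mu1 - l1) + (1.0 - split) / (mu2 - l2)

if __name__ == "__main__":
    splits = [i / 1000 for i in range(1001)]
    best = min(splits, key=mean_delay)
    # With equal capacities the optimum is the balanced split 0.5.
    print("delay-minimizing split:", best, "mean delay:", mean_delay(best))
```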
The Tandem Jackson Network is a system of n sites (queues) in series, where single particles (customers, jobs, packets, etc.) move, one by one and uni-directionally, from one site to the next until they leave the system. (Think, for example, of a production line, or of a line in a cafeteria.) When each site is an M/M/1 queue, the Tandem Jackson Network is famous for the product-form solution of the multi-dimensional Probability Generating Function of the site occupancies.
In contrast, the Asymmetric Inclusion Process (ASIP) is a series of n Markovian queues (sites), each with unbounded capacity, but with unlimited-size batch service. That is, when service is completed at site k, all particles present there move simultaneously to site k+1, and form a cluster with the particles present in the latter site.
We analyze the ASIP and show that its multi-dimensional Probability Generating Function (PGF) does not possess a product-form solution. We then present a method to calculate this PGF. We further show that homogeneous systems are optimal and derive limit laws (when the number of sites becomes large) for various variables (e.g. busy period, draining time, etc.).
Considering the occupancies of the sites (queue sizes), we show that occupation probabilities in the ASIP obey a discrete two-dimensional boundary value problem. Solving this problem, we find explicit expressions for the probability that site k is occupied by m particles (m = 0, 1, 2, ...). Catalan's numbers are shown to arise naturally in the context of these occupation probabilities.
This is joint work with Shlomi Reuveni and Iddo Eliazar.
Locator (Loc) / identifier (ID) Separation (LIS) was conceived to mitigate the explosion of the DFZ routing table. To be literally exact, every host would be given a Loc in addition to an ID. The talk asserts that provision of Locs in this fashion does not help combat the problem at all; it would only inherit the same fate as the current IP address.
What would really help is the use of two-tier Locs, with one set local to a site and the other globally relevant. That is to say, the use of local addressing (Loc) would be the only exit. Some LIS proposals like ILNP and LISP achieve this by their own tricky definition of Locs. The talk asserts that what they really implement is the adoption of local addressing, rather than LIS.
In LISP, the EID (endpoint ID) is also used for routing within a site, and thus is semantically overloaded in the same way as the IP address is. This also implies that semantic overloading was not the real problem from the start, and thus there would be no rationale for LIS. Semantic overloading (for both identification and location) of an address is rather an intrinsic nature of networking. What people really need is local addressing.
If time permits, the talk will also suggest the use of IS-IS to make the best use of the EID in LISP.
DY has been a professor at CNU (Chungnam National University) in South Korea, in the Department of Information Communications Engineering, since 1983. He got a bachelor's degree from SNU (Seoul National University) and an MS and a PhD degree, both from KAIST (Korea Advanced Institute of Science and Technology), before joining CNU. He has been working in various fields of networking, with a recent focus on the Future Internet. He has also been active in standardization and REN (research and educational networking) activities, and is the current Chair of ISO/IEC JTC 1/SC 6, where OSI once was made, and Chair of APAN (Asia-Pacific Advanced Network), a non-profit consortium similar to Internet 2 (US) and Terena (EU).
Janos Korner has been a Professor in Computer Science at "Sapienza" University of Rome since 1993. He obtained his Degree in Mathematics in 1970. From 1970 to 1992 he worked at the Mathematical Institute of the Hungarian Academy of Sciences. During these years he had two periods of leave: from 1981 to 1983 working at Bell Laboratories, Murray Hill, NJ, and for the academic year 1987-88 working at ENST, Paris, France. He is an Associate Editor of IEEE Trans. Information Theory. In 2010 he was elected to the Hungarian Academy of Sciences as an External Member. He received the Claude Shannon Award of the IEEE Information Theory Society for 2014.
Software-defined networking (SDN) is a novel paradigm that outsources the control of packet-forwarding switches to a set of software controllers. The most fundamental task of these controllers is the correct implementation of the network policy, i.e., the intended network behavior. In essence, such a policy specifies the rules by which packets must be forwarded across the network. We initiate the study of the SDN control plane as a distributed system.
We introduce a formal model describing the interaction between the data plane and a distributed control plane (consisting of a collection of fault-prone controllers). Then we formulate the problem of consistent composition of concurrent network policy updates. The composition is enabled via a transactional interface with all-or-nothing semantics, which allows us to reason about possibilities and impossibilities in controller synchronization.
I discuss the problem of detecting influential individuals in social
networks. Viral marketing campaigns seek to recruit a small number of
influential individuals who are able to cover the largest target
audience. Most of the literature assumes the network
is known, but usually there is no information about the topology. I
present models where topology is usually unknown at first, but gradually
it is discovered thanks to the local information provided by the
recruited members. I show preliminary results (simulations)
obtained by the analysis of algorithms based on different levels of
local information. (Joint work with Alonso Silva.)
Patricio Reyes holds a B.Sc. in Mathematics (2000) and a M.Sc. in Mathematical
Engineering (2003) from the University of Chile. He got his Ph.D. in
Computer Science (2009) working at
the French National Institute for Research in Computer Science and
Control, INRIA. He has worked as an associate researcher at CMM, the
Chilean Centre for Mathematical Modelling (research unit of CNRS,
France) and the Mine Planning Lab at the University of
Chile. Since 2012, he has been a postdoctoral visitor in the Department of
Statistics at Universidad Carlos III de Madrid.
His main research interests are: social network analysis; data-gathering
algorithms in networks; routing & scheduling in wireless mesh
networks; mine planning.
With the advent of Over-The-Top content providers
(OTTs), Internet Service Providers (ISPs) saw their portfolio of
services shrink to the low margin role of data transporters. In
order to counter this effect, some ISPs started to follow big OTTs
like Facebook and Google in trying to turn their data into a
valuable asset. In this paper, we explore the questions of what
meaningful information can be extracted from network data, and
what interesting insights it can provide. To this end, we tackle
the first challenge of detecting user-URLs, i.e., those links that
were clicked by users as opposed to those objects automatically
downloaded by browsers and applications. We devise algorithms
to pinpoint such URLs, and validate them on manually collected
ground truth traces. We then apply them on a three-day long
traffic trace spanning more than 19,000 residential users that
generated around 190 million HTTP transactions. We find that
only 1.6% of these observed URLs were actually clicked by users.
As a first application for our methods, we answer the question
of which platforms participate most in promoting the Internet
content. Surprisingly, we find that, despite its notoriety, only 11%
of the user URL visits are coming from Google Search.
While buffers on forwarding devices such as routers and gateways are required to handle bursty Internet traffic, overly large or badly sized buffers can interact with TCP in undesirable ways. This phenomenon is well understood and is often called bufferbloat. Although a number of previous studies have shown that buffering (particularly in home networks) can delay packets by as much as a few seconds in the worst case, there is less empirical evidence of tangible impacts on end-users. In this paper, we develop a modified algorithm that can detect bufferbloat at individual end-hosts based on passive observations of traffic. We then apply this algorithm to packet traces collected at 55 end-hosts and across different network environments. Our results show that 45 out of the 55 users we study experience bufferbloat at least once, and 40% of these users experience bufferbloat more than once per hour. In 90% of cases, buffering more than doubles RTTs, but RTTs during bufferbloat are rarely over one second. We also show that web and interactive applications, which are particularly sensitive to delay, are the applications most often affected by bufferbloat.
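The core idea, passively flagging periods in which queueing inflates RTTs well above a per-host baseline, can be sketched as follows; the window length, the doubling criterion and the function names are illustrative assumptions of mine, not the authors' algorithm.

```python
from collections import defaultdict

def detect_bufferbloat(rtt_samples, window=60.0, factor=2.0):
    """rtt_samples: iterable of (timestamp_s, host_id, rtt_ms).
    Returns, per host, the start times of windows whose median RTT
    exceeds `factor` times that host's baseline (minimum observed) RTT."""
    per_host = defaultdict(list)
    for ts, host, rtt in rtt_samples:
        per_host[host].append((ts, rtt))

    episodes = defaultdict(list)
    for host, samples in per_host.items():
        samples.sort()
        baseline = min(r for _, r in samples)      # proxy for propagation delay
        start = samples[0][0]
        bucket = []
        for ts, rtt in samples:
            if ts - start >= window:
                if bucket and sorted(bucket)[len(bucket) // 2] > factor * baseline:
                    episodes[host].append(start)   # bloated window
                start, bucket = ts, []
            bucket.append(rtt)
    return episodes
```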
Energy efficiency in mobile networks is gaining in importance from both environmental and business points of view. In particular, site sleep mode techniques are being introduced along with self organizing procedures that enable a dynamic activation/deactivation of sites without compromising Quality of Service (QoS). As the research is progressively broadening towards increased energy efficiency, green concepts are being introduced in standardization. For instance, 3GPP and ETSI are introducing energy efficiency enablers within their mobile networks standards. This talk gives an overview on the latest theoretical advances in optimal control for energy efficiency in mobile networks and the related standardization activities.
To help mitigate the critical stress on spectrum resources spurred by increasingly powerful and capable smart devices, a recent presidential advisory committee report and an FCC report recommend the use of spectrum sharing technologies. One such technology addressed in these reports is
cognitive radio (CR), in which a network entity is able to adapt intelligently to the environment through
observation, exploration and learning.
In this talk, we discuss the problem of coexistence, competition and fairness among autonomous cognitive Radios (CRs) in multiple potentially available channels that may be non-homogeneous in terms of primary user (PU) occupancy. Moreover, the real spectrum occupancy data collected at RWTH Aachen confirms that the spectrum resources are in general non-homogeneous in terms of PU occupancy. We present a model in which a CR that is able to adapt to the environment through observation and exploration is limited in two ways. First, as in practical CR networks, CRs have imperfect observations (such as due to sensing and channel errors) of their environment. Second, CRs have imperfect memory due to limitations in computational capabilities. For efficient opportunistic channel selection, we discuss efficient strategies and utilize the framework of repeated games (with imperfect observations and memory) to analyze their stability in the presence of selfish deviations.
19 March 2014, 14h @ LINCS salle de conseil - Adel Sohbi (Telecom ParisTech): Quantum Information
Information is something encoded in the properties of physical systems. Hence the study of information and computation is linked to the underlying physical processes. Quantum Information is the study of information encoded in the state of a quantum system. Quantum phenomena such as the superposition principle and entanglement give rise to new approaches and opportunities in all the sub-fields of Information Theory. The goal of this seminar, by introducing some basic concepts of quantum information, is to give the audience an accessible overview of the Quantum Information field.
Fingerprinting networking equipment has many potential applications and benefits in network management and security. More generally, it is useful for the understanding of network structures and their behaviors. In this paper, we describe a simple fingerprinting mechanism based on the initial TTL values used by routers to reply to various probing messages. We show that the main classes obtained using this simple mechanism are meaningful for distinguishing router platforms. Besides, it comes at a very low additional cost compared to standard active topology discovery measurements. As a proof of concept, we apply our method to gain more insight into the behavior of MPLS routers and to thus more accurately quantify their visible/invisible deployment.
This work has been published in IMC 2013.
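One common way to realize this primitive is to map an observed reply TTL back to the router's initial TTL by rounding up to the nearest common default, and to use the pair of initial TTLs inferred from different probe types as a signature. The sketch below is a hedged illustration of that idea, not the exact classification used in the paper.

```python
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def initial_ttl(observed_ttl):
    """Infer the initial TTL a router used, assuming fewer than 32 hops
    were traversed, by rounding up to the nearest common default."""
    for candidate in COMMON_INITIAL_TTLS:
        if observed_ttl <= candidate:
            return candidate
    return 255

def fingerprint(echo_reply_ttl, time_exceeded_ttl):
    """A router signature: the pair of inferred initial TTLs used for
    ICMP echo-reply and ICMP time-exceeded messages."""
    return (initial_ttl(echo_reply_ttl), initial_ttl(time_exceeded_ttl))

# Example: a reply TTL of 247 and a time-exceeded TTL of 59 suggest the
# router initializes the two message types with 255 and 64 respectively.
print(fingerprint(247, 59))   # -> (255, 64)
```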
This presentation focuses on the resiliency of the French Internet, studied from the
point of view of network interconnectivity. We define a model for representing the
BGP-level topology of the Internet which takes into account the business
relationships between ASes, and implement an algorithm to construct such a map from
publicly available routing information.
The portion of the Internet responsible for the connectivity of French ASes is then
established, and the risk of disconnection is assessed. To this end, we identify a
set of critical ASes whose suppression would disconnect other ASes from the
Internet.
Finally, we give some insight as to how the model could be expanded and the
difficulties this would introduce. We also give an example of how a BGP-level map of
the Internet can be used to actively monitor the network.
TBD
Energy-aware computing is as ubiquitous as ubiquitous computing itself. The user experience and up-time of hand-held devices are directly affected by the squandering of battery resources. We will focus on embedded systems, e.g., smartphones or tablets, and highlight the most prominent sources of power consumption in application processors. The temperature dependency of the power consumption is also discussed. This implies the importance of thermal models for passively cooled devices, used by DVFS controllers, process schedulers, and thermal management units. Passively cooled devices show substantially different thermal behavior compared to actively cooled devices. Our latest developments on this topic are briefly presented. Furthermore, common energy optimization techniques, both software and hardware based, are discussed. The referenced material covers practical, industrial and academic solutions to the energy optimization problem.
An optical slot switching node network called POADM (packet optical add-drop multiplexers) has previously been proposed as a flexible solution for metropolitan ring networks to carry data traffic with a sub-wavelength switching granularity and with good energy efficiency, enabled by optical transparency. In this paper, for the first time we propose several architectures for the electronic side of optical slot switching nodes to increase flexibility through the addition of electronic switches, working either at client packet granularity or at slot granularity; such electronic switches can be located at the transmitter side, the receiver side, or both sides of a node, thereby decreasing traffic latency at the expense of increased node cost and/or energy consumption. This paper focuses on the latency aspect. We investigate the impact of a timer that can be used to upper-bound the slot insertion time on the medium. We also propose, for the first time, a queuing model for an optical slot switching ring, and assess and compare the latency of these node architectures analytically using queuing theory, and with simulations.
Content-Centric Networking (CCN) is a promising framework for evolving the current network architecture,
advocating ubiquitous in-network caching to enhance content
delivery. Consequently, in CCN, each router has storage space
to cache frequently requested content. In this work, we focus
on the cache allocation problem: namely, how to distribute the
cache capacity across routers under a constrained total storage
budget for the network. We formulate this problem as a content
placement problem and obtain the exact optimal solution by a
two-step method. Through simulations, we use this algorithm to
investigate the factors that affect the optimal cache allocation
in CCN, such as the network topology and the popularity of
content. We find that a highly heterogeneous topology tends to
put most of the capacity over a few central nodes. On the other
hand, heterogeneous content popularity has the opposite effect, by
spreading capacity across far more nodes. Using our findings, we
make observations on how network operators could best deploy
CCN cache capacity.
The subject of active queue management is again highly topical with the recent creation of a new IETF working group. Regained interest has arisen notably from the observed failings of traditional approaches to congestion control in environments as diverse as the data center interconnect and the home network subject to "bufferbloat". In this context it is opportune to remake the case for implementing per-flow fair queueing as the standard packet scheduling algorithm in router buffers. Though flow fairness as the basis of congestion control was proposed by Nagle as early as 1985 and feasibility was demonstrated at least 15 years ago in Bell Labs work on PacketStar, the networking research community has largely remained focused on designing new TCP versions and AQM algorithms as if the FIFO buffer were an unavoidable technological constraint. We show how per-flow fairness realizes implicit service differentiation and greatly facilitates network engineering with the notion of "fair networks" appearing as a natural parallel to that of "loss networks". Accounting for the stochastic nature of traffic, simple fairness is generally seen to be preferable to weighted fairness or size-based priority scheduling while longest queue drop is likely the only AQM required.
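To make the scheduling argument concrete, here is a minimal per-flow fair queueing sketch with longest-queue-drop as its only AQM action. The simplified deficit round-robin scheme, byte quanta and shared buffer are my own simplifications for illustration; this is not the PacketStar implementation referred to above.

```python
from collections import defaultdict, deque

class FairQueueWithLQD:
    """Per-flow scheduler (simplified deficit round-robin) sharing one
    buffer. When the buffer is full, a packet is dropped from the
    longest queue (longest queue drop)."""

    def __init__(self, buffer_bytes=100_000, quantum=1500):
        self.queues = defaultdict(deque)   # flow_id -> deque of packet sizes
        self.deficits = defaultdict(int)
        self.buffer_bytes = buffer_bytes
        self.used = 0
        self.quantum = quantum

    def enqueue(self, flow_id, size):
        while self.used + size > self.buffer_bytes and self.queues:
            victim = max(self.queues, key=lambda f: sum(self.queues[f]))
            self.used -= self.queues[victim].popleft()    # longest queue drop
            if not self.queues[victim]:
                del self.queues[victim]
        self.queues[flow_id].append(size)
        self.used += size

    def dequeue(self):
        """Return (flow_id, size) of the next packet to send, or None."""
        for flow_id in list(self.queues):
            self.deficits[flow_id] += self.quantum
            q = self.queues[flow_id]
            if q and q[0] <= self.deficits[flow_id]:
                size = q.popleft()
                self.deficits[flow_id] -= size
                self.used -= size
                if not q:
                    del self.queues[flow_id]
                    del self.deficits[flow_id]
                return flow_id, size
        return None
```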
Jim Roberts very recently joined the French research institute IRT-SystemX to work on a project on Cloud computing and network architecture. He was previously with Inria from September 2009 after spending more than thirty years with France Telecom research labs. He received a degree in Mathematics from the University of Surrey in 1970 and a doctorate in computer science in 1987 from the University of Paris VI. His research is centered on the performance evaluation and design of traffic controls for communication networks. In a long career, he has published around 100 papers, chaired several program committees and been associate editor for a number of journals. He gave the Keynote at Infocom 2013. He is a Fellow of the Societe des Electriciens et Electroniciens (SEE) and recipient of the Arne Jensen lifetime achievement award from the International Teletraffic Congress (ITC).
Those who design, develop and deploy computer and networked systems have a vital interest in how these systems perform in "real-world" scenarios. But real-world conditions and data sets are often hard to come by: companies treat scenario data as a confidential asset and public institutions are reluctant to release data for fear of compromising individual privacy. This challenge is particularly acute in mobile wireless networks, where many have noted the need for realistic mobility and wireless network datasets. But assuring privacy is difficult. Several well-known examples have shown how anonymized data sets can be combined with other data to compromise individuals' personal privacy. The relatively recent model of differential privacy (DP) provides an alternate approach to measuring and controlling the disclosure of personal information, adding sufficient random "noise" (in a precisely quantifiable manner) to any output computed from a sensitive collection of data, so that a precise statistical privacy condition is met.
In this talk we outline ongoing research to produce trajectory traces, and results derived from trajectory traces, for public release from original "real-world" mobility traces (e.g., from our 802.11 campus network), while providing both well-defined differential privacy guarantees and demonstrably high accuracy when these publicly released data sets are used for a number of common network and protocol design and analysis tasks. We describe a DP technique using a constrained trajectory-prefix representation of the original data, using known network topology and human mobility constraints, to determine the underlying representation of the original data and judiciously allocate the random noise needed to satisfy DP constraints. We will also discuss alternative representations of mobility data that we conjecture will provide better accuracy for specific analysis tasks, and discuss the tradeoff between generality/specificity and accuracy.
This is a "work-in-progress" talk, so ideas are still being "baked" and comments/discussion are particularly welcome. This is joint research with Gerome Miklau and Jennie Steshenko at the University of Massachusetts Amherst
Jim Kurose is a Distinguished Professor of Computer Science at the University of Massachusetts Amherst. His research interests include network protocols and architecture, network measurement, sensor networks, multimedia communication, and modeling and performance evaluation. He has served as Editor-in-Chief of the IEEE Transactions on Communications and was the founding Editor-in-Chief of the IEEE/ACM Transactions on Networking. He has been active in the program committees for the IEEE Infocom, ACM SIGCOMM, ACM SIGMETRICS and ACM Internet Measurement conferences for a number of years, and has served as Technical Program Co-Chair for these conferences. He has received a number of research and teaching awards including the IEEE Infocom Award, the ACM Sigcomm Test of Time Award and the IEEE Taylor Booth Education Medal. With Keith Ross, he is the co-author of the textbook Computer Networking: a top down approach (6th edition). He was a visiting researcher at Technicolor's Paris Research Lab and at the LINCS (where he is also a member of the LINCS Scientific Advisory Board) in 2012.
Named Data Networking (NDN) is an emerging Information Centric Networking architecture based on hierarchical content names, in-network caching mechanisms, receiver-driven operations, and a content-level security schema. NDN networking primitives and routing are based on content names, and therefore efficient content discovery of permanent as well as temporarily available cached copies is a key problem to address. This paper examines current NDN approaches and proposes a fully distributed, content-driven, Bloom filter-based intra-domain routing algorithm (COBRA), which outperforms previous solutions in this area. COBRA creates routes based on paths previously used for content retrieval, and maintains routing information up-to-date without the need for extensive signaling between nodes. We evaluate COBRA using simulation and compare its performance with other established routing strategies over the European research network GEANT topology as an example of an ndnSIM core network. Our results illustrate that COBRA can significantly reduce overhead with respect to flood-based routing while ensuring hit distances of the same order as when using Dijkstra's algorithm.
Joint work with L.A. Grieco, G. Boggia, and K. Pentikousis, appearing in the Proceedings of the IEEE Consumer Communications & Networking Conference (CCNC), 2014.
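A hedged sketch of the underlying data structure: a Bloom filter per interface (face) summarizing the name prefixes reachable, or recently retrieved, through it. The names, hash choices and sizes below are illustrative; this is not the COBRA protocol itself.

```python
import hashlib

class BloomFilter:
    """A simple Bloom filter over content name prefixes."""

    def __init__(self, num_bits=8192, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# One filter per outgoing face; an Interest is forwarded on the faces whose
# filter (probably) contains one of the name's prefixes.
face_filter = BloomFilter()
face_filter.add("/lincs/seminars/2014")
print("/lincs/seminars/2014" in face_filter)   # True
print("/example/other" in face_filter)         # False (with high probability)
```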
Community detection consists in the identification of groups of similar
items within a population. In the context of online social networks, it is
a useful primitive for recommending either contacts or news items to
users. We will consider a particular generative probabilistic model for
the observations, namely the so-called stochastic block model, and
generalizations thereof. We will describe spectral transformations and
associated clustering schemes for partitioning objects into distinct
groups. Exploiting results on the spectrum of random graphs, we will
establish consistency of these approaches under suitable assumptions,
namely presence of a sufficiently strong signal in the observed data. We
will also discuss open questions on phase transitions for cluster
detectability in such models when the signal becomes weak. In particular
we will introduce a novel spectral method which provably allows detection
of communities down to a critical threshold, thereby settling an open
conjecture of Decelle, Krzakala, Moore and Zdeborová.
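As a baseline companion to the talk, the sketch below generates a two-community stochastic block model and recovers the communities with the standard spectral method (sign of the eigenvector of the second-largest adjacency eigenvalue). It does not implement the novel method that reaches the detectability threshold; all parameters are illustrative.

```python
import numpy as np

def sbm(n, p_in, p_out, seed=0):
    """Symmetric 2-community stochastic block model; returns (A, labels)."""
    rng = np.random.default_rng(seed)
    labels = np.array([0] * (n // 2) + [1] * (n - n // 2))
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = rng.random((n, n)) < probs
    A = np.triu(upper, 1)
    A = (A + A.T).astype(float)
    return A, labels

def spectral_partition(A):
    """Cluster by the sign of the eigenvector of the second-largest eigenvalue."""
    vals, vecs = np.linalg.eigh(A)
    v2 = vecs[:, -2]                 # eigenvalues are sorted in ascending order
    return (v2 > 0).astype(int)

if __name__ == "__main__":
    A, truth = sbm(n=600, p_in=0.08, p_out=0.02)
    guess = spectral_partition(A)
    # Account for label switching when comparing to the planted partition.
    accuracy = max(np.mean(guess == truth), np.mean(guess != truth))
    print("agreement with planted partition:", accuracy)
```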
Brigitte CARDINAEL
Head of Cooperative Research
France Telecom Orange
Brigitte Cardinael graduated from INT in 1987. She joined Matra Communication in 1987,
where she was involved for 6 years in research and development for GSM. Then she
joined France Telecom Research and Development (FTRD) in 1993 where she worked first
on mobile satellite component in context of IMT-2000. After France Telecom investment in
Globalstar she was the coordinator of Globalstar activities at FTR&D from 1995 to 1999. She
joined in 2002 the management team of the R&D Division on Mobile Services and Radio
Systems as Innovation and Strategy manager. She was also in charge, from 2001 until 2005, of coordinating Beyond 3G activities, which included France Telecom's involvement in FP6 (Winner, E2R, Ambient Network, Daidalos, SPICE, MAGNET and 4MORE) and in international bodies (WWRF, SDRF, IEEE, 3GPP, IETF, ...). Since January 2006 she has been head of cooperative research at France Telecom R&D. She was eMobility Vice Chairman in 2006 and 2007. She chairs the ETNO R&D group and the eMobility Testing Facilities WG.
Far and away the most energetic driver of modern Internet attacks is the
ability of attackers to financially profit from their assaults. Many of
these undertakings however require attackers to operate at a scale that
necessitates interacting with unknown parties - rendering their activities
vulnerable to *infiltration* by defenders. This talk will sketch research
that has leveraged such infiltration to striking effect.
Real-time communication over the Internet is of ever increasing importance due to the diffusion of portable devices, such as smart phones or tablets, with enough processing capacity to support video conferencing applications. The RTCWeb working group has been established with the goal of standardizing a set of protocols for inter-operable real-time communication among Web browsers. In this paper we focus on the Google Congestion Control (GCC), recently proposed in that WG, which is based on a loss-based algorithm run at the sender and a delay-based algorithm executed at the receiver. In a recent work we have shown that a TCP flow can starve a GCC flow. In this work we show that this issue is due to a threshold mechanism employed by the delay-based controller. By carrying out an extensive experimental evaluation in a controlled testbed, we have found that, when the threshold is small, the delay-based algorithm prevails over the loss-based algorithm, which contains queuing delays and losses. However, a small threshold may lead to starvation of the GCC flow when sharing the bottleneck with a loss-based TCP flow. This is joint work with G. Carlucci and S. Mascolo, and will be presented at the Packet Video Workshop, San Jose, CA, USA, December 2013.
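To fix intuition about the role of the threshold, here is a deliberately simplified caricature of a delay-based over-use detector (the function name, signal values and numbers are illustrative assumptions, not the actual GCC code): the controller backs off whenever the estimated queuing-delay gradient exceeds a threshold, so a small threshold makes it yield early to loss-based traffic sharing the same bottleneck.

```python
def overuse_signal(delay_gradient, threshold):
    """Toy delay-based detector: signal over-use when the estimated one-way
    delay gradient exceeds a threshold, under-use when it drops below minus
    the threshold (simplified caricature, not the real controller)."""
    if delay_gradient > threshold:
        return "overuse"      # decrease the sending rate
    if delay_gradient < -threshold:
        return "underuse"     # queues are draining, the rate can increase
    return "normal"

# With a small threshold the controller reacts to tiny queuing delays and
# backs off early, which keeps delay low but lets loss-based TCP starve it.
for gradient in (-0.4, 0.005, 0.02):
    print(gradient, overuse_signal(gradient, threshold=0.01))
```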
Neuro-Dynamic Programming encompasses techniques from both reinforcement learning and approximate dynamic programming. Feature selection refers to the choice of basis that defines the function class required in the application of these techniques. This talk reviews two popular approaches to neuro-dynamic programming, TD-learning and Q-learning. The main goal of this work is to demonstrate how insight from idealized models can be used as a guide for feature selection for these algorithms. Several approaches are surveyed, including fluid and diffusion models, and the application of idealized models arising from mean-field game approximations. The theory is illustrated with several examples. This talk is based on (i) D. Huang, W. Chen, P. Mehta, S. Meyn, and A. Surana. Feature selection for neuro-dynamic programming. In F. Lewis, editor, Reinforcement Learning and Approximate Dynamic Programming for Feedback Control. Wiley, 2011 and (ii) S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, Cambridge, 2007.
Sean Meyn received the B.A. degree in mathematics from the University of California, Los Angeles (UCLA), in 1982 and the Ph.D. degree in electrical engineering from McGill University, Canada, in 1987 (with Prof. P. Caines, McGill University). He is now Professor and Robert C. Pittman Eminent Scholar Chair in the Department of Electrical and Computer Engineering at the University of Florida, the director of the Laboratory for Cognition & Control, and director of the Florida Institute for Sustainable Energy. His academic research interests include theory and applications of decision and control, stochastic processes, and optimization. He has received many awards for his research on these topics, and is a fellow of the IEEE. He has held visiting positions at universities all over the world, including the Indian Institute of Science, Bangalore during 1997-1998 where he was a Fulbright Research Scholar. During his latest sabbatical during the 2006-2007 academic year he was a visiting professor at MIT and United Technologies Research Center (UTRC). His award-winning 1993 monograph with Richard Tweedie, Markov Chains and Stochastic Stability, has been cited thousands of times in journals from a range of fields. For the past ten years his applied research has focused on engineering, markets, and policy in energy systems. He regularly engages in industry, government, and academic panels on these topics, and hosts an annual workshop at the University of Florida.
The years following the United Nation’s World Summit on the Information Society (WSIS) have seen much technological change related to the Internet as well as significant innovation related to the discussion of Internet policy issues. Reporting on an institutional innovation in the Internet governance ecosystem, the research to be discussed examines the approximately eight-year-old Internet Governance Forum, highlights the growing roles of civil society including academics and technical experts, and tracks especially recent tensions among ecosystem actors. It also highlights the ecosystem’s evolution and the roles of Multistakeholderism. Finally, it analyzes recent developments and identifies future directions.
Nanette S. Levinson is Associate Professor of International Relations, School of International Service, American University and Academic Director of the SIS-Sciences-Po Exchange. She is a past Chair of the Global Internet Governance Academic Network (GigaNet) and Editor of the International Communication Section in Robert Denemark (Ed.), The International Studies Compendium Project (Oxford: Wiley-Blackwell). From 1988-2005 she served as Associate Dean of the School of International Service.
Recipient of awards including those for outstanding teaching, program development, academic affairs administration, multicultural affairs and honors programming, she has designed co-curricular collaborative learning opportunities including the Freshman Service Experience and the Graduate Portal Program. Additionally, she has crafted and implemented research-based training programs for the private and public sectors. In 2011, the Ashoka Foundation presented her with an "Award for Outstanding Contributions to Social Entrepreneurship Education" and included her peer-reviewed syllabus in its list and publication of the top ten syllabi in the field.
Her research and teaching focus on knowledge transfer, culture, and innovation in a range of settings including cross-national alliances; internet and global governance; cross-national, virtual collaboration; and social entrepreneurship. Also included is work centering on interorganizational learning and institutional change with a special focus on new media and technology policy issues in the developing world. Prof. Levinson’s writings appear online and in journals ranging from Information Technologies and International Development to International Studies Perspectives. She received her bachelor's, master's, and doctoral degrees from Harvard University.
There is a lot of interest and concern, both in research and industry, about the potential for correlating user accounts across multiple online social networking sites. In this paper, we focus on the challenge of designing account correlation schemes that achieve high reliability, i.e., low error rates, in matching accounts, even when applied in large-scale networks with hundreds of millions of user accounts. We begin by identifying four important properties, namely Availability, Consistency, non-Impersonability, and Discriminability (ACID), that features used for matching accounts need to satisfy in order to achieve reliable and scalable account correlation. Even though public attributes like name, location, profile photo, and friends do not satisfy all the ACID properties, we show how it is possible to leverage multiple attributes to build SCALABLE-LINKER, a reliable and scalable account correlator. We evaluate the performance of SCALABLE-LINKER in correlating accounts from Twitter and Facebook, two of the largest real-world social networks. Our tests using ground truth data about correlated accounts show that while SCALABLE-LINKER can correlate as many as 89% of accounts (true positive rate) with less than a 1% false positive rate when evaluated over small, thousand-node subsets of Facebook accounts, the true positive rate drops to 21% (keeping the 1% false positive rate) when the evaluation scales to include all of the more than a billion Facebook accounts. Our findings reflect the potential as well as the limits of reliably correlating accounts at scale using only public attributes of accounts.
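To make the idea of combining several public attributes concrete, here is a hypothetical scoring sketch (the attribute names, weights, and similarity functions are illustrative assumptions, not the actual SCALABLE-LINKER design): a candidate pair is accepted only if enough independent attributes agree, which is what keeps the false positive rate low at scale.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Crude string similarity in [0, 1]; a real system would use better matchers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(acct_a, acct_b, weights=None):
    """Combine several public attributes into one matching score.
    Attribute names and weights are illustrative only."""
    weights = weights or {"name": 0.4, "location": 0.2, "friends": 0.4}
    score = weights["name"] * name_similarity(acct_a["name"], acct_b["name"])
    score += weights["location"] * (acct_a["location"] == acct_b["location"])
    common = len(set(acct_a["friends"]) & set(acct_b["friends"]))
    denom = max(1, min(len(acct_a["friends"]), len(acct_b["friends"])))
    score += weights["friends"] * (common / denom)
    return score

a = {"name": "Jane Doe", "location": "Paris", "friends": {"u1", "u2", "u3"}}
b = {"name": "jane.doe", "location": "Paris", "friends": {"u2", "u3", "u9"}}
print(match_score(a, b))   # accept the pair only above a strict threshold
```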
Failure detectors were introduced by Chandra and Toueg in 1996 and have been the subject of active research ever since.
We will present failure detectors, the notions of reduction and
of the weakest failure detector for solving a problem in the presence of failures. We will then give a state of the art of the results obtained, as well as of the implementations of failure detectors.
We consider the problem of controlling traffic lights in an urban environment composed of multiple adjacent intersections by using an intelligent transportation system to reduce congestion and delays. Traditionally, each intersection is managed statically: the order and durations of the green lights are pre-determined and do not adapt dynamically to the traffic conditions. Detectors are sometimes used to count vehicles on each lane of an intersection, but the data they report is generally used only to select between a few static sequences and timing setups. Here, we detail and study TAPIOCA, a distribuTed and AdaPtIve intersectiOns Control Algorithm that decides on a traffic light schedule. After a review of relevant related work, we first present and evaluate the TAPIOCA algorithm, using the SUMO simulator and the TAPASCologne dataset. We then study the use of a hierarchical wireless sensor network deployed at intersections and the consequences of the losses and delays it induces on TAPIOCA. Last but not least, we propose a prediction mechanism that alleviates these issues and show, using co-simulation between SUMO and OMNeT++, that such interpolation mechanisms are effectively able to replace missing or outdated data.
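As a toy illustration of detector-driven adaptivity (deliberately naive, and not the actual TAPIOCA algorithm, which is distributed and coordinates adjacent intersections), the sketch below selects the next green phase from current lane counts and falls back to the last known values when detector reports are lost or late, which is the situation the prediction mechanism addresses.

```python
def next_green_phase(lane_counts, last_known, min_green=10):
    """Choose the phase serving the most waiting vehicles.

    lane_counts: phase -> vehicle count reported by detectors (None if the
    report was lost or outdated); last_known: phase -> last valid count.
    Deliberately naive; the real algorithm is distributed and adaptive."""
    usable = {}
    for phase, count in lane_counts.items():
        usable[phase] = last_known.get(phase, 0) if count is None else count
        last_known[phase] = usable[phase]
    return max(usable, key=usable.get), min_green

last_known = {}
reports = {"north-south": 12, "east-west": None, "left-turns": 3}
print(next_green_phase(reports, last_known))   # -> ('north-south', 10)
```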
Sébastien Faye obtained a master's degree in computer science from the University of Picardie Jules Verne (Amiens, France) in 2011. He is currently a PhD student at the Computer Science and Networking Department (INFRES) of Telecom ParisTech (Paris, France). His research interests include Intelligent Transportation Systems and sensor networks.
Processor sharing models occur in a wide variety of situations. They are good models for
bandwidth sharing as well as being solutions to NUM for logarithmic utilities. In addition they
possess the desirable stochastic property of the stationary distribution being insensitive to the service time distribution. In this talk I will discuss new advances in understanding and characterizing
the behavior of randomized routing to PS servers that are heterogeneous in terms of their server
capacities.
In particular, starting with the identical-server case, we will first discuss the so-called power-of-two rule, whereby routing to the least occupied server amongst two randomly
chosen servers results in a very low server occupancy and a so-called propagation of chaos or
asymptotic independence. Using these insights we analyze the case of heterogeneous servers where
the server capacity can be one of M values. We provide a complete characterization of the stationary
distribution and prove that the limiting system is insensitive. We then consider a modified criterion
based on routing to the server with lower Lagrange costs. We compare these dynamic routing
strategies with an optimal static, state-independent scheme. We show that the dynamic schemes
are much better in terms of average delay, with the Lagrange-cost-based scheme being the best.
The techniques are based on a mean field analysis and an ansatz based on propagation of chaos.
Joint work with Arpan Mukhopadhyay (Waterloo).
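For readers unfamiliar with the power-of-two rule, here is a minimal event-driven simulation of the identical-server (supermarket) case under illustrative parameters; with exponential service times the queue-length process does not depend on whether servers use processor sharing or FCFS, so it serves as a stand-in here. It compares sampling two servers per arrival against purely random routing.

```python
import heapq
import random

def simulate(n_servers=50, lam=0.9, mu=1.0, horizon=500.0, choices=2, seed=1):
    """Supermarket-model sketch: Poisson arrivals at total rate n_servers*lam,
    exponential services of rate mu, each arrival joining the least occupied of
    `choices` uniformly sampled servers.  Returns the time-averaged fraction of
    servers holding at least k jobs, for k = 0..5 (illustrative parameters)."""
    rnd = random.Random(seed)
    queues = [0] * n_servers
    t, counter = 0.0, 0
    tail = [0.0] * 6
    events = [(rnd.expovariate(n_servers * lam), counter, "arrival", -1)]
    while events:
        nt, _, kind, server = heapq.heappop(events)
        if nt > horizon:
            break
        for k in range(6):                      # time-weighted occupancy tail
            tail[k] += (nt - t) * sum(q >= k for q in queues) / n_servers
        t = nt
        counter += 1
        if kind == "arrival":
            target = min(rnd.sample(range(n_servers), choices), key=lambda s: queues[s])
            queues[target] += 1
            if queues[target] == 1:             # server was idle: start a service
                heapq.heappush(events, (t + rnd.expovariate(mu), counter, "departure", target))
            counter += 1
            heapq.heappush(events, (t + rnd.expovariate(n_servers * lam), counter, "arrival", -1))
        else:
            queues[server] -= 1
            if queues[server] > 0:              # start the next job on this server
                heapq.heappush(events, (t + rnd.expovariate(mu), counter, "departure", server))
    return [round(x / t, 3) for x in tail]

print("P(occupancy >= k), k=0..5, two choices:", simulate(choices=2))
print("P(occupancy >= k), k=0..5, one choice: ", simulate(choices=1))
```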
The speaker was educated at the Indian Institute of Technology, Bombay (B.Tech, 1977),
Imperial College, London (MSc, DIC, 1978) and obtained his PhD under A. V. Balakrishnan at UCLA in
1983.
He is currently a University Research Chair Professor in the Dept. of ECE at the University of Waterloo,
Ont., Canada where he has been since September 2004. Prior to this he was Professor of ECE at Purdue
University, West Lafayette, USA. He is a Fellow of the IEEE and the Royal Statistical Society. He is
a recipient of the INFOCOM 2006 Best Paper Award and was runner-up for the Best Paper Award at
INFOCOM 1998.
His research interests are in modeling, control, and performance analysis of both wireline and wireless
networks, and in applied probability and stochastic analysis with applications to queueing, filtering, and
optimization.
Locator/Identifier splitting is a paradigm proposed to help Internet scalability. At the same time, the separation of the locator and identifier name spaces allows for a more flexible management of traffic engineering, mobility and multi-homing. A mapping system is required for the binding of locators and identifiers. This mapping is helped by the use of a cache where ongoing associations are kept. The management and dimensioning of this cache is a critical aspect of this new architecture for its future deployment. We present a model for the caching system and an evaluation of its behaviour using real traffic traces.
The talk will present the recent research work on LOC/ID splitting architectures.
As networks grow larger and larger, they become more likely to fail locally. Indeed, nodes may be subject to attacks, failures, memory corruption... In order to encompass all possible types of failures, we consider the most general failure model: the Byzantine model, where failing nodes have an arbitrary malicious behavior. In other words, tolerating Byzantine nodes means ensuring that there exists no strategy (however unlikely it may be) for the Byzantine nodes to destabilize the network.
We thus consider the problem of reliably broadcasting a message in a multihop network that is subject to Byzantine failures. Solutions exist, but require a highly-connected network. In this talk, we present our recent solutions for Byzantine-resilient broadcast in sparse networks, where each node has a very limited number of neighbors. A typical example is the grid, where each node has at most 4 neighbors. We thus show the trade-off between connectivity and reliability.
We challenge a set of common assumptions that are frequently used to
model interdomain routing in the Internet. We draw assumptions from the
scientific literature and confront them with routing decisions that are
actually taken by ASes, as seen in BGP feeds. We show that the assumptions are
too simple to model real-world Internet routing policies. We also show that
ASes frequently route in ways that are inconsistent with simple economic
models of AS relationships. Our results should introduce a note of caution
into future work that makes these assumptions and should prompt attempts to
find more accurate models.
Churn rate is the percentage rate at which customers discontinue using a service. Every service provider has a churn prediction model whose objective is to minimize the churn rate by giving discounts or offers to customers susceptible to churn. In this work, we present a preliminary model based on game theory and heuristics which performs well in a multi-period setting to maximize the revenue of the service providers under different scenarios.
We present the first empirical study of home network availability, infrastructure, and usage, using data collected from home networks around the world. In each home, we deploy a router with custom firmware to collect information about the availability of home broadband network connectivity, the home network infrastructure (including the wireless connectivity in each home network and the number of devices connected to the network), and how people in each home network use the network. Outages are more frequent and longer in developing countries, sometimes due to the network, and in other cases because users simply turn their home router off. We also find that some portions of the wireless spectrum are extremely crowded, that diurnal patterns are more pronounced during the week, and that most traffic in home networks is exchanged over a few connections to a small number of domains. Our study is both a preliminary view into many home networks and an illustration of how measurements from a home router can yield significant information about home networks. This is joint work with Mi Seon Park, Srikanth Sundaresan, Sam Burnett, Hyojoon Kim, and Nick Feamster. It will appear at IMC 2013.
This talk considers antenna selection (AS) at a receiver equipped with multiple antenna elements but only a single radio frequency chain for packet reception. As information about the channel state is acquired using training symbols (pilots), the receiver makes its AS decisions based on noisy channel estimates. Additional information that can be exploited for AS includes the time-correlation of the wireless channel and the results of the link-layer error checks upon receiving the data packets. In this scenario, the task of the receiver is to sequentially select (a) the pilot symbol allocation, i.e., how to distribute the available pilot symbols among the antenna elements, for channel estimation on each of the receive antennas; and (b) the antenna to be used for data packet reception. The goal is to maximize the expected throughput, based on the past history of allocation and selection decisions, and the corresponding noisy channel estimates and error check results. Since the channel state is only partially observed through the noisy pilots and the error checks, the joint problem of pilot allocation and AS is modeled as a partially observed Markov decision process (POMDP). The solution to the POMDP yields the policy that maximizes the long-term expected throughput. Using the Finite State Markov Chain (FSMC) model for the wireless channel, the performance of the POMDP solution is compared with that of other existing schemes, and it is illustrated through numerical evaluation that the POMDP solution significantly outperforms them.
The Locator/ID Separation Protocol (LISP), proposed by Cisco and currently under standardization at the IETF (Internet Engineering Task Force), is an instantiation of the paradigm separating locators and identifiers. LISP improves the Internet's scalability, while also providing additional benefits (e.g., support for multi-homing, traffic engineering, mobility, etc.) and having good incremental deployability properties. The talk will overview the principles of LISP, its use in contexts different from Internet scalability, and some of the ongoing work at Telecom ParisTech.
Large-scale distributed traceroute-based measurement systems are used to obtain the topology of the Internet at the IP level and can be used to monitor and understand the behavior of the network. However, existing approaches to measuring the public IPv4 network space often require several days to obtain a full graph, which is too slow to capture much of the network’s dynamics. This paper presents a new network topology capture algorithm, NTC, which aims to better capture network dynamics through accelerated probing, reducing the probing load while maintaining good coverage. There are two novel aspects to our approach: it focuses on obtaining the network graph rather than a full set of individual traces, and it uses past probing results in a new, adaptive way to guide future probing. We study the performance of our algorithm on real traces and demonstrate substantially improved performance compared to existing work. More info: http://ntc.top-hat.info/index.html
This paper addresses monitoring and surveillance applications using Wireless
Sensor Networks (WSN). In this context, several remote clients are interested
in receiving the information collected by the nodes in a WSN.
As WSN devices are most of the time constrained in energy and processing,
we present a caching architecture that helps reduce unnecessary
communications and adapt the network to application needs.
Our aim here is to cache information in order to improve the overall network lifetime,
while meeting the requirements of external applications in terms of information freshness.
We first describe and evaluate the performance of our caching system within the
framework of a Constrained Application Protocol (CoAP) proxy.
We then extend this work by showing how the cache could be enriched and exploited with
cross-layer data.
Based on information from routing packets and estimated updates of the nodes' power consumption,
we derive an optimization strategy which allows us to meet requirements on the freshness of
the cached values.
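A toy sketch of the caching idea, assuming a proxy that answers client requests from its cache as long as the stored value satisfies the application's freshness bound and only otherwise queries the (energy-constrained) sensor; the class name and policy are illustrative, not the actual CoAP proxy implementation.

```python
import time

class FreshnessCache:
    """Serve cached sensor readings while they satisfy a per-request
    freshness bound (in seconds); otherwise fall back to querying the sensor.
    A sketch of the idea only, not a CoAP implementation."""

    def __init__(self, query_sensor, max_age=30.0):
        self.query_sensor = query_sensor    # callable: resource -> value
        self.max_age = max_age
        self.store = {}                     # resource -> (value, timestamp)

    def get(self, resource, max_age=None):
        max_age = self.max_age if max_age is None else max_age
        cached = self.store.get(resource)
        if cached and time.time() - cached[1] <= max_age:
            return cached[0]                # fresh enough: no radio traffic
        value = self.query_sensor(resource) # costs energy in the WSN
        self.store[resource] = (value, time.time())
        return value

cache = FreshnessCache(query_sensor=lambda r: 21.5, max_age=30.0)
print(cache.get("temperature/node12"))      # first call hits the sensor
print(cache.get("temperature/node12"))      # second call is served from the cache
```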
In this talk, I address the problem of the validity of weighted automata in which the presence of epsilon-circuits results in infinite summations. Earlier works either rule out such automata or characterise the semirings in which these infinite sums are all well-defined.
By means of a topological approach, we take here a definition of validity that is strong enough to ensure that in any kind of semiring, any closure algorithm will succeed on every valid weighted automaton and turn it into an equivalent proper automaton. This definition is stable with respect to natural transformations of automata.
The classical closure algorithms, in particular algorithms based on the computation of the star of the matrix of epsilon-transitions, cannot be used to decide validity. This decision problem remains open for general topological semirings.
We present a closure algorithm that yields a decision procedure for the validity of automata in the case where the weights are taken in Q or R. This case had never been treated before, and we wanted to include it in the Vaucanson platform.
Joint work with Sylvain Lombardy (Universite de Bordeaux).
Network coding has been shown to be an effective technique to increase throughput and delay performance in a wide range of network scenarios. This talk focuses on the application of network coding to wireless broadcast. An overview of existing coding methods will be provided. Among others, random linear network code (RLNC) and instantly decodable network code (IDNC) are two popular choices. RLNC is throughput optimal but suffers from slow decoding, whereas IDNC has fast decoding but is suboptimal in throughput especially when the number of users is large. In between these two extremes is our recently proposed method called sparse network coding (SNC). This talk will present the design of this coding algorithm and some related computational complexity questions. Performance comparison with RLNC and IDNC will also be made.
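To recall what random linear coding does at the packet level, here is a toy encoder/decoder over GF(2) (generation size, packet length and the amount of redundancy are illustrative assumptions, and the real schemes discussed in the talk differ): coded packets are random XOR combinations of the source packets, and any set of combinations with full-rank coefficients can be decoded by Gaussian elimination.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(packets, n_coded):
    """Random linear combinations of the source packets over GF(2)."""
    coeffs = rng.integers(0, 2, size=(n_coded, packets.shape[0]))
    return coeffs, (coeffs @ packets) % 2

def decode(coeffs, coded):
    """Gauss-Jordan elimination over GF(2); returns None if the coefficient
    matrix lacks full column rank (decoding then needs more packets)."""
    A = np.concatenate([coeffs, coded], axis=1)
    n = coeffs.shape[1]
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            return None
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2
        row += 1
    return A[:n, n:]

packets = rng.integers(0, 2, size=(4, 16))      # 4 source packets of 16 bits each
recovered = None
while recovered is None:                        # over GF(2), a few extras may still be rank-deficient
    coeffs, coded = encode(packets, n_coded=6)  # two redundant combinations
    recovered = decode(coeffs, coded)
print("decoded correctly:", np.array_equal(recovered, packets))
```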
Chi Wan (Albert) Sung received his B.Eng, M.Phil and Ph.D in Information Engineering from the Chinese University of Hong Kong in 1993, 1995, and 1998, respectively. He worked in the Chinese University of Hong Kong as an Assistant Professor from 1998 to 1999. He joined City University of Hong Kong in 2000, and is now an Associate Professor of the Department of Electronic Engineering.
He is an Adjunct Associate Research Professor in University of South Australia, and is on the editorial board of the ETRI journal and of the Transactions on Emerging Telecommunications Technologies (ETT).
His research interests include wireless communications, network coding, information theory, and algorithms and complexity.
We study a game-theoretic model for security/availability in a networking
context. To perform some desired task, a defender needs to choose a subset
from a set of resources. To perturb the task, an attacker picks a resource to attack.
We model this scenario as a 2-player game and are
interested in describing its set of Nash equilibria.
The games we study have a particular structure, for which we can use
the theory of blocking pairs of polyhedra, pioneered by Fulkerson,
to arrive at a reasonably satisfactory understanding of the Nash equilibria.
The subsets of resources that support Nash equilibrium strategies of the attacker,
called "vulnerability sets", are of particular interest, and we identify them in several
specific games of this type. An example of a game of this sort is when the set of
resources is the set of edges of a connected graph,
the defender chooses as its subset the edges of a spanning tree,
and the attacker chooses an edge to attack with the aim of
breaking the spanning tree. (joint work with Assane Gueye, Aron Laszka, and Jean Walrand)
In the presentation, we will discuss exact integer programming models for transmission scheduling in wireless networks based on the notion of compatible set.
A compatible set is defined as a subset of radio links that can transmit simultaneously with acceptable interference. The issue is to find a set of compatible sets that, when properly interlaced in the transmission slots, will maximize a traffic objective. We will present integer programming formulations of the underlying optimization problem and discuss their computational effectiveness.
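For small instances the notion of a compatible set can be made concrete without an ILP solver: the brute-force sketch below (with a made-up SINR-style feasibility test and illustrative gain values) enumerates subsets of links and keeps the maximal ones whose mutual interference stays acceptable. The actual work relies on integer programming formulations; this is only to fix ideas.

```python
from itertools import combinations

# Illustrative link gains: gain[i][j] is the gain from transmitter j to receiver i.
gain = [
    [1.0, 0.3, 0.1],
    [0.2, 1.0, 0.4],
    [0.1, 0.3, 1.0],
]
noise, sinr_min = 0.1, 1.5

def is_compatible(links):
    """A set of links is compatible if every link meets the SINR threshold
    when all links in the set transmit simultaneously (toy model)."""
    for i in links:
        interference = sum(gain[i][j] for j in links if j != i)
        if gain[i][i] / (noise + interference) < sinr_min:
            return False
    return True

n = len(gain)
compatible = [set(s) for k in range(1, n + 1)
              for s in combinations(range(n), k) if is_compatible(s)]
maximal = [s for s in compatible if not any(s < t for t in compatible)]
print("maximal compatible sets:", maximal)
```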
Michal Pioro is a professor and Head of the Computer Networks and Switching Division at the Institute of Telecommunications, Warsaw University of Technology, Poland. At the same time he is a professor at Lund University, Sweden. He received a Ph.D. degree in telecommunications in 1979, and a D.Sc. degree (habilitation) in 1990, both from the Warsaw University of Technology. In 2002 he received a Polish State Professorship. His research interests concentrate on modeling, optimization and performance evaluation of telecommunication networks and systems. He is an author of four books and more than 150 technical papers published in telecommunication journals and conference proceedings. He has led many research projects for the telecom industry in the field of network modeling, design, and performance analysis. He is deeply involved in international research projects including FP7, Celtic and COST projects.
It is well-known that most problems in distributed computing cannot be solved in a wait-free manner, i.e., ensuring that processes are able to make progress independently of each other. The failure-detector abstraction was proposed to circumvent these impossibilities. Intuitively, a failure detector provides each process with some (possibly incomplete and inaccurate) information about the current failure pattern, which allows the processes to wait for each other in order to compute a consistent output.
We motivate and propose a new way of thinking about failure detectors which allows us to define, quite surprisingly, what it means to solve a distributed task wait-free using a failure detector. We separate computation processes that obtain inputs and are supposed to produce outputs from synchronization processes that are subject to failures and can query a failure detector. In our framework, we obtain a complete classification of tasks, including ones that evaded comprehensible characterization so far, such as renaming or weak symmetry breaking. More info http://www.mefosyloma.fr/j2013-06-07.html
Join-Free Petri nets, whose transitions have at most one input place, model systems without synchronizations while Choice-Free Petri nets, whose places have at most one output transition, model systems without conflicts. These classes respectively encompass the state machines (or S-systems) and the marked graphs (or T-systems).
Whereas a structurally bounded and structurally live Petri net graph is said to be "well-formed", a bounded and live Petri net is said to be "well-behaved". Necessary and sufficient conditions for the well-formedness of Join-Free and Choice-Free nets have been known for some time, yet the behavioral properties of these classes are still not well understood. In particular efficient sufficient conditions for liveness have not been found until now.
We extend results on weighted T-systems to the class of weighted Petri nets and present transformations which preserve the feasible sequences of transitions and reduce the initial marking. We introduce a notion of "balancing" that makes possible the transformation of conservative systems into so-called "1-conservative systems" while retaining the feasible transition sequences. This transformation leads to polynomial sufficient conditions of liveness for well-formed Join-Free and Choice-Free nets. More info http://www.mefosyloma.fr/j2013-06-07.html
Consensus in a network is a fundamental, thoroughly studied problem. It is well known that the main obstacles to it are unreliability and asynchrony, as stated by the celebrated "FLP impossibility result". However, under milder assumptions, several works in the past decade have shown that consensus can be reached using very simple algorithms known as gossip. Interestingly enough, these gossip algorithms turned out to be essential in the more complex setting of distributed optimization, where the sought consensus has to solve an optimization problem.
The first part of this talk will introduce the main concepts and tools: distributed algorithms, gossip, distributed optimization. In a second part of the talk, I will discuss how to rely on distributed optimization to build more robust gossip schemes.
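For readers unfamiliar with gossip, here is a minimal sketch of randomized pairwise averaging on a fixed graph (the ring topology, initial values and step count are illustrative): at each step a random edge is selected and its two endpoints replace their values by their average, which drives all nodes to consensus on the global mean.

```python
import random

# Illustrative ring network of 8 nodes with arbitrary initial values.
edges = [(i, (i + 1) % 8) for i in range(8)]
values = [float(i) for i in range(8)]
target = sum(values) / len(values)              # pairwise averaging preserves the sum

random.seed(0)
for step in range(2000):
    i, j = random.choice(edges)
    values[i] = values[j] = (values[i] + values[j]) / 2.0

print("consensus target:", target)
print("max deviation after gossip:", max(abs(v - target) for v in values))
```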
A careful perusal of the Internet evolution reveals two major trends - explosion of cloud-based services and video streaming applications. In both of the above cases, the owner (e.g., CNN, YouTube, or Zynga) of the content and the organization serving it (e.g., Akamai, Limelight, or Amazon EC2) are decoupled, thus making it harder to understand the association between the content, owner, and the host where the content resides. This has created a tangled world wide web that is very hard to unwind. In this picture, ISPs and network administrators are losing the control of their network while struggling to find new mechanisms to increase revenues.
In this talk, I’ll present some measurements to illustrate the tangle, showing some data about Internet traffic, along with some side notes about user privacy.
I’ll then present Dn-Hunter, a system that leverages the information provided by DNS traffic to untangle it. By parsing DNS queries, traffic flows are tagged with the associated domain name. This association reveals a large amount of useful information to automatically discover (i) what services run on a layer-4 port or server, (ii) which content is accessed via TLS encryption, (iii) what content/service a given CDN or cloud provider handles, and (iv) how a particular CDN or Cloud serves users’ requests.
Simply put, the information provided by DNS traffic is one of the key components required to unveil the tangled web, and to restore network and application visibility to the network administrators.
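A toy version of the DNS-tagging idea (the data structures and sample records are made up for illustration; the real Dn-Hunter operates on live traffic): remember which domain name each client resolved to which IP address, then label subsequent flows towards that IP with the domain.

```python
# Map (client, server_ip) -> domain learned from DNS responses, then tag flows.
dns_bindings = {}

def observe_dns_response(client_ip, domain, resolved_ips):
    """Record the domain-to-IP bindings seen in a DNS response."""
    for ip in resolved_ips:
        dns_bindings[(client_ip, ip)] = domain

def tag_flow(client_ip, server_ip):
    """Label a flow with the domain the client previously resolved, if any."""
    return dns_bindings.get((client_ip, server_ip), "unknown")

# Illustrative records.
observe_dns_response("10.0.0.5", "cdn.example.com", ["192.0.2.10", "192.0.2.11"])
print(tag_flow("10.0.0.5", "192.0.2.10"))   # -> cdn.example.com
print(tag_flow("10.0.0.5", "198.51.100.7")) # -> unknown (no prior DNS query seen)
```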
Marco Mellia graduated from the Politecnico di Torino with a Ph.D. in Electronic and Telecommunication Engineering in 2001. Between February and October 1997, he was a Researcher supported by CSELT. He was a visiting PhD student from February 1999 to November 1999 at the Computer Science Department of Carnegie Mellon University, where he worked with Prof. Hui Zhang and Ion Stoica. From February to March 2002 he visited the Sprint Advanced Technology Laboratories in Burlingame, California, working on the IP Monitoring Project (IPMON). During the summers of 2011 and 2012 he visited Narus Inc., Sunnyvale, California, where he worked on traffic classification problems.
He has co-authored over 180 papers published in international journals and presented in leading international conferences, all of them in the area of telecommunication networks. He participated in the program committees of several conferences including ACM SIGCOMM, ACM CoNEXT, IEEE Infocom, IEEE Globecom and IEEE ICC.
His research interests are in the design and investigation of energy-efficient networks (green networks) and in traffic monitoring and analysis. He is currently the coordinator of the mPlane Integrated Project, which focuses on building an Intelligent Measurement Plane for Future Network and Application Management.
He currently works at the Dipartimento di Elettronica e Telecomunicazioni of Politecnico di Torino as an Assistant Professor.
Survivability in IP-over-WDM networks has already been extensively discussed in a series of studies. To date, most of the studies assume single-hop working routing of traffic requests. In this paper, we study the multi-layer survivable design of a virtual topology in the context of multiple-hop working routing for IP layer traffic requests. The design problem is composed of two problems which are simultaneously solved:
(i) finding the most efficient or economical multi-hop routing of the IP traffic flows with different bandwidth granularities over the virtual topology, which involves some traffic grooming, and
(ii) ensuring that the virtual topology is survivable through an appropriate mapping of the virtual links over the physical topology, if such a mapping exists.
In order to solve such a complex multi-layer resilient network design problem, we propose a column generation ILP model. It allows exploiting the natural decomposition of the problem and helps devise a scalable solution scheme.
We conducted numerical experiments on a German network with 50 nodes and 88 physical links. Not only could we solve much larger data instances than those published in the literature, but we also observed that multi-hop routing allows a saving of up to 10% in the number of lightpaths, depending on the traffic load.
Simulation is the research tool of choice for a majority of the mobile ad hoc network (MANET)
community; however, while the use of simulation has increased, the credibility of the simulation results
has decreased. Since mobility patterns can significantly affect the performance of a protocol,
choosing a realistic mobility model is one important aspect to consider in the development of a credible
MANET simulation scenario.
In addition to being realistic, a mobility model should be easy to understand and use. Unfortunately, most
of the simple mobility models proposed thus far are not realistic and most of the realistic mobility models
proposed thus far are not simple to use.
In this seminar, I will present SMOOTH, a new mobility model that is realistic (e.g., SMOOTH is
based on several known features of human movement) and is simple to use (e.g., SMOOTH
does not have any complex input parameters). In addition to presenting SMOOTH, I will show results
that validate that SMOOTH imitates human movement patterns present in real mobility traces collected
from a range of diverse scenarios, and I will compare SMOOTH with other mobility models that have been
developed on similar mobility traces. Lastly, I will discuss tools that my group has created to aid the
development of more rigorous simulation studies. While this work focuses on the MANET field, the
takeaway message with regard to credible simulation is applicable to other computing fields.
Tracy Camp is a Full Professor of Computer Science in the Department of Electrical Engineering and Computer Science at the Colorado School of Mines. She is the Founder and Director of the Toilers (http://toilers.mines.edu), an active ad hoc networks research group. Her current research interests include the credibility of ad hoc network simulation studies and the use of wireless sensor networks in geosystems. Dr. Camp has received over 20 grants from the National Science Foundation, including a prestigious NSF CAREER award. In total, her projects have received over $20 million in external funding. This funding has produced 12 software packages that have been requested from (and shared with) more than 3000 researchers in 86 countries (as of October 2012). Dr. Camp has published over 80 refereed articles and 12 invited articles, and these articles have been cited almost 4,000 times (per Microsoft Academic Search) and over 7,000 times (per Google Scholar) as of December 2012.
Dr. Camp is an ACM Fellow, an ACM Distinguished Lecturer, and an IEEE Senior Member. She has enjoyed being a Fulbright Scholar in New Zealand (in 2006), a Distinguished Visitor at the University of Bonn in Germany (in 2010), and a keynote presenter at several venues, e.g., at the 7th International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2011) in Adelaide, Australia, and the 3rd International Conference on Simulation Tools and Techniques (SIMUTools 2010) in Malaga, Spain. In December 2007, Dr. Camp received the Board of Trustees Outstanding Faculty Award at the Colorado School of Mines; this award was only given five times between 1998-2007. She shares her life with Max (born in 2000), Emma (born in 2003), her husband (Glen), and three pets (two cats and a dog). The four humans are vegetarians who tremendously enjoy living in the foothills of the Rockies.
Indoor localization has attracted much attention recently due to its potential for realizing indoor location-aware application services. This talk considers a time-critical scenario with a team of soldiers or first responders conducting emergency mission operations in a large building in which infrastructure-based localization is not feasible (e.g., due to management/installation costs, power outage, terrorist attacks). In this talk, I will present a collaborative indoor positioning scheme called CLIPS that requires no preexisting indoor infrastructure. Assuming that each user has a received signal strength map for the area in reference, an application can compare and select a set of feasible positions when the device receives actual signal strength values at runtime. Then, dead reckoning is performed to remove invalid candidate coordinates, eventually leaving only the correct one, which can be shared amongst the team. The evaluation results from an Android-based testbed show that CLIPS converges to an accurate set of coordinates much faster than existing non-collaborative schemes (more than 50% improvement under the considered scenarios).
Dr. Uichin Lee is an assistant professor in the Department of Knowledge Service Engineering at KAIST, Korea. He received his Ph.D. degree in Computer Science from UCLA in 2008. Before joining KAIST, he worked for Alcatel-Lucent Bell Labs at Holmdel as a member of technical staff. His research interests include mobile/pervasive computing and social computing systems.
In this talk we survey control issues in the grid, and how the introduction of renewables brings new and interesting control problems. We also explain the need for economic theory to guide the formulation of contracts for resources needed for reliable real-time control.
Control of the grid takes place on many time-scales, and is analogous to many other control problems, such as those confronted in aviation. There is decision making on time scales of days, weeks, or months; much like the planning that takes place for ticket sales for a commercial airline. Hourly decision making of energy supply is analogous to the chatter between pilot and air traffic controller to re-adjust a route in response to an approaching thunderstorm. Then, there is regulation of the grid on time-scales of seconds to minutes; consider the second-by-second movement of the ailerons on the wings of an airplane, in response to disturbances from wind and rain hitting the moving plane. There are also transient control problems: the recovery of the grid following a generator outage is much like the take-off or landing of an airplane.
It is important to keep these analogies in mind so that we can have an informed discussion about how to manage the volatility introduced to the grid through renewable energy sources such as wind and solar. The new control problems in the grid will be solved by engineers, as we have solved many similar control problems.
Sean Meyn received the B.A. degree in mathematics from the University of California, Los Angeles (UCLA), in 1982 and the Ph.D. degree in electrical engineering from McGill University, Canada, in 1987 (with Prof. P. Caines, McGill University). He is now Professor and Robert C. Pittman Eminent Scholar Chair in the Department of Electrical and Computer Engineering at the University of Florida, and director of the Laboratory for Cognition & Control. His research interests include stochastic processes, optimization, and information theory, with applications to power systems.
I will present an approach that supports the idea of ubiquitous computing through finger-mounted motion sensors. We are motivated to enable users to interact with any grasped device through one generic wearable interface. Two prototypes illustrate this idea: Tickle (TEI 2013) allows for detecting microgestures on any surface, various shapes, and generic devices by physically decoupling input and output and attaching motion sensors to the fingers to recognize tiny finger movements as microgestures. Moreover, we show that modelling the whole hand is possible with 8 sensor units while taking bio-mechanics into account (AH 2013). That allows for detecting any hand pose without suffering from occlusion or depending on certain light conditions. We finally discuss some hardware and gesture design challenges of our approach, but also show how our approach is a step in the direction of ubiquitous computing.
Currently I am a doctoral student in the Integrated Graduate Program in Human-Centric Communication (IGP H-C3) at TU Berlin, which is affiliated with the Telekom Innovation Laboratories. My research is on ergonomic design for gestural "busy-hand interfaces" and I am supervised by Sebastian Muller (T-Labs Berlin) and Michael Rohs (University of Hannover). During my postgraduate studies I was an exchange student at Glasgow University, where I was supervised by Stephen Brewster. Furthermore, I did internships at CSIRO Australia and at HITLab New Zealand, where I was supervised by Mark Billinghurst. Before my research career, I was a docent of multimedia design at the University of Applied Sciences Berlin in 2009, and I also worked as an interaction designer for the Jewish Museum Berlin. For my Masters, I studied design and communications at the University of the Arts, Berlin.
I will discuss the theoretical implications of embodied perceptual and personal spaces in interactive tabletops and surfaces, and present InGrid, an Interactive Grid table. InGrid offers several affordances to the user, who can interact not only with tangible and intangible objects but also with other users.
Mounia Ziat earned a B.A. in Electronics Engineering, and an M.A. and Ph.D. in Technologies and Human Sciences. She also studied Arts for three years. She was a Postdoctoral Fellow at McGill University and at Wilfrid Laurier University, and she served as a sessional lecturer at the University of Guelph-Humber. Her research interests include haptic device design and HCI, human tactile and visual perception, cognitive neuroscience and BCI. She also enjoys scuba diving, camping, canoeing, and hiking. Mounia Ziat is an assistant professor at Northern Michigan University.
Information systems are now being called upon not only to help us keep organized and productive but also to help build the fabric of the way we live. They might help us focus by reducing disruption, they might engage in various activities to help people enjoy others, or they might even try to give people increasing self-awareness. This talk will introduce notions of how we can introduce social awareness in our design practices and artifacts. Dr. Selker will introduce lines of research that proceed from creating and evaluating design sketches around recognizing and respecting human intention. These design sketches and their experiments strive to lead to, and are part of, creating a considerate cyber-physical world. Dr. Gentes will reflect on how considerate systems fit into reflective technologies that use genres and create new ones. Using methods from the humanities, such as the analysis of textual and visual contents or the consideration of social literacies, can be helpful to the design and evaluation of information technologies. Annie and Ted are now engaged in writing a book which develops these ideas further, extending examples and theories of how concepts like poetics extend our understanding of technology creation and use.
Ted Selker's work is focused on bringing people together with technology. He has been working to develop the CMU Silicon Valley research program since 2009, where he runs the Considerate Computing group. He created and ran the Context Aware Computing group at the MIT Media Lab. At IBM he created the USER research group, drove inventions into products and became an IBM Fellow. He has also had research & teaching positions at Atari, PARC, Stanford and Hampshire College, among others. He consults to help companies from Google, Herman Miller and Pixar to startups. His successes at targeted product creation and enhancement have resulted in numerous products, awards, patents, and papers and have often been featured in the press.
Annie Gentes is professor of Information and Communication Sciences at Telecom ParisTech. She is the head of the Codesign and Media Studies Lab, which studies the invention and design of New Media and Information Technology. She teaches graduate courses in new media art, innovation and design. She is responsible for the master program: "Design, Media, Technology" with the University Pantheon Sorbonne and the Ecole Nationale Superieure de Creation Industrielle. She teaches political communication at the Institute of Political Sciences in Lille. Her research focuses on defining the characteristics of ICT as reflexive technologies. She currently works with The Louvre Museum, Grand Palais, Arts and Crafts Museum, Contemporary Art Museum Beaubourg, and other cultural and art institutions.
Wireless sensor networks (WSN) are a powerful type of instrument with the
potential of remotely observing the physical world at high resolution and scale.
However, to be an efficacious tool WSN monitoring systems must provide
guarantees on the quality of the observations and mechanisms for real-time
data analysis. These problems are made challenging by the WSN's limited resources
and the high volume of sensor data. In this talk I will address both challenges and
illustrate two monitoring solutions, which are built on top of statistical models and
offer dynamic trade-offs between resource efficiency and quality of service.
More precisely, I will present an energy-efficient system, called SAF, for answering
on-line queries with quality guarantees, detecting anomalies in real time, and
clustering similar nodes. The system relies on a class of simple time series models
built at sensors, which are cheap to learn and dynamically adapt to variations in the
data distribution to accurately predict sensor values and improve resource utilization.
SAF also provides an integrated fault detection system and a hierarchical scheme to
improve scalability in case of large-scale WSN.
SAF's limited ability to cope with complex dynamics and data instability is overcome
by the SmartEnv real-time monitoring system. It considerably improves on SAF and previous
work by providing probabilistic guarantees on the service quality even in case of sensor
malfunctioning, data instability and temporary communication disruption, and by automatically
analyzing sensor data (e.g., trends, temporal-spatial correlations). SmartEnv is also able
to analyze anomalies, thus distinguishing between sensor malfunctioning and unexpected
variations in the phenomenon.
I will conclude my talk with an outline of some SmartEnv applications and future work,
particularly in the context of smart cities/grids.
Daniela Tulone holds a Ph.D. in Computer Science from University of Pisa
jointly with MIT, a M.S. in Computer Science from NYU, and a B.S. and M.S. degree in
Mathematics from University of Catania. She has worked in academia and research labs
(e.g., Bell-Labs, MIT, AT&T Labs, NYU, University of Pisa, C.N.R.), in R&D industry,
and recently at the Joint Research Center of the European Commission.
Her interests include wireless sensor networks, algorithms, secure distributed systems, the
design of dynamic trade-offs, smart grids, data analysis, and whatever involves a blend
of theory and applications.
Benes networks are constructed with simple switch modules and have many advantages, including small latency and requiring only an almost linear number of switch modules. As circuit-switches, Benes networks are rearrangeably non-blocking, which implies that they are full-throughput as packet switches, with suitable routing. Routing in Benes networks can be done by time-sharing permutations. However, this approach requires centralized control of the switch modules and statistical knowledge of the traffic arrivals. We propose a backpressure-based routing scheme for Benes networks, combined with end-to-end congestion control. This approach achieves the maximal utility of the network and requires only four queues per module, independently of the size of the network.
Longbo Huang is currently an assistant professor in the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University, Beijing, China. He received his Ph. D. degree from the Electrical Engineering department at the University of Southern California in August 2011, and worked as a postdoctoral researcher in the Electrical Engineering and Computer Sciences department at the University of California at Berkeley from July 2011 to August 2012. He was also a visiting professor at the Institute of Network Coding at the Chinese University of Hong Kong from December 2012 to February 2013.
Prior to his Ph.D., Longbo received his B.E. degree from Sun Yat-sen (Zhongshan) University, Guangzhou, China, and his M.S. degree from Columbia University, New York City, both in EE. His research interests are in the areas of Stochastic Network Optimization, Data Center Networking, Smart Grid, Processing Networks and Queueing Theory.
Increasing the share of energy produced by wind turbines, solar photovoltaics, and solar thermal power plants helps society address many key energy challenges including climate change, environmental degradation, and energy security. However, these renewable energy resources present new challenges, notably that their power production is intermittent: they produce when the wind is blowing and the sun is shining, not necessarily when we need it. In this talk, I will describe how distributed flexible resources such as commercial buildings, residential electric loads, and distributed storage units can support high penetrations of intermittent renewables and provide other services that make the grid run more efficiently and make power less expensive. As an example, I will show how one can use Markov models and linear/nonlinear filtering techniques to centrally control aggregations of air conditioners to provide power system services with high accuracy, but low requirements for sensing and communications. Importantly, the approach ensures that control actions are non-disruptive to the consumer. I find that, depending on the performance required, loads may not need to provide state information to the central controller in real time or at all, which keeps installation and operation costs low. Additionally, I will discuss a number of practical issues including estimates of the resource size and revenue potential. Finally, I will describe several new research directions including modeling load aggregations as uncertain reserves for day-ahead power system planning and real-time security.
Johanna Mathieu is a postdoctoral researcher in the Power Systems Laboratory at ETH Zurich, Switzerland. In May 2012 she received her PhD in mechanical engineering from the University of California at Berkeley. She has a BS from MIT in ocean engineering and an MS from UC Berkeley in mechanical engineering.
In this talk I plan to give a quick walk-through of part of the work we have done over the last 2.5 years. More specifically, I will discuss what we have learned in the packet forwarding area and a new perspective on the relation between the routing and forwarding planes (or control and data planes, as they are called today). I will also describe a few new applications we have developed over NDN and what we have learned in the process.
Cellular wireless networks are being expanded to support ultra-high-speed multimedia applications and to incorporate next-generation energy-efficient micro base stations in support of a much higher density of mobile stations. The operations of neighboring base station nodes are coordinated in a cross-layer manner to mitigate interference, adapt to traffic rate fluctuations and autonomously react to failure events. In turn, mobile ad hoc networking mechanisms are investigated for a multitude of applications in which the involved networking systems do not make use of a permanent backbone infrastructure. Hybrid architectures include systems that require dynamically adaptive multi-hop access to backbone systems, and systems that demand resilient access and rapid adaptation to communications system degradations. Applications are planned for hybrid systems that include vehicular networks combining the use of multi-hop ad hoc routing techniques, WiFi, cellular wireless access technologies, and cloud computing architectures. We will review our recent research developments and studies of such network systems, selecting from the following topics: 1. For cellular wireless networks: adaptive rate/power scheduling for multicasting and unicasting; micro base station aided resilient failover; heterogeneous operations under the use of micro and macro base station layouts; design of resilient public safety network systems. 2. Vehicular ad hoc networks (VANETs): location-aware multicast packet distribution to highway vehicles using inter-vehicular wireless networking protocols. 3. Mobile ad hoc networks: mobile backbone networking (MBN).
Izhak Rubin received the B.Sc. and M.Sc. from the Technion, Israel, and the Ph.D. degree from Princeton University, Princeton, NJ, USA, in Electrical Engineering. Since 1970, he has been on the faculty of the UCLA School of Engineering and Applied Science where he is currently a Distinguished Professor in the Electrical Engineering Department. Dr. Rubin has had extensive research, publications, consulting, and industrial experience in the design and analysis of commercial and military computer communications and telecommunications systems and networks. Recent R&D projects include network design, simulation, management, and planning tools for network modeling and analysis, multi-task resource allocation, unmanned vehicle aided multi-tier ad hoc wireless communications networks; cross-layer adaptive power, rate and routing for mobile wireless cellular and ad hoc networks. He serves as co-director of the UCLA Public Safety Network Systems Laboratory. During 1979-1980, he served as Acting Chief Scientist of the Xerox Telecommunications Network. He served as co-chairman of the 1981 IEEE International Symposium on Information Theory; as program chairman of the 1984 NSF-UCLA workshop on Personal Communications; as program chairman for the 1987 IEEE INFOCOM conference; and as program co-chair of the IEEE 1993 workshop on Local and Metropolitan Area networks. Dr. Rubin is a Life Fellow of IEEE. He has served as an editor of the IEEE Transactions on Communications, Wireless Networks journal, Optical Networks magazine, IEEE JSAC issue on MAC techniques, Communications Systems journal, Photonic Networks Communications journal, and has contributed chapters to texts and encyclopedia on telecommunications systems and networks.
Most disruption-tolerant networking protocols focus on mere contact and intercontact characteristics to make forwarding decisions. We propose to relax such a simplistic approach and include multi-hop opportunities by annexing a node’s vicinity to its network vision. We investigate how the vicinity of a node evolves through time and whether such knowledge is useful when routing data. By analyzing a modified version of the pure WAIT forwarding strategy, we observe a clear tradeoff between routing performance and the cost of monitoring the neighborhood. By observing a vicinity-aware WAIT strategy, we emphasize how the pure WAIT misses interesting end-to-end transmission opportunities through nearby nodes. For the datasets we consider, our analyses also suggest that limiting a node’s neighborhood view to four hops is enough to improve forwarding efficiency while keeping control overhead low. (Accepted at the Fifth IEEE INFOCOM International Workshop on Network Science for Communication Networks, IEEE NetSciCom 2013.)
A centralized content distribution network (CDN) is a system in which users access contents stored in a few large data centers. On the contrary, in a distributed CDN, the contents are stored in a swarm of small servers, each with limited storage and service capacities. In such a setup, it is unclear how to operate the system in the best possible way and what its performance will be. Indeed, in a centralized CDN, the capacity of the system, in terms of the number of requests that can be simultaneously served, is simply the total upload capacity of the data centers, as each data center has enough capacity to store the whole catalog of contents (or would require little adjustment to make sure the requested contents are available). However, in a distributed CDN, we cannot assume that requested contents will always be stored in the available servers, and on-the-go adjustments would be too costly, so computing the capacity of the system becomes an issue. This problem is known as a load-balancing problem.
We model distributed CDNs via random bipartite graphs, whose distribution depends on how the network is operated, i.e. how the contents are replicated within the servers. We compute the capacity of such a system (i.e. the size of a maximum capacitated matching of the graph) in the limit of large systems, and then compare and optimize over diverse replication policies. The method used is the so-called cavity method from statistical physics, which involves the study of message passing algorithms (belief-propagation) on random graphs.
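To make the load-balancing question concrete, a small sketch (using networkx; the replication pattern, demands and capacities are illustrative and the instance is tiny) computes how many simultaneous requests a toy distributed CDN can serve, as a maximum flow on the bipartite content/server graph; the talk's analysis addresses the same quantity in the large-system limit via the cavity method.

```python
import networkx as nx

# Toy instance: 3 contents with request demands, 3 small servers with capacities,
# and an illustrative replication of contents on servers.
demand = {"c1": 2, "c2": 1, "c3": 2}
capacity = {"s1": 2, "s2": 1, "s3": 1}
stored = {"s1": ["c1", "c2"], "s2": ["c2", "c3"], "s3": ["c1", "c3"]}

G = nx.DiGraph()
for c, d in demand.items():
    G.add_edge("source", c, capacity=d)          # requests for each content
for s, cap in capacity.items():
    G.add_edge(s, "sink", capacity=cap)          # service capacity of each server
    for c in stored[s]:
        G.add_edge(c, s, capacity=cap)           # a server can only serve what it stores

served, _ = nx.maximum_flow(G, "source", "sink")
print("requests served simultaneously:", served, "out of", sum(demand.values()))
```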
Nowadays, due to excessive queuing, delays on the Internet can grow longer than several round trips between the Moon and the Earth – for which the “bufferbloat” term was recently coined. Some point to active queue management (AQM) as the solution. Others propose end-to-end low-priority congestion control techniques (LPCC). Under both approaches, promising advances have been made in recent times: notable examples are CoDel for AQM, and LEDBAT for LPCC. In this paper, we warn of a potentially fateful interaction when AQM and LPCC techniques are combined: namely (i) AQM resets the relative level of priority between best effort and low-priority congestion control protocols; (ii) while reprioritization generally equalizes the priority of LPCC and TCP, we also find that some AQM settings may actually lead best effort TCP to starvation. By an extended set of experiments conducted on both controlled testbeds and on the Internet, we show the problem to hold in the real world for any tested combination of AQM policies and LPCC protocols. To further validate the generality of our findings, we complement our experiments with packet-level simulation, to cover cases of other popular AQM and LPCC that are not available in the Linux kernel. To promote cross-comparison, we make our scripts and dataset available to the research community. Joint work with Dario Rossi, Claudio Testa, Silvio Valenti and Dave Taht.
Solving basic systems of linear equations has attracted many research efforts. Various methods have been considered to solve this kind of equation, e.g. Jacobi, Gauss-Seidel, Successive Over-Relaxation, Generalized Minimal Residual and, recently proposed, D-Iteration. They differ, of course, in memory requirements, computation cost and thus convergence speed. In this paper, we examine in detail some criteria concerning the costs and compare those methods on different types of large matrices (e.g. web graphs, social network graphs, directed/undirected graphs). The results presented in this paper are a first step in the search for an algorithm which is suitable for very large matrices, in either a sequential or parallel environment. Joint work with Dohy Hong, Gerard Burnside, Fabien Mathieu
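For readers unfamiliar with the iterations being compared, here is a minimal sketch (on an arbitrary toy matrix, not one of the graph matrices of the paper) contrasting Jacobi and Gauss-Seidel updates:

```python
# Minimal numpy sketch of two of the iterations compared in the paper
# (Jacobi and Gauss-Seidel) on a small diagonally dominant system A x = b.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

def jacobi(A, b, iters=50):
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for _ in range(iters):
        x = (b - R @ x) / D          # all components updated from the old iterate
    return x

def gauss_seidel(A, b, iters=50):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):           # each component reuses the freshest values
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

print("jacobi      :", jacobi(A, b))
print("gauss-seidel:", gauss_seidel(A, b))
print("numpy solve :", np.linalg.solve(A, b))
```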
We consider a non-cooperative Dynamic Spectrum Access (DSA) game where Secondary Users (SUs) opportunistically access the spectrum licensed to Primary Users (PUs). As SUs spend energy for sensing licensed channels, they may choose to be inactive during a given time slot in order to save energy. There thus exists a tradeoff between large packet delay, partially due to collisions between SUs, and high energy consumption spent for sensing the occupation of licensed channels. To overcome this problem, we take both packet delay and energy consumption into account in our framework. Due to the partial spectrum sensing, we use a Partially Observable Stochastic Game (POSG) formalism, and we analyze the existence and some properties of the Nash equilibrium using a Linear Program (LP). We identify a paradox: when licensed channels are more occupied by PUs, this may improve the spectrum utilization by SUs. Based on this observation, we propose a Stackelberg formulation of our problem where the network manager may increase the occupation of licensed channels in order to improve the SUs' average throughput. We prove the existence of a Stackelberg equilibrium and we provide some simulations that validate our theoretical findings.
The steady rise of user traffic in wireless cellular networks has resulted in the need for developing robust and accurate models of various performance metrics. A key metric of these networks is the signal-to-interference-and-noise ratio (SINR) experienced by a typical user. For tractability, the positions of base stations in such networks are often modelled by Poisson point processes, whereas actual deployments more often resemble lattices (e.g. hexagonal). Strikingly, it has been observed that under log-normal shadowing the Poisson model predicts the SINR experienced by a typical user more accurately than the hexagonal model. In this talk we seek to explain this interesting observation by way of a convergence result. Furthermore, we present numerically tractable, explicit integral expressions for the distribution of SINR in a cellular network modelled by a Poisson process. Our model incorporates a power-law path-loss model with arbitrarily distributed shadowing. The results are valid in the whole domain of SINR and, unlike previous methods, do not require the inversion of Laplace transforms.
Based on joint work with B. Blaszczyszyn and M.K. Karray.
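A rough Monte Carlo sketch of the Poisson model with power-law path loss and log-normal shadowing is given below; it only estimates the coverage probability numerically and does not reproduce the talk's explicit integral expressions. All parameter values are arbitrary.

```python
# Rough Monte Carlo sketch of the Poisson cellular model discussed above:
# base stations form a Poisson point process, path loss is a power law,
# shadowing is log-normal, and the typical user attaches to the strongest
# received signal.  Parameters below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
lam, beta, sigma_dB, noise = 1.0, 4.0, 8.0, 1e-3   # BS density, path-loss exponent, shadowing std, noise
radius, trials, theta = 15.0, 2000, 1.0            # window radius, runs, SINR threshold

def coverage_probability():
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius**2)
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))        # distances from BSs to the user at the origin
        shadow = 10 ** (sigma_dB * rng.standard_normal(n) / 10.0)
        power = shadow * r ** (-beta)              # received powers
        best = power.max()
        interference = power.sum() - best
        if best / (interference + noise) > theta:
            covered += 1
    return covered / trials

print("P(SINR > theta) ~", coverage_probability())
```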
It seems to be widely accepted that designing correct and highly
concurrent software is a sophisticated task that can only be undertaken by
experts. A crucial challenge is therefore to convert sequential code
produced by a mainstream programmer into a concurrent one. Using
synchronization techniques, such as locks or transactional memory, we
tackle the problem of wrapping a sequential implementation into a highly
concurrent one that looks sequential to every thread.
We evaluate the amount of concurrency provided by the resulting
implementations via the set of schedules (interleavings of steps of the
sequential code) they accept. We start with two synchronization
techniques: pessimistic and relaxed on the one hand (such as fine-grained
locking) and optimistic and strongly consistent on the other hand (such
as conventional transactional memories). We show that they are
incomparable in that each one may accept schedules not accepted by the
other. However, we show that the combination of relaxed consistency and
optimism strictly supersedes both pessimistic and strongly consistent
approaches.
Bandwidth-sharing networks, as introduced by Massoulie & Roberts (1998), model the dynamic interaction among an evolving population of elastic flows that compete for several links. The main area of application of such models is telecommunications, e.g. Internet congestion control. With policies based on optimization procedures, bandwidth-sharing networks are of interest from both a Queueing Theory and an Operations Research perspective.
In this work, we focus on the regime when link capacities and arrival rates are of a large order of magnitude compared to transfer rates of individual flows, which is standard in practice. Bandwidth-sharing networks are rather complicated systems (we operate with measure-valued processes to study them), and under general structural and stochastic assumptions they resist exact analysis. So we resort to fluid limit approximations. Under general assumptions, we derive the fluid limit for the network evolution in the entire time horizon (extending the corresponding result by Reed and Zwart (2010) for Markovian assumptions). Also, for a wide class of networks, we develop polynomial-time computable fixed-point approximations for their stationary distributions.
In the near future, a car will be equipped with a variety of sensors and wireless interfaces such as 3G/LTE, WiMAX, WiFi, or DSRC/WAVE. Our vision is to enable vehicles to communicate with each other and with the infrastructure over any and all physical communication channels, as soon as any channel comes into existence and as long as it is available. Although many automotive research papers have been published over the years, in reality today's vehicles are by and large connected only through cellular networks to centralized servers. Automotive research topics such as ad-hoc networking and delay-tolerant networking are still far from completion and unlikely to be deployed soon.
We believe the root cause of this persistent problem in networking vehicles is IP's communication model, where IP creates its own name space, the IP address space, assigns IP addresses to every communicating end point, and then encapsulates each piece of application data into an IP packet. This whole process insulates applications from the data delivery layer.
Taking named-data networking (NDN) as the starting point, we are developing V-NDN, a single framework to realize our vision. NDN identifies named data as the focal point in communication. Utilizing the fact that all data communications happen within an established application context, and that nodes running the same applications decide what data they want to get, NDN lets individual nodes request the desired data using application data names directly.
Data names come from applications and identify data directly; they exist once applications are running, independent of the time-varying connectivity of an ad hoc environment. This enables data to exist in the absence of connectivity, and to be exchanged over any physical connectivity once it comes into existence.
We have designed and developed V-NDN and demonstrated that our design indeed allows vehicles to utilize all available channels to communicate; they can effectively communicate with centralized servers as well as with each other to exchange application data in a completely ad hoc manner. In this talk we will go over the design choices and the preliminary results from our deployment.
The integration of content-oriented functionalities in network equipment is of critical importance for the deployment of future content delivery infrastructures. However, as today's high-speed network equipment is mostly designed to carry traffic from one location of the network to another by means of IP address information, this integration imposes severe changes to hardware and software technologies. In this talk we focus on the system design and evaluation of the basic building blocks enabling content-oriented communication primitives in network equipment. Specifically, we present challenges and solutions to realize name-based forwarding, packet-level storage management and content-based temporary state at high speed. Finally, we briefly introduce our on-going work on the implementation of our solutions on high-speed hardware and software platforms, and on the evolution of our designs to support enhanced content-oriented mechanisms.
I will start with an introduction on compressed sensing. Then I will talk about approximate message passing algorithms and their connection to the universality of a certain phase transition arising in polytope geometry and compressed sensing. Joint work with M. Bayati and A. Montanari (Stanford)
IPv4 address exhaustion triggered two parallel efforts: the migration to IPv6, and the deployment of IPv4 service continuity technologies, for the most part based on Carrier-Grade NAT. This talk will provide a broad overview of the current situation with IPv6 migration, will explain what the major actors are currently doing, and will try to provide some insight about the future. In particular, CGN and its impacts will be examined. Related standardization efforts will be discussed as well.
Simon Perreault is a network engineering consultant with Viagenie, a Canadian firm.
Network Service Provider alliances are envisioned to emerge in the near future as a means of selling end-to-end quality-assured services through interdomain networks. Several aspects of such a community must be discussed in order to ensure its economic and technical viability. Among the economic ones, pricing and revenue sharing are key aspects on which all the alliance's members must agree. In this context, we present a framework where services are sold via first-price auctions and incomes are formulated as the solution of a Network Utility Maximization problem. Then, we discuss the revenue sharing problem in this context, express the desirable properties of a revenue sharing method, argue why the existing methods are not suitable, and propose a family of solutions.
Isabel Amigo was born on January 23rd, 1984 in Montevideo, Uruguay. She holds a degree in Electrical Engineering (2007) from Universidad de la Republica, Uruguay. Since 2006 she has been a Research and Teaching assistant at the Electrical Engineering department of the School of Engineering, Universidad de la Republica, Uruguay. In late 2007 she joined the technical team of the national digital inclusion plan Plan Ceibal, where she was later in charge of the Projects department and the Research and Development area. In March 2010 she left Plan Ceibal in order to pursue a PhD thesis. She is currently a PhD candidate at Telecom Bretagne (France) and Universidad de la Republica (Uruguay), in a co-advised program, under the supervision of Prof. Sandrine Vaton and Prof. Pablo Belzarena.
The French Network and Information Security Agency (FNISA) and the
French Network Information Center (AFNIC) have recently published a
detailed report on the resilience of the French Internet. This document
is based on measurements performed on two key protocols of the Internet:
Border Gateway Protocol (BGP) and Domain Name System (DNS).
In order to evaluate the resilience of BGP, several indicators have been
used:
(i) the correct declaration of routing information to regional information
registries which is necessary to check messages received by routers;
(ii) the connectivity between operators in order to evaluate the risk of
a full disconnection;
(iii) the frequency of prefix hijacking by which an operator announces or
relays illegitimate routing information.
In order to estimate these indicators, the FNISA has analyzed all BGP
messages of four major French network operators over a period of 11 months.
Concerning the DNS protocol, the considered indicators are:
(i) the distribution of name servers among countries and operators;
(ii) the number of unpatched servers still vulnerable to the Kaminsky
vulnerability;
(iii) the deployment of protocols such as IPv6 and DNS Security Extensions
(DNSSEC).
Measurements have been performed in two ways: active measurements on DNS servers
of .fr domains and passive measurements by observing the traffic on
authoritative servers of .fr administered by AFNIC.
The aim of this talk is to present the results of this first report and
to introduce the new indicators and results which have been obtained
since then. At the same time, we will introduce several open
theoretical problems linked to this study.
In the constant search for simple and efficient routing algorithms,
a particularly attractive approach is greedy routing: every node
is assigned a coordinate, and routing consists of simply forwarding to
the neighbor closest to the destination. Unfortunately, in our
(Euclidean) world, this strategy is prone to failure. In this talk
I will describe how a small change – working in Hyperbolic space instead
of Euclidean space – allows one to guarantee that greedy routing is always
successful.
I will review the properties of hyperbolic space for this problem, and then
describe how to use it for provably successful greedy routing on static
graphs. Next I will describe how to extend this approach to growing graphs.
Finally, I will discuss strategies to make this sort of routing efficient –
that is, to ensure that paths through the network are short.
This is joint work with Andrej Cvetkovski
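A minimal sketch of the greedy forwarding rule, with a pluggable distance function and hyperbolic distance in the Poincare disk as one possible choice, could look as follows; the toy graph and coordinates are made up, and a real greedy embedding would be computed as in the talk.

```python
# Minimal sketch of greedy forwarding: each node knows only its neighbours'
# coordinates and hands the packet to the neighbour closest to the destination.
# The distance function is pluggable; hyperbolic distance in the Poincare disk
# is shown as one choice.  The topology and coordinates are illustrative only.
import math

def hyperbolic_distance(u, v):
    """Distance between two points of the open unit (Poincare) disk."""
    du, dv = 1 - (u[0]**2 + u[1]**2), 1 - (v[0]**2 + v[1]**2)
    diff = (u[0] - v[0])**2 + (u[1] - v[1])**2
    return math.acosh(1 + 2 * diff / (du * dv))

def greedy_route(graph, coords, src, dst, dist=hyperbolic_distance):
    path, current = [src], src
    while current != dst:
        nxt = min(graph[current], key=lambda n: dist(coords[n], coords[dst]))
        if dist(coords[nxt], coords[dst]) >= dist(coords[current], coords[dst]):
            return path, False          # stuck in a local minimum: greedy failure
        path.append(nxt)
        current = nxt
    return path, True

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
coords = {"a": (0.0, 0.0), "b": (0.5, 0.1), "c": (0.1, 0.5), "d": (0.6, 0.6)}
print(greedy_route(graph, coords, "a", "d"))
```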
Despite the steady and significant increase in Internet traffic, today’s Internet topologies are stable, reliable, and - underutilized. This is mainly due to the proven ISP practice to overprovision IP links, - often up to seventy percent, thus allowing for accommodation of unpredictable flows, while keeping the network stable and resilient against attacks and failures. Even with major benefits, the practice of overprovision leads to high capital and operational expenses, as well as higher energy consumption. At first glance, dynamic optical circuits can address the downsides: optical circuits can be setup to bypass any congested or faulty IP links and are one of the "greenest" technologies. However, the two networks – optical and Internet, have evolved as two fundamentally different and separately managed systems, and cannot easily operate in harmony. In this talk, I will discuss why past approaches to deploy dynamic optical circuits have not been widely adopted for IP routing, - from the point of view of network management, and present our research ideas on how Internet can effectively use optical circuits. I will talk about the importance of new systems research and present new directions in theoretical studies in this field, and also give a practical outlook from our EU Project ONE (http://www.ict-one.eu).
Admela Jukan received the M.Sc. degree in Information Technologies from the Politecnico di Milano, Italy, and the Dr. techn. degree (cum laude) in Electrical and Computer Engineering from the Technische Universitat Wien, Austria. She received her Dipl. -Ing. degree from the Fakultet Elektrotehnike i Racunarstva (FER), in Zagreb, Croatia.
She is Chair Professor of Communication Networks in Electrical and Computer Engineering Department at the Technische Universitat Braunschweig in Germany. Prior to coming to TU Braunschweig, she was research faculty at the Institut National de la Recherche Scientifique (INRS), University of Illinois at Urbana Champaign (UIUC) and Georgia Tech (GaTech). In 1999 and 2000, she was a visiting scientist at Bell Labs, Holmdel, NJ. From 2002-2004, she served as Program Director in Computer and Networks System Research at the National Science Foundation (NSF) in Arlington, VA.
Dr. Jukan serves as Associate Technical Editor for IEEE Communications Magazine and IEEE Network. She is a co-Editor in Chief of the Elsevier Journal on Optical Switching and Networking (OSN). She is an elected Vice Chair of the IEEE Optical Network Technical Committee, ONTC (Chair in 2014). She currently coordinates a collaborative EU project ONE, focusing on network management convergence of optical networks and the Internet. She is recipient of an Award of Excellence for the BMBF/CELTIC project "100Gb Ethernet" and was also awarded the IBM Innovation Award for applications of parallel computing for rich digital media distribution over optical networks.
The Smart Grid is the evolution of the electrical grid, a highly complex network, with one of the major factors being the integration of Machine-to-Machine type communications in every element of this network. This talk will concentrate on presenting the ongoing and future projects related to Smart Grids at Telecom Bretagne, such as networking protocols for Home Area Networks (HAN) and Neighborhood Area Networks (NAN).
Alexander Pelov is an Associate Professor of Computer Networks in the "Networking, Multimedia and Security" department at the Graduate Engineering School Telecom Bretagne, France. His research focuses on networking protocols for Machine-to-Machine communications, energy efficiency in wireless networks, and protocols and algorithms for Smart Grid applications, most notably related to Smart Meters, sub-metering and Electrical Vehicles. He received his M.Sc. (2005) from the University of Provence, France and Ph.D. (2009) from the University of Strasbourg, France, both in Computer Science.
We address the problem of designing distributed Multiple Access Control algorithms for wireless networks under the SINR interference model. In the proposed framework, time is divided into frames consisting of a fixed number of slots, and transmitters may adapt the power levels used in the various slots. We aim at developing fully distributed multiple access algorithms that are throughput-optimal in the sense that they perform as well as centralized scheduling algorithms. These algorithms are based on a simple power control mechanism, referred to as Power Packing. This mechanism allows each transmitter to tune its power levels in the different slots so as to achieve a target rate while minimizing the number of slots actually used. The proposed algorithms are throughput-optimal, simple and do not require any message passing: each transmitter adapts its power levels depending on the observed interference levels in the various slots. We illustrate the efficiency of our algorithms using numerical experiments.
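The following is a toy sketch of the power-packing idea as described above, i.e. reaching a target rate while using as few slots as possible; the channel parameters and the greedy slot ordering are assumptions made for illustration, not the paper's exact mechanism.

```python
# Toy sketch of the power-packing idea: reach a target rate while using as few
# slots of the frame as possible, by filling the least interfered slots at full
# power and giving the last slot only the power it needs.  Gain, noise and
# interference values are invented; the paper's mechanism may differ.
import math

def power_packing(target_rate, interference, p_max, gain=1.0, noise=1e-2):
    """Return a power level per slot (0.0 for unused slots)."""
    powers = [0.0] * len(interference)
    remaining = target_rate
    for slot in sorted(range(len(interference)), key=lambda s: interference[s]):
        sinr_full = p_max * gain / (noise + interference[slot])
        rate_full = math.log2(1 + sinr_full)
        if rate_full < remaining:
            powers[slot] = p_max               # fill this slot at full power
            remaining -= rate_full
        else:
            # last slot: just enough power to close the remaining rate gap
            powers[slot] = (2 ** remaining - 1) * (noise + interference[slot]) / gain
            return powers
    raise ValueError("target rate not reachable within this frame")

print(power_packing(target_rate=5.0, interference=[0.05, 0.5, 0.02, 0.2], p_max=1.0))
```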
Auctions have regained interest from researchers due to their various
new applications (Google AdWords auctions, cloud computing auctions,
privacy auctions, and white spaces spectrum auctions). In this work in
particular we explore auctions for spectrum that can be allocated
either to a single bidder (for licensed use) or to a collection of
bidders (for unlicensed use). In this auction, a number of individual
bidders all try to get the spectrum for their exclusive use and a
group of other bidders try to get the spectrum for their collective
use. The objective is to study these auctions and compare their
different properties.
For analyzing network performance issues, there can be great utility in having the capability to measure directly from the perspective of end systems. Because end systems do not provide any external programming interface to measurement functionality, obtaining this capability today generally requires installing a custom executable on the system, which can prove prohibitively expensive. In this work we leverage the ubiquity of web browsers to demonstrate the possibilities of browsers themselves offering such a programmable environment. We present Fathom, a Firefox extension that implements a number of measurement primitives that enable websites or other parties to program network measurements using JavaScript. Fathom is lightweight, imposing < 3.2% overhead in page load times for popular web pages, and often provides 1 ms timestamp accuracy. We demonstrate Fathom’s utility with three case studies: providing a JavaScript version of the Netalyzr network characterization tool, debugging web access failures, and enabling web sites to diagnose performance problems of their clients.
For high (N)-dimensional feature spaces, we consider detection of an unknown, anomalous class of samples amongst a batch of collected samples (of size T), under the null hypothesis that all samples follow the same probability law. Since the features which will best identify the anomalies are a priori unknown, several common detection strategies are: 1) evaluating atypicality of a sample (its p-value) based on the null distribution defined on the full N-dimensional feature space; 2) considering a (combinatoric) set of low order distributions, e.g., all singletons and all feature pairs, with detections made based on the smallest p-value yielded over all such low order tests. The first approach relies on accurate estimation of the joint distribution, while the second may suffer from increased false alarm rates as N and T grow. Alternatively, inspired by greedy feature selection commonly used in supervised learning, we propose a novel sequential anomaly detection procedure with a growing number of tests. Here, new tests are (greedily) included only when they are needed, i.e., when their use (on currently undetected samples) will yield greater aggregate statistical significance of (multiple testing corrected) detections than obtainable using the existing test cadre. Our approach thus aims to maximize aggregate statistical significance of all detections made up until a finite horizon. Our method is evaluated, along with supervised methods, for a network intrusion domain, detecting Zeus bot command-and-control (i.e., intrusion) packet flows embedded amongst (normal) Web flows. It is shown that judicious feature representation is essential for discriminating Zeus from Web. This work in collaboration with D.J. Miller and F. Kocak.
George Kesidis received his M.S. and Ph.D. in EECS from U.C. Berkeley in 1990 and 1992 respectively. He was a professor in the E&CE Dept of the University of Waterloo, Canada, from 1992 to 2000. Since 2000, he has been a professor of CSE and EE at the Pennsylvania State University. His research, including several areas of computer/communication networking and machine learning, has been primarily supported by NSERC of Canada, NSF and Cisco Systems URP. He served as the TPC co-chair of IEEE INFOCOM 2007 among other networking and cyber security conferences. He has also served on the editorial boards of the Computer Networks Journal, ACM TOMACS and IEEE Journal on Communications Surveys and Tutorials. Currently, he is an Intermittent Expert for the National Science Foundation's Secure and Trustworthy Cyberspace (SaTC) program. His home page is http://www.cse.psu.edu/~kesidis
Twitter produces several millions of short texts per hour. Monitoring information tendencies has become a key business. In particular, a content provider can detect in advance which movie will be popular and move it toward proxies before it is too late (i.e. before it causes network/server congestion). In this talk we present the analysis of short texts via joint complexity. The joint complexity of two texts is the number of distinct factors common to both texts. When the source models of the texts are close, the joint complexity is higher. This technique was first applied to DNA sequence analysis because it has a very low overhead. It can now be applied to short text analysis thanks to more accurate theoretical estimates. In particular we show new theoretical results when the sources that generate the texts are Markovian of finite order, a model that fits particularly well with text generation. Joint work with W. Szpankowski, D. Milioris, B. Berde.
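As a concrete (if naive) illustration, the joint complexity of two short texts can be computed by brute force; the suffix-tree machinery that makes this efficient at scale, and the Markovian analysis of the talk, are not reproduced here.

```python
# Naive sketch of joint complexity: the number of distinct factors (substrings)
# two texts have in common.  Quadratic enumeration is fine for tweet-length
# inputs; efficient implementations rely on suffix trees.
def factors(text):
    return {text[i:j] for i in range(len(text)) for j in range(i + 1, len(text) + 1)}

def joint_complexity(t1, t2):
    return len(factors(t1) & factors(t2))

print(joint_complexity("the movie was great", "that movie was so great"))
print(joint_complexity("the movie was great", "stock markets fell today"))
```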
In the current work we deal with the problem of base station cooperation in the downlink of infinite wireless cellular networks.
The positions of base stations are modeled by a Poisson point process. Each base station can choose to cooperate or not with exactly one
of its Delaunay neighbours in order to provide service to a user located within its cell. The cooperation protocol uses a variation of the so-called Willems’ encoder
and a fixed total transmission power per user is considered.
We analytically derive closed form expressions for the coverage probability and we determine the optimal cooperation zones in the network.
Numerical evaluation shows benefits in coverage, compared to the common cellular architecture. These however are not very high due to the
deterioration in SINR caused by increased outer-cell total interference. Joint work with F. Baccelli.
Many popular sites, such as Wikipedia and Tripadvisor, rely on public participation to gather information, a process known as crowd data sourcing. While this kind of collective intelligence is extremely valuable, it is also fallible, and policing such sites for inaccuracies or missing material is a costly undertaking. In this talk we will examine how database technology can be put to work to effectively gather information from the public, efficiently moderate the process, and identify questionable input with minimal human interaction. We will consider the logical, algorithmic, and methodological foundations for the management of large scale crowd-sourced data as well as the development of applications over such information.
Voting systems can be used in any situation where several entities are to make a decision together. However, the sincere vote may lead to a situation that is not a generalized Nash equilibrium: a group of electors can hide their sincere preferences in order to change the outcome to a candidate they prefer. In that case, we say that the situation is manipulable (i.e. susceptible to tactical voting). The Gibbard-Satterthwaite theorem (1973) states that for 3 candidates or more, all voting systems but dictatorship are vulnerable to manipulation. So we would like to know, amongst "reasonable" voting systems, which ones are manipulable with a probability as small as possible. We show that, under quite weak assumptions on the meaning of "reasonable", such optimal voting systems can be found in the class of systems that depend only on the electors' preorders of preferences over the candidates and meet the Condorcet criterion.
Shannon's fundamental bound for perfect secrecy states that the entropy of the secret message U cannot be larger than the entropy of the secret key R shared by the sender and the legitimate receiver. Massey gave an information-theoretic proof of this result, and the proof did not require U and R to be independent. By adding the extra assumption that I(U;R) = 0, we show a tighter bound on H(R) in this talk. Our bound states that the logarithm of the message sample size cannot be larger than the entropy of the secret key. We also consider the case where a perfect secrecy system is used multiple times. A new parameter, namely the expected key consumption, is defined and justified. We show the existence of a fundamental trade-off between the expected key consumption and the number of channel uses for transmitting a cipher-text. A coding scheme, which is optimal under certain conditions, is introduced.
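In symbols, with U the secret message, R the secret key and |U| the message sample size, the two bounds discussed above read as follows (a sketch of the statements only, not of their proofs):

```latex
% U: secret message, R: secret key, |\mathcal{U}|: message sample size, with I(U;R) = 0.
H(U) \le H(R)                % Shannon's bound for perfect secrecy
\log |\mathcal{U}| \le H(R)  % tightened bound presented in the talk
```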
While the Internet has succeeded far beyond expectations, the
success has also stretched its initial design assumptions. Since
applications operate in terms of data and more end points become
mobile, it becomes increasingly difficult and inefficient to satisfy
IP’s requirement of determining exactly where (at which IP address)
to find desired data. The Named Data Networking project aims to
carry the Internet into the future through a conceptually simple yet
transformational architecture shift, from today’s focus on where –
addresses and hosts – to what – the data that users and
applications care about. In this talk I will present the basic
design of NDN and our initial results.
Information is the distinguishing mark of our era, permeating every
facet of our lives. An ability to understand and harness information has
the potential for significant advances. Our current understanding of
information dates back to Claude Shannon’s revolutionary work in 1948,
resulting in a general mathematical theory of reliable communication
that not only formalized the modern digital communication and storage
principles but also paved the way for the Internet, DVDs and iPods of today.
While Shannon’s information theory has had profound impact, its application
beyond storage and communication poses foundational challenges. In 2010
the National Science Foundation established the Science & Technology
Center for the Science of Information to meet the new challenges posed by
the rapid advances in networking, biology and knowledge extraction.
Its mission is to advance science and technology through a new
quantitative understanding of the representation, communication and
processing of information in biological, physical, social and engineering
systems. Purdue University leads nine partner institutions: Berkeley,
Bryn Mawr, Howard, MIT, Princeton, Stanford, Texas A&M, UCSD, and UIUC
(cf. http://cacm.acm.org/magazines/2011/2/104389-information-theory-after-shannon/fulltext).
In this talk, after briefly reviewing main results of Shannon,
we attempt to identify some features of information
encompassing structural, spatio-temporal, and semantic facets of
information. We present two new results: One on a fundamental
lower bound for structural compression and a novel algorithm
achieving this lower bound for graphical structures.
Second, on the problem of deinterleaving Markov processes over
disjoint finite alphabets, which have been randomly
interleaved by a finite-memory switch.
Wojciech Szpankowski is Saul Rosen Professor of Computer Science and
(by courtesy) Electrical and Computer Engineering at Purdue University
where he teaches and conducts research in analysis of algorithms,
information theory, bioinformatics, analytic combinatorics, random structures,
and stability problems of distributed systems.
He received his M.S. and Ph.D. degrees in Electrical and Computer
Engineering from Gdansk University of Technology.
He held several Visiting
Professor/Scholar positions, including McGill University, INRIA, France,
Stanford, Hewlett-Packard Labs, Universite de Versailles, University of
Canterbury, New Zealand, Ecole Polytechnique, France, and the Newton Institute,
Cambridge, UK. He is a Fellow of IEEE, and the Erskine Fellow.
In 2010 he received the Humboldt Research Award.
He is the author of the book "Average Case Analysis of
Algorithms on Sequences" (John Wiley & Sons, 2001).
He has been a guest editor and an editor of technical
journals, including Theoretical Computer Science, the
ACM Transaction on Algorithms, the IEEE Transactions on
Information Theory, Foundation and Trends
in Communications and Information Theory,
Combinatorics, Probability, and Computing, and Algorithmica.
In 2008 he launched the interdisciplinary Institute for Science of
Information, and in 2010 he became the Director of the newly established NSF
Science and Technology Center for Science of Information.
Nearly three decades after it was first diagnosed, the persistently full buffer problem, recently exposed as part of bufferbloat, is still with us, and it is made increasingly critical by two trends. First, cheap memory and a "more is better" mentality have led to the inflation and proliferation of buffers. Second, dynamically varying path characteristics are much more common today and are the norm at the consumer Internet edge. Reasonably sized buffers become extremely oversized when link rates and path delays fall below nominal values.
The solution for persistently full buffers, AQM (active queue management), has been known for two decades but has not been widely deployed because of implementation difficulties and general misunderstanding about Internet packet loss and queue dynamics. Unmanaged buffers are more critical today since buffer sizes are larger, delay-sensitive applications are more prevalent, and large (streaming) downloads common. The continued existence of extreme delays at the Internet edge can impact its usefulness and hamper the growth of new applications.
This article aims to provide part of the bufferbloat solution, proposing an innovative approach to AQM suitable for today's Internet called CoDel (for Controlled Delay). This is a no-knobs AQM that adapts to changing link rates and is suitable for deployment and experimentation in Linux-based routers (as well as in silicon).
A version of this talk was given about a week earlier by Van Jacobson at IETF 84 (see http://recordings.conf.meetecho.com/Recordings/watch.jsp?recording=IETF84_TSVAREA&chapter). Van Jacobson's talk is a wonderful introduction to the core concepts of CoDel; Dave Taht's talk goes into more detail about the history of AQMs, fq_codel, and the ongoing research into the problem set of de-bufferbloating wireless and protocols such as BitTorrent. Further information about CoDel at: http://www.bufferbloat.net/projects/codel/wiki
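For reference, a much simplified sketch of the CoDel control law (target delay, interval, and drop spacing shrinking as interval/sqrt(count)) is given below; it is not the reference implementation and omits several details, such as the handling of nearly empty queues.

```python
# Simplified sketch of the CoDel control law: drop packets once the queueing
# delay ("sojourn time") has stayed above `TARGET` for at least `INTERVAL`,
# then space drops by INTERVAL / sqrt(count) until the delay falls back below
# target.  Not the reference implementation.
TARGET, INTERVAL = 0.005, 0.100          # seconds

class CoDelSketch:
    def __init__(self):
        self.first_above = None          # deadline set when sojourn first exceeds target
        self.dropping = False
        self.count = 0                   # drops in the current dropping episode
        self.next_drop = 0.0

    def on_dequeue(self, now, sojourn):
        """Return True if this packet should be dropped."""
        if sojourn < TARGET:
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above:
            self.dropping = True
            self.count = 1
            self.next_drop = now + INTERVAL / self.count ** 0.5
            return True
        if self.dropping and now >= self.next_drop:
            self.count += 1
            self.next_drop = now + INTERVAL / self.count ** 0.5
            return True
        return False
```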
In this talk we will first present what is a SINR diagram and why it is important to the design of efficient algorithms for wireless networks. Then we will discuss the reception zones of a wireless network in the SINR model with receivers that employ interference cancellation (IC). IC is a recently developed technique that allows a receiver to decode interfering signals, and cancel them from the received signal in order to decode its intended message. We first derive the important topological properties of the reception zones and their relation to high-order Voronoi diagrams and other geometric objects. We then discuss the computational issues that arise when seeking an efficient description of the zones. Our main fundamental result states that although potentially there are exponentially many possible cancellation orderings, and as a result, reception zones, in fact there are much fewer nonempty such zones. We prove a linear bound (hence tight) on the number of zones and provide a polynomial time algorithm to describe the diagram. Moreover, we introduce a novel parameter, the Compactness Parameter, which influences the tightness of our bounds. We then utilize these properties to devise a logarithmic time algorithm to answer point-location queries for networks with IC.
This work was published in the Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'12). Joint work with: Chen Avin, Asaf Cohen, Erez Kantor, Zvi Lotker, Merav Parter and David Peleg.
Yoram Haddad received his BSc, Engineer diploma and MSc (Radiocommunications) from SUPELEC in 2004 and 2005, and his PhD in computer science and networks from Telecom ParisTech in 2010. Since 2010 he has been a tenure-track senior lecturer (Assistant Professor) at the Jerusalem College of Technology (JCT) in Jerusalem, Israel. In parallel, since 2011 he has been a post-doctoral research associate at Ben-Gurion University (BGU) in Beer-Sheva, Israel.
Yoram’s main research interests are in the area of Wireless Networks and Algorithms for networks. He is specifically interested in energy efficient wireless deployment, Femtocell, modeling of wireless networks, wireless application to Intelligent Transportation Systems (ITS) and more recently Wireless Software Defined Networks (SDN).
Information systems security relies on the design of components whose purpose is to protect these systems against the various potential attacks. Today, many software and hardware security components are available, such as cryptographic protocols, public key infrastructures (PKI), firewalls, access controllers for operating systems and applications, intrusion detection systems (IDS) and anti-virus mechanisms. However, for these different security components to be effective, a security policy must be defined globally for the information system to be protected. A formal methodology must then be applied to configure these different components. Currently, no such methodology exists, and security administrators are forced to configure the different security components manually and separately. Beyond consistency problems, the risks of error are of two kinds: a configuration that is too restrictive, preventing authorized users from carrying out the activities they are responsible for, or a security policy that is too permissive, creating security holes. In this context, we will show how to provide innovative and effective solutions to the following problems: formal expression of security policies; deployment of security policies; analysis of security policies; administration of security policies; reaction to intrusions based on redeployment of the policy. We are also interested in new approaches for outsourcing and pooling data while preserving the requirements expressed in the security policy. When it is not the attribute values themselves that are sensitive but the associations between these values, the idea is to partition the data by breaking these associations and to encrypt certain data only when fragmentation is not sufficient. We will discuss how this approach complements various current proposals such as homomorphic encryption functions for performing computations (addition, multiplication) on encrypted data, as well as searchable encryption primitives that allow searching for the presence of keywords in encrypted documents.
Many applications (routers, traffic monitors, firewalls,
etc.) need to send and receive packets at line rate even on
very fast links. In this paper we present netmap, a novel
framework that enables commodity operating systems
to handle the millions of packets per second traversing
1..10 Gbit/s links, without requiring custom hardware or
changes to applications.
In building netmap, we identified and successfully reduced
or removed three main packet processing costs:
per-packet dynamic memory allocations, removed by
preallocating resources; system call overheads, amortized
over large batches; and memory copies, eliminated
by sharing buffers and metadata between kernel
and userspace, while still protecting access to device registers
and other kernel memory areas. Separately, some
of these techniques have been used in the past. The novelty
in our proposal is not only that we exceed the performance
of most previous work, but also that we provide
an architecture that is tightly integrated with existing operating
system primitives, not tied to specific hardware,
and easy to use and maintain.
netmap has been implemented in FreeBSD and Linux
for several 1 and 10 Gbit/s network adapters. In our prototype,
a single core running at 900 MHz can send or
receive 14.88 Mpps (the peak packet rate on 10 Gbit/s
links). This is more than 20 times faster than conventional
APIs. Large speedups (5x and more) are also
achieved on user-space Click and other packet forwarding
applications using a libpcap emulation library running
on top of netmap.
This work received the Best paper award at USENIX ATC 2012
Transport network infrastructures are soon to be replaced, moving from legacy SDH/SONET to packet-based technologies that can be seamlessly integrated with optical networks. We present a packet-based, carrier-class network architecture, system and communication method that facilitates collapsing multiple Internet layers into a transport element. This Carrier Ethernet Switch router is based on the principle of binary and source routing, leading to extremely low energy consumption, low latency, a small footprint, and support for emerging services such as mobile backhaul, cloud computing, metro transport and data centers. We discuss the conceptual design, implementation and analysis of this router as well as the future roadmap. We will also showcase a deployment in a real network and how this can benefit the larger telecommunication community.
Ashwin Gumaste is currently the Institute Chair Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Bombay. He is currently also a consultant to Nokia Siemens Networks, Munich where he works on optical access standardization efforts. From 2008-2012 he was also the J. R. Isaac Chair Assistant Professor. He was a Visiting Scientist with the Massachusetts Institute of Technology (MIT), Cambridge, USA in the Research Laboratory for Electronics from 2008 to 2010. He was previously with Fujitsu Laboratories (USA) Inc in the Photonics Networking Laboratory (2001-05). He has also worked in Fujitsu Network Communications R&D (in Richardson TX) and prior to that with Cisco Systems in the Optical Networking Group (ONG). His work on light-trails has been widely referred, deployed and recognized by both industry and academia. His recent work on Omnipresent Ethernet has been adopted by tier-1 service providers and also
resulted in the largest ever acquisition between any IIT and the industry. This has led to a family of transport products. Ashwin has 20 granted US patents and over 30 pending patent applications. Ashwin has published about 120 papers in refereed conferences and journals. He has also authored three books on broadband networks: DWDM Network Designs and Engineering Solutions (a networking bestseller), First-Mile Access Networks and Enabling Technologies, and Broadband Services: User Needs, Business Models and Technologies, for John Wiley. Owing to his many research achievements and contributions, Ashwin was awarded the Government of India DAE-SRC Outstanding Research Investigator Award in 2010 as well as the Indian National Academy of Engineering (INAE) Young Engineer Award (2010). He has served as Program Chair, Co-chair, Publicity chair and workshop chair for IEEE conferences and as Program Committee member for IEEE ICC, Globecom, OFC, ICCCN, Gridnets etc. Ashwin is
also a guest editor for IEEE Communications Magazine, IEEE Network and the founding Editor of the IEEE ComSoc ONT newsletter Prism. He is the Chair of the IEEE Communication Society’s Technical Committee on High Speed Networks (TCHSN) 2011-2013. He has been with IIT Bombay since 2005 where he convenes the Gigabit Networking Laboratory (GNL): www.cse.iitb.ac.in/gnl. The Gigabit Networking Laboratory has secured over 11 million USD in funding since its inception and has been involved in 4 major technology transfers to the industry.
For your thoughts to be useful, they must be enacted. This is true even for mundane thoughts like those for getting up out of your chair, leaving your home or office, coming to the room where this talk will be held, making your way to your seat, and settling in to hear about research on the planning and control of everyday actions. The research to be described will draw on evidence from neurophysiology, behavioral science, and computational modeling. A view that all these lines of evidence support is that goal postures are specified before movements are planned and performed. Motor control, you will hear, is more cognitively rich than some have realized.
See also: http://paris.sigchi.acm.org/
Geolocated social networks, that combine traditional social networking features with geolocation information, have grown tremendously over the last few years. Yet, very few works have looked at implementing geolocated social networks in a fully distributed manner, a promising avenue to handle the growing scalability challenges of these systems. In this talk, I will focus on georecommendation, and show that existing decentralized recommendation mechanisms perform in fact poorly on geodata. I will present a set of novel gossip-based mechanisms to address this problem, which we have captured in a modular similarity framework called Geology. The resulting platform is lightweight, efficient, and scalable, and I will illustrate its superiority in terms of recommendation quality and communication overhead on a real data set of 15,694 users from Foursquare, a leading geolocated social network.
This is joint work with Anne-Marie Kermarrec (INRIA Rennes, France) and Juan M. Tirado (Universidad Carlos III, Spain)
This paper considers large scale distributed content service
platforms, such as peer-to-peer video-on-demand systems.
Such systems feature two basic resources, namely storage
and bandwidth. Their efficiency critically depends on two
factors: (i) content replication within servers, and (ii) how
incoming service requests are matched to servers holding requested
content. To inform the corresponding design choices,
we make the following contributions.
We first show that, for underloaded systems, so-called proportional
content placement with a simple greedy strategy
for matching requests to servers ensures full system efficiency
provided storage size grows logarithmically with the system
size. However, for constant storage size, this strategy undergoes
a phase transition with severe loss of efficiency as
system load approaches criticality.
To better understand the role of the matching strategy in
this performance degradation, we characterize the asymptotic
system efficiency under an optimal matching policy.
Our analysis shows that – in contrast to greedy matching –
optimal matching incurs an inefficiency that is exponentially
small in the server storage size, even at critical system
loads. It further allows a characterization of content replication
policies that minimize the inefficiency. These optimal
policies, which differ markedly from proportional placement,
have a simple structure which makes them implementable
in practice.
On the methodological side, our analysis of matching performance
uses the theory of local weak limits of random
graphs, and highlights a novel characterization of matching
numbers in bipartite graphs, which may both be of independent
interest.
ACM SIGMETRICS'12 Preview Talk, joint work by Mathieu Leconte (Technicolor - INRIA), Marc Lelarge (INRIA - Ecole Normale Supérieure) and Laurent Massoulié (Technicolor)
In this work we address the problem of uncoordinated highway traffic. We first identify the main causes of the capacity drop, namely high traffic demand and inadequate driver reaction. In the past, traffic and user behavior have been accurately described by cellular automata (CA) models. In this work we extend the CA model to deal with highway traffic fluctuations and jams. Specifically, the model incorporates the communication layer between vehicles. The model thus enables us to study the impact of inter-vehicular communications, and in particular the delivery of critical and timely upstream traffic information, on driver reaction. Based on the newly available traffic metrics, we propose an Advanced Driver Assistance System (ADAS) that suggests non-intuitive speed reductions in order to avoid the formation of so-called phantom jams. The results show that using such a system considerably increases the overall traffic flow, reduces travel time and avoids unnecessary slowdowns.
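For background, a minimal single-lane Nagel-Schreckenberg update, the kind of CA traffic model this work builds on, is sketched below; the inter-vehicle communication layer and the ADAS logic added in the talk are not modelled here.

```python
# Minimal single-lane Nagel-Schreckenberg cellular automaton on a ring road.
# The communication layer and ADAS speed advice of the talk are not included.
import random

def step(positions, speeds, road_len, v_max=5, p_slow=0.3):
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max, gap)           # accelerate, but keep a safe gap
        if v > 0 and random.random() < p_slow:       # random slowdown (driver reaction)
            v -= 1
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_len for i in range(len(positions))]
    return new_positions, new_speeds

random.seed(1)
road_len, n_cars = 100, 20
pos = random.sample(range(road_len), n_cars)
spd = [0] * n_cars
for _ in range(100):
    pos, spd = step(pos, spd, road_len)
print("mean speed after 100 steps:", sum(spd) / n_cars)
```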
Input-queued (IQ) switches are one of the reference
architectures for the design of high-speed packet switches. Classical
results in this field refer to the scenario in which the whole
switch transfers the packets in a synchronous fashion, in phase
with a sequence of fixed-size timeslots, selected to transport a
minimum-size packet. However, for switches with large number
of ports and high bandwidth, maintaining an accurate global
synchronization and transferring all the packets in a synchronous
fashion is becoming more and more challenging. Furthermore,
variable size packets (as in the traffic present in the Internet)
require rather complex segmentation and reassembly processes
and some switching capacity is lost due to partial filling of
timeslots. Thus, we consider a switch able to natively transfer
packets in an asynchronous fashion thanks to a simple and
distributed packet scheduler. We investigate the performance of
asynchronous IQ switches and show that, despite their simplicity,
their performance is comparable to or even better than that of
synchronous switches. These partly unexpected results highlight
the great potentiality of the asynchronous approach for the design
of high-performance switches.
We present a first evaluation of the potential of an asynchronous distributed computation associated with the recently proposed approach, D-iteration: the D-iteration is a fluid-diffusion-based iterative method, which has the advantage of being naturally distributable. It exploits a simple, intuitive decomposition of the matrix-vector product into elementary operations of fluid diffusion associated with a new algebraic representation. We show through experiments on real datasets how much this approach can improve computation efficiency when parallelism is applied: with the proposed solution, when the computation is distributed over K virtual machines (PIDs), the memory size to be handled by each virtual machine decreases linearly with K and the computation speed increases almost linearly with K, with a slope becoming closer to one as the number N of linear equations to be solved increases.
More than 2% of global carbon emissions can be attributed to ICT.
These carbon emissions are expected to increase by a compounded annual
growth of 6% until at least 2020. To reduce the level of carbon emissions
by reducing energy consumed, we seek for ICT equipment to both reduce its peak power
use and achieve energy-proportional operation - where the power consumed is proportional
to load and not capacity. Broadly speaking, there are three ways to reduce energy use
at the system level: by substituting, consolidating and scheduling. As an example of
substitution, I will describe work in developing proxies for a range of network protocols
and applications including SIP phones. As an example of scheduling, I will describe
work with coalescing of packets in Energy Efficient Ethernet (EEE) and of HTTP requests
for hybrid web servers. I will end the talk with a discussion of some open problems and
next steps for further reducing energy consumption of both ICT and non-ICT systems focusing
on the role of networks.
We pose a few resource management problems in cellular networks, and propose solutions based on game theory. First, we investigate cooperation
among cellular service providers. We consider networks in which
communications involving different base stations do not interfere. If
providers jointly deploy and pool their resources, such as spectrum and
BSs, and agree to serve each other's customers, their aggregate payoff
substantially increases. The potential of such cooperation can, however,
be realized only if the providers intelligently determine who they would
cooperate with, how they would deploy and share their resources, and how
they would share the aggregate payoff. We assume that the providers can
arbitrarily share the aggregate payoff. Then, developing a rational basis
for payoff sharing is imperative for the stability of the coalition. We
address these issues by formulating cooperation using the theory of
transferable utility coalitional games. I will briefly discuss two other
resource allocation problems in cellular networks which are in the context
of joint power control, BS association and BS placement in the uplinks of
cellular networks.
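As a toy illustration of payoff sharing in a transferable-utility coalitional game, the sketch below computes Shapley values for a made-up three-provider characteristic function; the talk's own model and sharing rule may of course differ.

```python
# Toy sketch of payoff sharing via the Shapley value, one classical solution
# concept for transferable-utility coalitional games.  The characteristic
# function below is a made-up 3-provider example, not the model of the talk.
from itertools import permutations

def shapley(players, v):
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            values[p] += v(coalition | {p}) - v(coalition)   # marginal contribution
            coalition = coalition | {p}
    return {p: val / len(perms) for p, val in values.items()}

def v(coalition):
    # Hypothetical aggregate payoffs from pooling spectrum and base stations.
    table = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
             frozenset("AB"): 9, frozenset("AC"): 8, frozenset("BC"): 6, frozenset("ABC"): 14}
    return table[frozenset(coalition)]

print(shapley(["A", "B", "C"], v))
```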
As we are browsing the web we are tracked by ad-networks, social networks and other third parties that build (supposedly) anonymized and incomplete records of users’ browsing history. This talk will first discuss the threats resulting from web-tracking by describing how these records can be de-anonymized. Then, we will overview existing solutions to prevent tracking with a focus on the work done in the W3C Tracking Protection group. We will also quickly overview privacy-preserving mechanisms that could be used to support behavioral targeting.
In the last part of the talk we will discuss another type of threat: photo tagging on social networks. We will quickly overview the proposed mechanism to protect privacy of pictured persons. Finally we will present Photo-Tagging Preference Enforcement (Photo-TaPE) which helps users to control how their pictures are published on the web.
In this paper, we present a software-based traffic classification engine running on off-the-shelf multi-core hardware, able to process in real time aggregates of up to 15 million packets per second over a single 10Gbps interface.
This significant advance with respect to the classification rates achievable with the current state of the art is possible due to: (i) the use of PacketShader to efficiently move batches of packet headers from the NIC to the main CPU; (ii) the use of lightweight statistical classification techniques exploiting the sizes of the first few packets of a flow; (iii) a careful tuning of several aspects of the software application and of the hardware environment.
Using both real Tier-1 traces and synthetic traffic, we demonstrate that traffic classification of more than 10Gbps traffic aggregates is feasible with open-source software on common hardware.
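A hedged sketch of ingredient (ii), classification from the sizes of the first few packets of a flow, is shown below on purely synthetic data; the engine described in the paper uses its own features, models and implementation.

```python
# Sketch of lightweight statistical classification using only the sizes of the
# first few packets of a flow.  The flows and classes below are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
N = 1000
# Synthetic "flows": sizes of the first 4 packets, two invented traffic classes.
web  = rng.normal([300, 1200, 1400, 1400], 80, size=(N, 4))
voip = rng.normal([120, 160, 160, 160], 20, size=(N, 4))
X = np.vstack([web, voip])
y = np.array(["web"] * N + ["voip"] * N)

clf = DecisionTreeClassifier(max_depth=3).fit(X[::2], y[::2])   # train on half
print("accuracy on held-out flows:", clf.score(X[1::2], y[1::2]))
```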
We propose a novel manner to perform optical packet switching which does not require any prior electronic
signaling or header processing. The proposed packet switching
scheme can be implemented by means of a new optical device
able to handle packet contention in the optical domain by
implementing a first-come first-served policy. This simple device
is based exclusively on off-the-shelf components and can be used
to build optical packet switches, that we refer to as switch-
combiners. The latter can in turn be used to build all-optical
packet networks. Using variants of the Engset model, we analyse
the impact of network load on the performance of data traffic.
The results are illustrated on the practically interesting cases of
access and data center networks. Joint work with T. Bonald, D. Cuda and L. Noirie
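For background, one textbook form of the Engset loss formula (call congestion for N sources sharing m servers) is sketched below; the paper develops its own Engset-model variants, which are not reproduced here.

```python
# Textbook Engset loss formula (call congestion) for N finite sources sharing
# m servers, with `a` the offered traffic per idle source.  Standard background
# material, not the specific variant developed in the paper.
from math import comb

def engset_call_congestion(N, m, a):
    num = comb(N - 1, m) * a ** m
    den = sum(comb(N - 1, k) * a ** k for k in range(m + 1))
    return num / den

print(engset_call_congestion(N=20, m=5, a=0.2))
```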
20120404, 2pm-3pm @ LINCS, Salle du conseil – Griffin, Timothy G. (University of Cambridge): Routing in Equilibrium
Some path problems cannot be modelled
using semirings because the associated
algebraic structure is not distributive. Rather
than attempting to compute globally optimal
paths with such structures, it may be sufficient
in some cases to find locally optimal paths —
paths that represent a stable local equilibrium.
For example, this is the type of routing system that
has evolved to connect Internet Service Providers
(ISPs) where link weights implement
bilateral commercial relationships between them.
Previous work has shown that routing equilibria can
be computed for some non-distributive algebras
using algorithms in the Bellman-Ford family.
However, no polynomial time bound was known
for such algorithms. In this talk, we show that
routing equilibria can be computed using
Dijkstra’s algorithm for one class of non-distributive
structures. This provides the first
polynomial time algorithm for computing locally
optimal solutions to path problems.
This is joint work with João Luís Sobrinho (http://www.lx.it.pt/~jls/)
presented at the 19th International Symposium on Mathematical
Theory of Networks and Systems (MTNS 2010).
You can find the paper here:
http://www.cl.cam.ac.uk/~tgg22/publications/routing_in_equilibrium_mtns_2010.pdf
Timothy G. Griffin is currently on sabbatical in Paris at
PPS/INRIA-pi-r2, http://www.pps.jussieu.fr
This file http://www.cl.cam.ac.uk/~tgg22/metarouting/rie-1.0.v
is a first cut at formalizing these results using Coq (http://coq.inria.fr)
with ssreflect (http://www.msr-inria.inria.fr/Projects/math-components)
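To make the algebraic setting concrete, here is a hedged sketch of a Dijkstra-style computation parameterized by an 'extend' operation and a preference relation on path weights, instead of the usual (min, +). It is only a generic illustration of computing with routing algebras; the precise class of structures and the correctness argument of the paper are not captured here.

    # Generic Dijkstra-style search over a routing algebra: path weights are
    # combined with `extend` and compared with the preference relation `better`.
    # With extend=add and better=less-than this is ordinary shortest paths; other
    # choices model other path problems (widest path, most-reliable path, ...).
    # Illustrative sketch only; not the algorithm of the paper.
    def algebraic_dijkstra(graph, source, extend, better, identity):
        # graph: {node: [(neighbor, edge_weight), ...]}
        best = {source: identity}
        visited = set()
        while True:
            # Pick the most preferred unvisited node (O(n^2) variant for clarity).
            candidates = [u for u in best if u not in visited]
            if not candidates:
                return best
            u = candidates[0]
            for v in candidates[1:]:
                if better(best[v], best[u]):
                    u = v
            visited.add(u)
            for v, ew in graph.get(u, []):
                cand = extend(best[u], ew)
                if v not in best or better(cand, best[v]):
                    best[v] = cand

    g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    # Widest-path (max-min) algebra: extend = min, better = greater-than.
    print(algebraic_dijkstra(g, "a",
                             extend=lambda p, e: min(p, e),
                             better=lambda x, y: x > y,
                             identity=float("inf")))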
The Information-Centric Networking (ICN) paradigm is expected to be
one of the major innovations of the Future Internet.
An ICN system can be characterized by some key components like: (i)
the content-centric request/reply paradigm for data
distribution, (ii) route-by-name operations, and (iii) in-network
caching.
A crucial problem for all ICN solutions is to identify strategies for
experimenting with them in a real networking environment and for their actual
deployment in operating network infrastructures.
Software Defined Networking (SDN), which is another topic currently
attracting increasing attention, may represent a valid solution to
such a problem.
SDN separates the forwarding operations from the network control
operations, which may be defined dynamically through software. In this
way, it is extremely simple to introduce novel networking solutions
in operating communication infrastructures, thus increasing network
"evolvability".
We are currently implementing and testing an ICN approach, named CONET
(COntent NETwork), in OFELIA – a pan-European SDN network platform,
based on the OpenFlow technology.
This talk will cover our motivations, our expectations and our
frustrations; in other words the lessons we are learning in such a
process.
The aim of this paper is to present the recently proposed fluid-diffusion-based algorithm in the general context of the matrix inversion problem associated with the Gauss-Seidel method. We explain the simple intuitions behind this diffusion method and how it can outperform existing methods.
We then present some theoretical problems associated with this representation as open research problems. We also illustrate some connected problems such as the graph transformation and the PageRank problem.
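For context, the classical Gauss-Seidel sweep to which the talk relates the diffusion view is recalled below; this is the standard textbook iteration for solving A x = b, not the fluid diffusion algorithm itself.

    # Standard Gauss-Seidel iteration for A x = b: each sweep updates x[i] in
    # place using the latest available values of the other components.
    # (Textbook method shown for context, not the talk's diffusion algorithm.)
    import numpy as np

    def gauss_seidel(A, b, sweeps=50):
        n = len(b)
        x = np.zeros(n)
        for _ in range(sweeps):
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
        return x

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 5.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(gauss_seidel(A, b), np.linalg.solve(A, b))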
In order to represent the set of transmitters simultaneously
accessing a wireless network using carrier sensing based
medium access protocols, one needs tractable point processes
satisfying certain exclusion rules. Such exclusion rules forbid the
use of Poisson point processes within this context. It has been
observed that Matern point processes, which have been advocated
in the past because of their exclusion based definition, are rather
conservative within this context. The present paper confirms that
it would be more appropriate to use the point processes induced
by the Random Sequential Algorithm in order to describe such
point patterns. It also shows that this point process is in fact
as tractable as the Matern model. The generating functional of
this point process is shown to be the solution of a differential
equation, which is the main new mathematical result of the
paper. In comparison, no equivalent result is known for the
Matern hard-core model. Using this differential equation, a new
heuristic method is proposed, which leads to simple bounds and
estimates for several important network performance metrics.
These bounds and estimates are evaluated by Monte Carlo
simulation. Joint work with Francois Baccelli
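A quick way to build intuition for the Random Sequential Algorithm point process discussed above is to simulate it: candidate points arrive uniformly in a window and are kept only if they respect the exclusion radius. The sketch below is a plain Monte Carlo illustration, not the generating-functional analysis of the paper.

    # Monte Carlo sketch of a random-sequential point pattern in the unit square:
    # points arrive uniformly at random and are accepted only if no previously
    # accepted point lies within the exclusion radius r. Parameters are made up.
    import random, math

    def rsa_sample(n_candidates, r, seed=0):
        random.seed(seed)
        accepted = []
        for _ in range(n_candidates):
            x, y = random.random(), random.random()
            if all(math.hypot(x - px, y - py) >= r for px, py in accepted):
                accepted.append((x, y))
        return accepted

    points = rsa_sample(n_candidates=5000, r=0.05)
    print(len(points), "points accepted")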
This paper studies the performance of Mobile Ad hoc Networks (MANETs) when the nodes, which form a
Poisson point process, selfishly choose their Medium Access Probability (MAP). We consider goodput and delay
as the performance metrics that each node is interested in optimizing, taking into account the transmission energy
costs. We introduce a pricing scheme based on the transmission energy requirements and compute the symmetric
Nash equilibria of the game in closed form. It is shown that by appropriately pricing the nodes, the selfish behavior
of the nodes can be used to achieve the social optimum at equilibrium. The Price of Anarchy is then analyzed for
these games. For the game with delay based utility, we bound the price of anarchy and study the effect of the price
factor. For the game with goodput based utility, it is shown that the Price of Anarchy is infinite at the price factor that
achieves the global optimum.
joint work with E. Altman and F. Baccelli. Full paper is available at http://arxiv.org/pdf/1112.3741.pdf
Users today connect to the Internet everywhere - from home, work, airports, friend’s homes, and more. This paper characterizes how the performance of networked applications varies across networking environments. Using data from a few dozen end-hosts, we compare the distributions of RTTs and download rates across pairs of environments. We illustrate that for most users the performance difference is statistically significant. We contrast the influence of the application mix and environmental factors on these performance differences. Joint work with Oana Goga, Renata Teixeira, Jaideep Chandrashekar and Nina Taft
Content-centric networking (CCN) brings a
paradigm shift in the present Internet communication model
by addressing named-data instead of host locations. With
respect to TCP/IP, the transport model is connectionless with
a unique endpoint at the receiver, making the retrieval process
natively point-to-multipoint. Another salient feature of CCN is
the possibility to embed storage capabilities into the network,
adding a new dimension to the transport problem. The focus
of this work is on the design of a receiver-driven Interest
control protocol (ICP) for CCN, whose definition is, to the best of our
knowledge, still lacking in the literature. ICP realizes window-based
Interest flow control, achieving full efficiency and fairness
under a proper parameter setting. In this paper, we provide
an analytical characterization of average rate, expected data
transfer delay and queue dynamics in steady state on a single
and multi-bottleneck network topology. Our model accounts
for the impact of on-path caches. Protocol performance is also
assessed via packet-level simulations and design guidelines are
drawn from previous analysis. Joint work with G. Carofiglio and L.Muscariello
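ICP's precise rules are given in the paper; as a rough illustration of what a receiver-driven, window-based Interest controller looks like, here is a generic AIMD window update driven by Data arrivals and timeouts. The class name and constants are illustrative, not ICP's actual parameters.

    # Generic AIMD sketch of a receiver-driven Interest window (not ICP's exact
    # dynamics): the window of outstanding Interests grows additively on each
    # Data packet received and is cut multiplicatively on a retransmission timeout.
    class InterestWindow:
        def __init__(self, w_init=1.0, w_max=100.0, alpha=1.0, beta=0.5):
            self.w, self.w_max, self.alpha, self.beta = w_init, w_max, alpha, beta

        def on_data(self):
            # Additive increase, spread over a window's worth of Data packets.
            self.w = min(self.w_max, self.w + self.alpha / self.w)

        def on_timeout(self):
            # Multiplicative decrease.
            self.w = max(1.0, self.w * self.beta)

    win = InterestWindow()
    for _ in range(200):
        win.on_data()
    win.on_timeout()
    print(round(win.w, 2))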
In this work, we study the caching performance of
Content Centric Networking (CCN), with special emphasis on the
size of individual CCN router caches. Specifically, we consider
several graph-related centrality metrics (e.g., betweenness, closeness,
stress, graph, eccentricity and degree centralities) to allocate
content store space heterogeneously across the CCN network, and
contrast the performance to that of a homogeneous allocation.
To gather relevant results, we study CCN caching performance
under large cache sizes (individual content stores of 10 GB),
realistic topologies (up to 60 nodes), and a YouTube-like Internet
catalog (10^8 files for 1 PB of video data). A thorough simulation
campaign allows us to conclude that (i) the gain brought by
content store size heterogeneity is very limited, and that (ii) the
simplest metric, namely degree centrality, already proves to be
a "sufficiently good" allocation criterion.
On the one hand, this implies rather simple rules of thumb for
the content store sizing (e.g., "if you add a line card to a CCN
router, add some content store space as well"). On the other
hand, we point out that technological constraints, such as the
line-speed operation requirement, may however limit the applicability
of degree-based content store allocation. Joint work with D. Rossi
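The simplest criterion above, degree centrality, amounts to giving each router a share of the total storage budget proportional to its degree. A hedged sketch using networkx follows, with a made-up topology and budget.

    # Allocate a total content-store budget across CCN routers proportionally to
    # their degree centrality (the simple criterion found to work well above).
    # Topology and budget are illustrative, not those of the paper.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=60, m=2, seed=1)   # 60-node toy topology
    total_storage_gb = 60 * 10                        # e.g. 10 GB per node on average

    deg = dict(G.degree())
    total_deg = sum(deg.values())
    allocation = {v: total_storage_gb * d / total_deg for v, d in deg.items()}

    print(sorted(allocation.items(), key=lambda kv: -kv[1])[:5])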
We introduce a rate-based congestion control mechanism for Content-Centric Networking (CCN). It builds on the fact that one Interest retrieves at most one Data packet. Congestion can occur when aggregate conversations arrive in excess and fill up the transmission queue of a CCN router. We compute the available capacity of each CCN router in a distributed way in order to shape the Interest rate of its conversations and therefore dynamically adjust their Data rate and transmission buffer occupancy. We demonstrate the convergence properties of this Hop-by-hop Interest Shaping mechanism
(HoBHIS) and provide a performance analysis based on various scenarios
using our ns-2 simulation environment. Joint work with Serge Fdida.
One of the defining properties of small worlds is the prevalence of short paths connecting node pairs. Unfortunately, as a result the usual notion of distance is not particularly helpful in distinguishing neighborhoods in such graphs. This is the case, for example, when analyzing the interdomain routing system of the Internet. We describe a motivating problem that requires a finer-grained notion of distance. The problem is quite simple to state: how can any given network operator in the Internet determine which paths pass through its network. Surprisingly, the nature of Internet routing makes this question rather hard to answer. To address this problem, we define a new distance metric on graph nodes. This metric has useful and interesting properties: it is easy to compute and understand, it can be used to sharply distinguish neighborhoods in networks, and it remains useful even in small-world networks. We show how we use this metric to address our motivating problem, and more generally how it can be used for visualization and dimensionality reduction of complex networks.
It has previously been shown that the combined use of fair queuing and admission control would allow
the Internet to provide satisfactory quality of service for both streaming and elastic flows without explicitly
identifying traffic classes. In this paper we discuss the design of the required measurement-based
admission control (MBAC) scheme. The context differs from that of previous work on MBAC in that
there is no prior knowledge of flow characteristics and there is a twofold objective: to maintain
adequate throughput for elastic flows and to ensure low packet latency for any flow whose peak rate is
less than a given threshold. In this talk, we consider the second objective assuming realistically that most
elastic and streaming flows are rate limited. We introduce a MBAC algorithm and evaluate its
performance by simulation under different stationary traffic mixes and in a flash crowd scenario. The
algorithm is shown to offer a satisfactory compromise between flow performance and link utilization. Joint work with Sara Oueslati, James Roberts,
23rd International Teletraffic Congress (ITC 2011), San Francisco (CA), Sep 6-8, 2011
BitTorrent, one of the most widely used P2P applications for file sharing, recently got rid of TCP by introducing an application-level congestion control protocol named uTP. The aim of this new protocol is to efficiently use the available link capacity, while minimizing its interference with the rest of the user traffic (e.g., Web, VoIP and gaming) sharing the same access bottleneck.
In this paper we perform an experimental study of the impact of uTP on the torrent completion time, the metric that best captures the user experience. We run BitTorrent applications in a flash crowd scenario over a dedicated cluster platform, under both homogeneous and heterogeneous swarm populations. Experiments show that all-uTP swarms have shorter torrent download times than all-TCP swarms. Interestingly, at the same time, we observe that even shorter completion times can be achieved under mixtures of TCP and uTP traffic, as in the default BitTorrent settings. Joint work with D. Rossi (Telecom ParisTech), A. Rao (INRIA) and A. Legout (INRIA)
Network measurement practitioners increasingly focus their interest on understanding and debugging home networks. The Universal Plug and Play (UPnP) technology holds promise as a highly efficient way to collect and leverage measurement data and configuration settings available from UPnP-enabled devices found in home networks. Unfortunately, UPnP proves less available and reliable than one would hope. In this paper, we explore the usability of UPnP as a means to measure and characterize home networks. We use data from 120,000 homes, collected with the HomeNet Profiler and Netalyzr troubleshooting suites. Our results show that in the majority of homes we could not collect any UPnP data at all, and when we could, the results were frequently inaccurate or simply wrong. Whenever UPnP-supplied data proved accurate, however, we demonstrate that UPnP provides an array of useful measurement techniques for inferring home network traffic and losses, for identifying home gateway models with configuration or implementation issues, and for obtaining ground truth on access link capacity.
Joint work with Renata Teixeira (UPMC Sorbonne Universites and CNRS), Martin May (Technicolor), and Christian Kreibich (ICSI)
The spread of residential broadband Internet access is raising the question of how to measure Internet speed. We argue that available bandwidth is a key metric of access link speed. Unfortunately, the performance of available bandwidth estimation tools has rarely been tested from hosts connected to residential networks. This paper compares the accuracy and overhead of state-of-the-art available bandwidth estimation tools from hosts connected to commercial ADSL and cable networks. Our results show that, when using default settings, some tools underestimate the available bandwidth by more than 60%. We demonstrate using controlled testbeds that this happens because current home gateways have a limited packet forwarding rate. Joint work with Renata Teixeira (UPMC Sorbonne Universite and CNRS, LIP6, Paris, France)
Our work compares local and wide-area traffic from end-hosts connected to different home and work networks. We base our analysis on network and application traces collected from 47 end-hosts for at least one week. We compare traffic patterns in terms of number of connections, bytes, duration, and applications. Not surprisingly, wide-area traffic dominates local traffic for most users. Local connections are often shorter and smaller than Internet connections. Moreover, we find that name services (DNS) and network file systems are the most common local applications, whereas web surfing and P2P, which are the most popular applications in the wide-area, are not significant locally. Joint work with Fabian Schneider (NEC Laboratories Europe, Heidelberg, Germany), Renata Teixeira (UPMC Sorbonne Universite and CNRS, LIP6, Paris, France)
We propose and analyze a class of distributed algorithms performing the joint optimization of radio resources in heterogeneous cellular networks made of a juxtaposition of macro and small cells. We see that within this context, it is essential to use algorithms able to simultaneously solve the problems of channel selection, user association and power control. In such networks, the unpredictability of the cell and user patterns also requires self-optimized schemes. The proposed solution is inspired from statistical physics and is based on Gibbs sampler. It can be implemented in a distributed way and nevertheless achieves minimal system-wide potential delay. Simulation results have shown its effectiveness.
Today’s Internet architecture, nearly 40 years old now, is grounded in a model of host-to-host communication. More recently, a number of researchers have begun to focus on Content Networking - a model in which host-to-content (rather than host-to-host) interaction is the norm. Here, content distribution and retrieval, rather than host-to-host packet delivery, is the core function supported in each and every network node. A central component of proposals for such content delivery is the routing of content to requestors through a large-scale interconnected network of caches.
In this talk we focus on this cache network. We begin with a quick overview of Content Networking. We then describe Breadcrumbs - a simple content caching, location, and routing system that uses a small amount of information regarding cache history/routing in a simple, best-effort approach towards caching. In the second part of this talk we consider the broad challenge of analyzing networks of interconnected caches. We describe an iterative fixed-point algorithm for approximating cache network performance, evaluate the accuracy of the approximation, and identify the sources of approximation error. We also consider the steady state behavior of cache networks. We demonstrate that certain cache networks are non-ergodic in that their steady-state characterization depends on the initial state of the system. We describe sufficient conditions (based on topology, admission control, and cache replacement policy) for ergodicity and ergodicity equivalence classes among policies. Last, we describe current work on developing a network calculus for cache network flows.
Joint work with Elisha Rosensweig, Daniel Menasche, Don Towsley
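The talk's fixed-point approximation has its own per-cache model; purely as an illustration of the style of computation, the sketch below chains the well-known Che approximation for LRU caches along a two-cache tandem, feeding the miss stream of the first cache (assumed independent) into the second. Catalog size, popularity and cache sizes are made up.

    # Illustrative two-cache tandem analysis in the spirit of cache-network
    # fixed points (NOT the algorithm of the talk): each LRU cache is modelled
    # with the Che approximation, and the miss stream of the first cache is fed,
    # assumed independent, to the second.
    import numpy as np
    from scipy.optimize import brentq

    def che_hit_prob(rates, cache_size):
        # Characteristic time T solves sum_i (1 - exp(-rate_i * T)) = cache_size.
        f = lambda T: np.sum(1.0 - np.exp(-rates * T)) - cache_size
        T = brentq(f, 1e-9, 1e9)
        return 1.0 - np.exp(-rates * T)

    n = 1000
    popularity = 1.0 / np.arange(1, n + 1) ** 0.8          # Zipf-like request rates
    popularity /= popularity.sum()

    hit1 = che_hit_prob(popularity, cache_size=50)
    miss_rates = popularity * (1.0 - hit1)                  # stream offered to cache 2
    hit2 = che_hit_prob(miss_rates, cache_size=50)

    print("overall hit ratio:", float(np.sum(popularity * (hit1 + (1 - hit1) * hit2))))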
In this talk, I will present our recent work on top-k search in social tagging systems, also known as folksonomies (popular examples include Del.icio.us, StumbleUpon or Flickr). The general setting is the following:
- Users form a weighted social network, which may reflect friendship, similarity, trust, etc.
- Items from a public pool of items (e.g., URLs, blogs, photos, documents) are tagged by users with keywords, driven by various motivations (description, classification, to facilitate later retrieval, sociality).
- Users search for items having certain tags.
Going beyond the classic search paradigm where data is decoupled from the users querying it, users can now act both as producers and seekers of information. Therefore, finding the most relevant items in response to a query should be done in a network-aware manner:
items tagged by users who are closer (more similar) to the seeker should be given more weight than items tagged by distant users. We propose an algorithm that has the potential to scale to current applications. We describe how a key aspect of the problem,
which is accessing the closest or most relevant users for a given seeker, can be done on-the-fly (without any pre-computations) for several possible choices - arguably the most natural ones - of proximity computation in a social network. Based on this, our top-k algorithm
is sound and complete, and is instance optimal in the case when the search relies exclusively on the social weight of tagging actions. To further improve response time, we then consider approximate techniques. Extensive experiments on real-world data show that these can
bring significant benefit, without sacrificing precision. New issues and directions for future research will also be discussed if time allows.
Network testbeds strongly rely on virtualization, which
allows the simultaneous execution of multiple protocol stacks but
also increases the management and control burden. This talk
presents a system to control and manage virtual networks based
on the Xen platform. The goal of the proposed system is to
assist network administrators to perform decision making in this
challenging virtualized environment. The system management
and control tasks consist of defining virtual networks, turning
on, turning off, migrating virtual routers, and monitoring the
virtual networks within a few mouse clicks, thanks to a user-friendly
graphical interface. The administrator can also perform high-level
decisions, such as redefining the virtual network topology
by using the plane-separation and loss-free live migration functionality,
or saving energy by shutting down physical routers. Performance
tests show that the system has a low response time.
Miguel Elias Mitre Campista was born in Rio de Janeiro, Brazil, on May 8th, 1980. He received the Telecommunications Engineer degree from the Fluminense Federal University (UFF), Rio de Janeiro, Brazil, in 2003 and the M.Sc. and D.Sc. degrees in Electrical Engineering from the Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil, in 2005 and 2008, respectively. Currently, Miguel is with GTA Laboratory in COPPE/UFRJ.
His major research interests are in multihop wireless networks, quality of service, wireless routing, wireless mesh networks, and home networks.
The move towards a future Internet is today hindered by the mismatch between the host-oriented model, at the
foundation of the current network architecture, and the dominant content-oriented usage, centered on data dissemination
and retrieval. Content-centric networking (CCN) brings a paradigm shift in the present Internet communication model by
addressing named-data instead of host locations.
Important features of such networks are the availability of built-in network storage and of receiver-driven chunk level
transport, whose interaction significantly impacts overall system and user performance.
In the talk, I will focus on the performance evaluation of CCN networks and present an analytical model of bandwidth
and storage sharing under fairly general assumptions on total demand, topology, content popularity and limited network resources.
Further, an overview of the ongoing activities and the main research challenges in this area will be provided.
We consider Fleming-Viot processes having the following dynamics: N particles move independently according to the dynamics of a subcritical branching process until they hit 0, at which point they instantaneously and uniformly choose the position of one of the other particles. We first establish a coupling between the FV processes (associated to any one-dimensional dynamics) and multitype branching processes. This allows us to prove convergence of scaled versions of the FV processes and ergodicity for fixed N. Using large deviations estimates for subcritical branching processes, this coupling further allows us to obtain useful drift inequalities for the maximum of the Fleming-Viot process. These inequalities imply in turn tightness of the family of empirical measures under the stationary measure of the FV process. Finally, we prove a selection principle: the empirical measures converge to the extremal quasi-stationary measure of the branching process when N tends to infinity.
As the number of scientists and scientific publications is increasing fast, an eliminatory preliminary phase is often necessary to filter the possible candidates for a scientific position or a scientific prize. A good method to measure the achievement and the impact of an author or a publication is therefore needed. In the talk, we will develop a PageRank-like measure for the scientific world, discussing the possible extensions and the constraints necessary to keep the method as reliable as possible.
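Since the proposed measure is PageRank-like, the basic power iteration it builds on is worth recalling. The sketch below is plain PageRank on a toy directed graph (e.g., papers citing papers), not the extended measure developed in the talk.

    # Plain PageRank by power iteration on a small directed graph. The talk's
    # measure extends this idea; this is only the standard starting point.
    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        n = adj.shape[0]
        out_deg = adj.sum(axis=1)
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            new_r = np.full(n, (1.0 - damping) / n)
            for i in range(n):
                if out_deg[i] == 0:                     # dangling node: spread evenly
                    new_r += damping * r[i] / n
                else:
                    new_r += damping * r[i] * adj[i] / out_deg[i]
            r = new_r
        return r

    adj = np.array([[0, 1, 1],
                    [0, 0, 1],
                    [1, 0, 0]], dtype=float)
    print(pagerank(adj))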
For timed languages, we define size measures: volume for languages with a
fixed finite number of events, and entropy (growth rate) as asymptotic measure
for an unbounded number of events. These measures can be used for quantitative
comparison of languages, and the entropy can be viewed as information contents
of a timed language.
In the case of languages accepted by deterministic timed automata, we give
exact formulas for computing volumes, from which we deduce a characterization of
the entropy and propose several methods to compute it.
One method involves functional analysis: we characterize the entropy as the
logarithm of the spectral radius of the positive integral operator which
transforms volume functions and use this characterization.
Another method involves discretizing the automaton and approximating its
entropy by the entropy of a finite automaton.
Finally we show, in the case of automata with punctual guards, how defects of
dimension in the polyhedra of possible timings can be dealt with.
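The discretization route mentioned above reduces to a classical fact for finite automata: the growth rate of the number of accepted words is the logarithm of the spectral radius of the automaton's transition-count matrix. A minimal sketch for an untimed finite automaton follows; the timed case treated in the talk requires the integral-operator machinery.

    # Entropy (growth rate) of the language of a finite automaton, computed as
    # log2 of the spectral radius of its transition-count matrix. Toy example:
    # a 2-state automaton where state 0 allows symbols {a, b} and state 1 allows {a}
    # (i.e., words with no two consecutive b's).
    import numpy as np

    counts = np.array([[1, 1],    # from state 0: one transition to 0, one to 1
                       [1, 0]])   # from state 1: one transition back to 0
    rho = max(abs(np.linalg.eigvals(counts)))
    print("entropy =", np.log2(rho), "bits/symbol")   # log2(golden ratio) ~ 0.694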
We propose a new model for peer-to-peer networking which takes the
network bottlenecks into account beyond the access. This model allows
one to cope with the fact that distant peers often have a smaller rate
than nearby peers. We show that the spatial point process describing
peers in their steady state exhibits an interesting repulsion
phenomenon. We study the implications of this phenomenon by analyzing
two asymptotic regimes of the peer-to-peer network: the fluid regime
and the hard-core regime. We get closed form expressions for the mean
(and in some cases the law) of the peer latency and the download rate
obtained by a peer as well as for the spatial density of peers in the
steady state of each regime. The analytical results are based on a mix
of mathematical analysis and dimensional analysis and have important
design implications.
This is a joint work with Francois Baccelli and Ilkka Norros
Although the Internet has been designed as a network for pairwise
communication between end hosts, the current traffic mix reveals that
it is now used for the massive distribution of information.
Based on a host-centric architecture, the Internet serves as a
communication infrastructure interconnecting requests to the
information itself. In view of this model mismatch, the networking
community has started to investigate architectures for a Future
Internet many of which revolve around information-centrism. In this
talk, we will provide an overview of the emerging Information-Centric
Networking (ICN) paradigm, highlighting its main features and
discussing the ways it offers a promising alternative to the current
Internet architecture. We will take a close look at the current ICN
research efforts, including our work in the field, and point out their
commonalities and key differences. Finally, we will identify key
research challenges in the area, fostering further discussion on the topic.
Konstantinos Katsaros received his B.Sc., M.Sc. and Ph.D. degrees in
2003, 2005 and 2010 respectively from the Department of Computer Science,
Athens University of Economics and Business, Greece. His PhD thesis was on
content distribution and mobility support in the context of the
Information-Centric Networking (ICN) paradigm. His current research is
also in the area of ICN, focusing on scalable information discovery and
name resolution, policy compliant routing, packet-level caching and
multipath routing. He has also worked in the areas of multicast and
broadcast service provision over next generation cellular networks, mobile
grid computing and cognitive radio.
Voting systems allow competing entities to decide among different options. In order to ensure fairness between the competing entities, a strong requirement is to avoid manipulability by voters. Unfortunately, strong theoretical results show that, apart from "degenerate" and a priori unacceptable voting systems such as dictatorial ones, every voting system can be manipulated by a single voter! However, very little is known about how manipulable a voting system actually is. We evaluate different voting systems by quantifying their probability of manipulability on various kinds of voter populations. The results are very general and can be applied in any context where voting systems can be used.
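To illustrate how such a manipulability probability can be estimated, here is a small Monte Carlo sketch for plurality voting under a uniform ("impartial culture") population of voters: it checks whether some single voter can obtain a preferred winner by changing their ballot. The voting rule, tie-breaking and population model are illustrative choices, not necessarily those studied in the talk.

    # Monte Carlo estimate of the probability that a random plurality election is
    # manipulable by a single voter (impartial-culture profiles, lexicographic
    # tie-breaking). Illustrative setup only.
    import random
    from collections import Counter

    def plurality_winner(ballots):
        counts = Counter(b[0] for b in ballots)
        return max(sorted(counts), key=lambda c: counts[c])

    def manipulable(ballots, candidates):
        w = plurality_winner(ballots)
        for i, pref in enumerate(ballots):
            for c in candidates:
                if pref.index(c) < pref.index(w):          # voter i prefers c to w
                    altered = ballots[:i] + [[c] + [x for x in pref if x != c]] + ballots[i + 1:]
                    if plurality_winner(altered) == c:
                        return True
        return False

    random.seed(0)
    candidates = ["A", "B", "C"]
    trials, hits = 2000, 0
    for _ in range(trials):
        ballots = [random.sample(candidates, len(candidates)) for _ in range(7)]
        if manipulable(ballots, candidates):
            hits += 1
    print("estimated manipulability probability:", hits / trials)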
Antonio Kung has 30-year experience in embedded systems.
He was initially involved in the development of real-time kernels, before co-founding Trialog in 1987, where he now serves as CTO.
He heads the company product development (kernels, protocols, tools) as well as collaborative projects with a focus on embedded systems, security privacy trust and ICT for ageing.
He is involved in the promotion of initiatives towards common platforms and interoperability.
He holds a Master degree from Harvard University and an Engineering degree from Ecole Centrale Paris.
Resilience analysis in packet-based communication networks quantifies the risk of link overload due
to rerouted traffic and the risk of disconnectivity.
Proactive optimization of routing and rerouting
may reduce the first risk, improvement of the network topology the second one.
Recently, IP fast
reroute mechanisms have been suggested by the IETF. Loop-free alternates (LFAs) are simple, but
they may not cover all single element failures.
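The basic loop-free alternate condition behind LFAs (cf. RFC 5286) is simple to check: a neighbor N of source S protects destination D if dist(N, D) < dist(N, S) + dist(S, D), so that N does not loop packets back through S. A small sketch with networkx on a made-up topology:

    # Check the basic loop-free alternate (LFA) inequality for every neighbor of
    # a source towards a destination: dist(N,D) < dist(N,S) + dist(S,D).
    # Topology below is illustrative only.
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([("S", "A", 1), ("S", "B", 1), ("A", "D", 1),
                               ("B", "D", 3), ("A", "B", 1)])

    def lfa_neighbors(G, src, dst):
        d = dict(nx.all_pairs_dijkstra_path_length(G))
        return [n for n in G.neighbors(src)
                if d[n][dst] < d[n][src] + d[src][dst]]

    print(lfa_neighbors(G, "S", "D"))   # neighbors of S that are loop-free for D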
The IETF has recently defined Pre-Congestion Notification (PCN) for Differentiated Services networks.
It uses simple load-dependent packet re-marking to communicate load conditions to the edge. This
information is used for admission control and flow termination.
The latter is useful to remove
overload that occurs in spite of admission control due to unexpected events.
However, termination can
be avoided if admissible rate thresholds are set low enough, which is the principle of resilient
admission control.
Today’s access networks suffer from a minority of heavy users who are responsible for most traffic
and compromise the quality of experience for a majority of light users.
Some ISPs rate-limit the user
access, others use deep packet inspection to classify and downgrade some traffic which violates
network neutrality.
To tackle that problem, the IETF defines Congestion Exposure (ConEx). It makes
congestion visible to any IP device along a flow’s path.
This information may be used to throttle the
user access to achieve per-user fairness rather than per-flow fairness, to improve traffic engineering,
and to enhance SLAs.
Michael Menth is a full professor at the Department of Computer Science at the University of
Tuebingen/Germany and head of the Communication Networks chair.
He received a Diploma and
PhD degree in 1998 and 2004 from the University of Wuerzburg/Germany.
Before that, he studied
computer science at the University of Texas at Austin and worked at the University of Ulm/Germany.
His special interests are performance analysis and optimization of communication networks,
resource management, resilience issues, and Future Internet. He holds numerous patents and
received various scientific awards for innovative work.
20111019
@
LINCS, 23 av Italie, Salle de Conseil
Taht, Dave
Bufferbloat
Bufferbloat - Identification, analysis, tools for analyzing overly deep buffering across the (mostly wireless) internet, with some potential for solutions. For an introduction to the bufferbloat problem, see Jim Gettys’
talk http://gettys.wordpress.com/2011/06/02/google-techtalk-video-is-up/
This talk reports the latest update on work in progress; more information is available online at http://lwn.net/Articles/458625/ or http://www.bufferbloat.net/
This seminar presents the research activities on Smart Grids of the Information Systems and Sciences for Energy (ISS4E) laboratory co-founded by Professors Rosenberg and Keshav at University of Waterloo. After a brief introduction on smart grids and their similarities with the Internet, two research projects will be presented. The first is on dimensioning transformers and storage using probabilistic analysis. The second one, on demand response, proposes a solution to take advantage of the elasticity inherent to most of the major home appliances. All these projects are conducted in collaboration with Prof. Keshav and graduate students.
Catherine Rosenberg is a Professor in Electrical and Computer Engineering at the University of Waterloo. Since June 2010, she holds the Canada Research Chair in the Future Internet. She started her career in ALCATEL, France and then at AT&T Bell Labs., USA. From 1988-1996, she was a faculty member at the Department of Electrical and Computer Engineering, Ecole Polytechnique, Montreal, Canada. In 1996, she joined Nortel Networks in the UK where she created and headed the R&D Department in Broadband Satellite Networking. In August 1999, Dr. Rosenberg became a Professor in the School of Electrical and Computer Engineering at Purdue University where she co-founded in May 2002 the Center for Wireless Systems and Applications (CWSA). She joined University of Waterloo on Sept 1st, 2004 as the Chair of the Department of Electrical and Computer Engineering for a three-year term.
Catherine Rosenberg is on the Scientific Advisory Board of France-Telecom and is a Fellow of the IEEE.
A fundamental challenge of trustworthy computing is to develop a systematic and yet
practical/usable approach to determining whether or not, and how much, a piece of information (e.g., a
software program or information content) should be trusted. The focus of this talk is our trust
management architecture, called DSL (Davis Social Links), based on social informatics, i.e.,
information about human social relationships and the interactions based on those relationships.
Under the DSL architecture, we will discuss how to enhance the trustworthiness of distributed
applications running on top of today's Internet, and furthermore, how to re-design a brand new
trustworthy Internet architecture based on social informatics. The speaker will perform some small
demos during his talk.
It is expected that circuit switching (CS) will play an important
role in future optical networks. CS normally does not require
buffering, which is very costly in the optical domain. If the traffic
on a CS network is well managed, CS networks can guarantee quality of
service (QoS) to customers in a way that can even lead to efficient
link utilization and low consumption of energy per bit. In the core
Internet, where traffic is heavily multiplexed, it is easier to
achieve high utilization and therefore the role of CS at the core is
clearly important. However, CS can also lead to a green and efficient
operation end-to-end for large bursts of data. Accurate, robust and
scalable blocking probability evaluation is an important element in CS
traffic management. We consider an optical network that uses various
circuit-switching based technologies such as OCS and OFS. We model it
as a two-priority circuit-switched network with non-hierarchical
alternate routing. We evaluate the blocking probability using
algorithms based on the Erlang Fixed-point Approximation (EFPA) and
the Overflow Priority Classification Approximation (OPCA). For a
particular example of a 6-node fully meshed network with alternate
routing, we compare numerically between OPCA over EFPA and discuss
traffic implications.
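For readers unfamiliar with EFPA, its building blocks are the Erlang B formula and a fixed point over per-link blocking probabilities under the reduced-load assumption. The sketch below is the textbook single-class version on a toy two-link route, not the two-priority model or OPCA discussed in the talk.

    # Textbook Erlang fixed-point sketch: Erlang B per link, with the load offered
    # to each link thinned by the blocking of the other link on the route.
    # Toy example: one traffic class offered to a route made of two links.
    def erlang_b(a, m):
        # Stable recursion for the Erlang B blocking probability.
        b = 1.0
        for k in range(1, m + 1):
            b = a * b / (k + a * b)
        return b

    offered = 8.0                 # Erlangs offered to the 2-link route
    capacity = [10, 12]           # circuits on each link
    B = [0.0, 0.0]
    for _ in range(100):          # fixed-point iteration
        B = [erlang_b(offered * (1.0 - B[1 - j]), capacity[j]) for j in range(2)]
    print("per-link blocking:", B)
    print("end-to-end blocking:", 1.0 - (1.0 - B[0]) * (1.0 - B[1]))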
Moshe Zukerman received his B.Sc. in Industrial Engineering
and Management and his M.Sc. in Operations Research from
Technion-Israel Institute of Technology and a Ph.D. degree in
Engineering from The University of California Los Angeles in 1985.
During 1986-1997 he served in Telstra Research Laboratories (TRL).
During 1997-2008 he was with The University of Melbourne. In Dec 2008,
he joined City University of Hong Kong where he is a Chair Professor
of Information Engineering. He has served on the editorial boards of
various journals such as IEEE JSAC, IEEE/ACM Transactions on
Networking, IEEE Communications Magazine, Computer Networks and
Computer Communications. Prof. Zukerman has over 300 publications in
scientific journals and conference proceedings, has been awarded
several national and international patents, two conference best paper
awards and honorary Professorships at CCNU, Wuhan; CityU, Hong Kong;
and BJTU, Beijing. He is a Fellow of the IEEE and has served as a
member and Chair of the IEEE Koji Kobayashi Computers and
Communications Award Committee.
Since the early 2000s, considerable attention has been paid to
physical impairments arising in large-scale optical networks. One
cost-effective solution to cope with transmission impairments is to
deploy 3R (re-amplifying, re-shaping, and re-timing) regenerators
in a limited number of network nodes (i.e., translucent networks).
Taking into account the simultaneous effect of four transmission
impairments (amplified spontaneous emission, chromatic dispersion,
polarization mode dispersion, and nonlinear phase shift), we
propose a novel exact approach for impairment-aware network
planning. In contrast with previous works, we investigate the
problem under pre-planned dynamic traffic. Our proposal takes
advantage of the dynamics of the traffic pattern so that regeneration
resources may be shared among non-concurrent requests.
Given a network topology and a set of pre-planned requests, we
target the minimum number of regenerators and/or regeneration
sites. Thanks to an ILP formulation of this problem, we outline through
the obtained numerical results the mutual impact between the
time-correlation of the requests and the level of regenerators’ concentration. Joint work with E. Doumith and S. Al Zahr
Mixes are relay nodes that accept packets arriving from multiple sources
and release them after variable delays to prevent an eavesdropper from
associating outgoing packets to their sources. We assume that each mix
has a hard latency constraint. Using an entropy-based measure to
quantify anonymity, we analyze the anonymity provided by networks of
such latency-constrained mixes. Our results are of most interest under
light traffic conditions. A general upper bound is presented that bounds
the anonymity of a single-destination mix network in terms of a linear
combination of the anonymity of two-stage networks. By using a specific
mixing strategy, a lower bound is provided on the light traffic
derivative of the anonymity of single-destination mix networks. The
light traffic derivative of the upper bound coincides with the lower
bound for the case of mix-cascades (linear single-destination mix
networks). Co-organized with Aslan Tchamkerten of COMELEC.
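The entropy-based measure used above quantifies anonymity as the Shannon entropy of the eavesdropper's posterior distribution over possible senders of an outgoing packet. A minimal sketch of the metric itself (not of the network bounds derived in the paper):

    # Entropy-based anonymity metric: given the eavesdropper's posterior
    # probabilities that each source sent a particular outgoing packet, anonymity
    # is the Shannon entropy of that distribution (in bits). The maximum, log2(n),
    # is reached when all n sources are equally likely. Numbers are toy values.
    from math import log2

    def anonymity(posterior):
        return -sum(p * log2(p) for p in posterior if p > 0)

    print(anonymity([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: perfect among 4 sources
    print(anonymity([0.7, 0.1, 0.1, 0.1]))       # lower: one source stands out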
Phylogenetic networks generalize the tree model used to describe the evolution of species, by allowing edges between the branches of the tree to express exchanges of genetic material between coexisting species. Many combinatorial approaches - based on the manipulation of finite sets of mathematical objects - have been designed to reconstruct these networks from data extracted from several conflicting gene trees. They fall into several categories according to the type of input data (triplets, quadruplets, clades or bipartitions) and the structural restrictions imposed on the reconstructed networks.
I will present several combinatorial optimization problems on phylogenetic networks, concerning their reconstruction from triplets or clades, and describe solution methods based on exact algorithms (in particular of parameterized complexity) or heuristics.
Networks are widely used in many scientific fields to represent interactions between objects of interest. In biology, regulatory networks describe the mechanisms of gene regulation, driven by transcription factors, while metabolic networks represent pathways of biochemical reactions. In the social sciences, networks are commonly used to represent interactions between individuals. In this context, many unsupervised clustering methods have been developed to extract information from the topology of networks. Most of them partition the nodes into disjoint classes according to their connection profiles. Recently, studies have highlighted the limits of these techniques: they have shown that a large number of "real" networks contain nodes known to belong to several groups simultaneously. To address this issue, we propose the Overlapping Stochastic Block Model (OSBM). This approach allows nodes to belong to more than one class and generalizes the well-known Stochastic Block Model under certain assumptions. We show that the model is identifiable up to equivalence classes and we propose an inference algorithm based on global and local variational techniques. Finally, using both simulated and real data, we compare our work with other approaches.
A game is said to have partial observation (or imperfect information) if the players do not have access to the entire history of the play when making decisions. Well-known examples include card games - poker, tarot - and battleship, as well as the so-called phantom versions of classical games: phantom Go, Kriegspiel (phantom chess), phantom tic-tac-toe.
A fundamental issue for these games is the computation of equilibria and strategies, either a priori or online (during play). Beyond games themselves, these questions are relevant to optimization in uncertain and adversarial environments: networks, inventory or portfolio management. The inherent complexity of these problems makes them one of the current major challenges of artificial intelligence.
We will present a state of the art, in particular stochastic regret-minimization algorithms - the so-called multi-armed bandit methods and Monte Carlo Tree Search algorithms - as well as a method we have developed to adapt them to the partial observation setting.
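As a concrete example of the regret-minimization algorithms mentioned above, here is the standard UCB1 multi-armed bandit rule on simulated Bernoulli arms; the adaptation to partially observable games presented in the talk is, of course, more involved.

    # Standard UCB1 bandit: pull the arm maximizing empirical mean + exploration
    # bonus sqrt(2 ln t / n_i). Arms here are Bernoulli with made-up means.
    import math, random

    def ucb1(arm_means, horizon, seed=0):
        random.seed(seed)
        n = len(arm_means)
        counts, sums = [0] * n, [0.0] * n
        for t in range(1, horizon + 1):
            if t <= n:
                arm = t - 1                                   # play each arm once
            else:
                arm = max(range(n), key=lambda i: sums[i] / counts[i]
                          + math.sqrt(2 * math.log(t) / counts[i]))
            reward = 1.0 if random.random() < arm_means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
        return counts

    print(ucb1([0.2, 0.5, 0.7], horizon=5000))   # most pulls go to the best arm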
The concept of "quantum graph" was recently introduced to generalize that of a random graph. In this model, each edge of the graph corresponds to a bipartite quantum state that is not maximally correlated, and the nodes have the ability to apply local operations to their quantum systems and to communicate classically in order to establish maximal quantum correlations. This model leads to unexpected properties concerning percolation (Acín et al., Nature Physics 2007) or the appearance of given subgraphs in a large graph (Perseguers et al., Nature Physics 2010).
In this talk, we consider a new object, intermediate between random graphs and quantum graphs: "secret graphs". In this model, each pair of nodes receives an identical but biased secret bit. The nodes are allowed to act locally on their data and to communicate publicly in order to establish new secret correlations made of unbiased bits.
We will show that these secret graphs share many properties with quantum graphs, and thus that these properties are not intrinsically quantum in origin.
Given a binary code C, a witness of a codeword c with respect to C is any set of indices W such that the restriction of c to W distinguishes c from every other codeword. One may ask for the maximal cardinality of a code of given length such that every codeword has a witness of fixed length. This question is open. We will present constructions of codes of large cardinality and show in some cases that their size is optimal. To do so, we will use the Lovász theta number for graphs and an optimization technique inspired by Delsarte's linear programming method, combined with symmetry-group reduction arguments.
20110505-a
2pm @
amphi Saphir
Couvreur, Alain
Introduced by Gallager in the 1960s and rediscovered in the 1990s, LDPC (Low-Density Parity-Check) codes are nowadays among the most widely used codes in practice. Two approaches are used to produce them: the first generates the parity-check matrix of the code at random, while the second uses sparse matrices arising from combinatorial objects (designs, incidence structures).
In this talk, I will present new incidence structures obtained from incidence relations between flags (a point together with a line containing it) and conics of the affine plane. We then consider the LDPC codes arising from these incidence structures, some characteristics of which (minimum distance, girth of the Tanner graph, number of minimal cycles) can be determined by geometric methods. I will conclude with simulations of the performance of these codes over the Gaussian channel.
In this talk, an algebraic model for the characterization of systematic error-correcting codes is presented. The interest in this family stems from the wish to generalize linear codes in order to search for codes that are optimal with respect to distance. Indeed, it can be shown that every linear code is equivalent to a systematic code, but there exist non-linear systematic codes whose distance is larger than that of any linear code sharing the same parameters n (length) and k (dimension). At the heart of this approach lies the fact that every systematic code corresponds to the reduced Groebner basis of its vanishing ideal. Thanks to this correspondence, we describe an algorithm which, given integers n, k, d, provides a characterization of the systematic codes with parameters n, k and distance at least d. The central point of the algorithm is the computation of the Groebner basis of a certain ideal B(n,k,t) which is invariant under the action of a permutation group and enjoys inclusion properties with respect to other ideals of the same type (e.g., B(n+1,k,t+1) and B(n+1,k+1,t)). With similar techniques, it is also possible to formulate an algorithm for computing the distance distribution of a systematic code, as well as a new bound on the distance of such codes.
Isogeny volcanoes are graphs whose nodes are elliptic curves and whose edges are l-isogenies between these curves. These structures, introduced by Kohel in order to compute the endomorphism ring of a curve, have several applications in cryptography: in point-counting algorithms, in the construction of certain hash functions, etc.
Algorithms for walking through these graphs were proposed by Kohel (1996) and by Fouquet and Morain (2001). Until now, however, it was not possible to predict the direction of a step on the volcano; as a result, a large number of successive steps was needed before the direction taken could be determined.
I will present a method that makes it possible to find one's way in these graphs when the cardinality of the elliptic curves is known. This method, based on the computation of the Tate pairing, is very efficient and yields, in many cases, algorithms that are faster than the existing methods for walking through isogeny volcanoes.
Multi-Criteria Decision Aid (MCDA) aims at determining, among a set of alternatives or options, the one that is best with respect to several, often conflicting, criteria. The process starts by eliciting preference information from the decision maker. Depending on the nature of the problem to be solved, a decision-aid model is built from this preference information.
We present a new model for representing ordinal or cardinal preferences (including the decision maker's preference intensities), which has the advantage of taking into account the interactions that may exist between the decision criteria. Our approach is based on the use of the 2-additive Choquet integral as the aggregation function and on the handling of inconsistencies through linear programming techniques. The resulting method is a generalization of the interactive MACBETH approach, hence its name, 2-additive MACBETH.
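For reference, one widely used closed form of the 2-additive Choquet integral (expressed with Shapley importances phi_i and pairwise interaction indices I_ij, following Grabisch) can be coded in a few lines; the preference elicitation and inconsistency handling described in the talk are the actual contribution and are not shown here. All numbers below are toy values.

    # Sketch of the 2-additive Choquet integral in its usual Shapley/interaction
    # form: C(x) = sum_{Iij>0} Iij*min(xi,xj) + sum_{Iij<0} |Iij|*max(xi,xj)
    #            + sum_i xi*(phi_i - 0.5*sum_j |Iij|).  Toy values only.
    def choquet_2additive(x, shapley, interaction):
        val = 0.0
        for (i, j), I in interaction.items():
            val += I * min(x[i], x[j]) if I > 0 else -I * max(x[i], x[j])
        for i, xi in x.items():
            val += xi * (shapley[i] - 0.5 * sum(abs(I) for pair, I in interaction.items() if i in pair))
        return val

    x = {"cost": 0.6, "quality": 0.9, "delay": 0.4}                 # normalized scores
    shapley = {"cost": 0.5, "quality": 0.3, "delay": 0.2}           # sums to 1
    interaction = {("cost", "quality"): 0.2, ("quality", "delay"): -0.1}
    print(choquet_2additive(x, shapley, interaction))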
There are several non-equivalent ways of generalizing graph acyclicity to hypergraphs. We will present the four most widespread notions: Berge-, gamma-, beta- and alpha-acyclicity. To do so, we will give characterizations of various kinds (algorithmic, in terms of join trees, or simply as the absence of certain types of cycles). We will then see how, in different contexts (logic, decomposition methods, complexity of optimization problems), some of these notions are better suited than others.
Formal Concept Analysis (FCA) is a conceptual classification method that relies on lattice theory to bring out the structures and relations underlying binary data. FCA has served as a basis for many approaches in data analysis and mining, machine learning, decision support, etc. These approaches are often penalized by the process of transforming data that are frequently complex (non-binary). In this talk, I present similarity-based FCA (ACFS), which relies on domain knowledge to extend FCA to complex data. Domain knowledge, expressed in various forms, makes it possible to compute the similarity between complex data and to derive their underlying conceptual structures. The original aspect of ACFS is the possibility of obtaining, for the same dataset, classifications at different levels of granularity. This aspect is particularly interesting for interactive exploration and progressive mining of data and allows, among other things, zooming in and out according to the level of precision or abstraction one wishes to have on the data. The second part of this talk will be devoted to the use of ACFS for progressive data exploration in a context of biological resource discovery, for decision support in agronomy, for Web service composition, as well as for decision-oriented visualization in a Business Intelligence context.
Operators of complex industrial systems must, among other missions, anticipate the feasibility of new operating ranges or adapt processes to new regulations. To do so, the engineering and R&D departments are consulted and deploy numerical modeling and simulation tools.
Within the project-based approach commonly adopted by industry, different disciplines must jointly take part in a decision process with a variety of viewpoints that inevitably lead to misunderstandings, introducing adjustments that sometimes significantly affect the time needed to complete the studies. Interactive 3D scientific visualization is one of the most effective vehicles for sharing information and could provide valuable help to projects in creating a common frame of reference between the different disciplines. It must be acknowledged, however, that such a tool remains confined to design offices because of its cost and of its hardware and software complexity, for lack of a shared environment for interactive and collaborative visualization, whether hosted externally or integrated within the company itself.
This presentation will address the challenges of setting up such mediated communication, both through its insertion into business processes already in place and through its constraints in terms of technical performance and usability. A complete approach, integrating a user-centered technological answer designed during the CARRIOCAS competitiveness cluster project, will be presented, together with medium-term lines of investigation taking advantage of the convergence of recent advances in Internet and 3D visualization technologies.
My work belongs to the field of combinatorial optimization. We use the polyhedral approach to solve combinatorial problems arising in the context of telecommunication networks. We introduce and study the problem of optimizing networks whose connected components are unicyclic. After recalling that the problem is easy to solve in the absence of further constraints, we study new variants integrating additional technical constraints.
We start with a constraint on the size of the cycles: we wish to forbid all cycles containing at most p vertices. The problem then becomes NP-hard. Valid inequalities are proposed for this problem, and we show, under precise conditions, that these inequalities can be facets. Several polynomial-time algorithms are proposed for the separation of the valid inequalities; these algorithms are implemented and numerical results are reported.
We then focus on a new, Steiner-type problem consisting in partitioning a network into unicyclic components while requiring that certain vertices lie on the cycles. We show that this problem is easy in the sense of computational complexity, by providing a polynomial-time algorithm and an extended formulation of the problem. Other technical constraints are then taken into account: degree constraints, constraints on the number of connected components, membership of certain vertices in the same connected component and, finally, the separation of certain vertices that must lie in different components.
Motivated by security and reliability issues, we consider the "k most vital edges (nodes)" and "min edge (node) blocker" versions of various graph problems. Given an optimization problem P defined on a weighted graph, the problem "k Most Vital Edges (Nodes) P" consists in determining a subset of k edges (nodes) whose removal from the graph degrades the optimal value of P as much as possible. The complementary problem, "Min Edge (Node) Blocker P", consists in removing a subset of edges (nodes) of minimum cardinality such that the optimal value of P is, depending on the nature of P, at most, or at least, a given threshold. We study the complexity, the approximation and the exact solution of these four versions for the following graph problems: minimum spanning tree, minimum-weight assignment, maximum-weight stable set, minimum-weight vertex cover, 1-median, 1-center, minimum-cost flow and maximum flow. We provide strong NP-hardness or polynomiality proofs for particular classes of graphs, approximation results, explicit or implicit enumeration algorithms for solving these problems, as well as a linear programming formulation.
In the literature, a polynomial-time approximation algorithm is often considered to perform better than another when it has a better worst-case approximation ratio. One should however be aware that this now "classical" measure does not take into account all the possible executions of an algorithm (it only considers the executions leading to the worst solution). In my work, I focused on the vertex cover problem and tried to better "capture" the behavior of these approximation algorithms, by showing that the average performance of an algorithm can be decorrelated from its worst-case performance, by evaluating the average performance of an algorithm, and by comparing the performance of different algorithms (analytically and experimentally). I also proposed a list algorithm and proved analytically that it always returns a better solution than the one built by another recent list algorithm [ORL 2006] when they process the same list of vertices (in some particular graphs, the difference in size can be arbitrarily large).
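For readers less familiar with the baseline, the classical worst-case guarantee referred to above comes from algorithms like the maximal-matching 2-approximation for vertex cover, sketched here; the list algorithms analyzed in the talk are different and are compared on average, not only in the worst case.

    # Classical 2-approximation for (unweighted) vertex cover: take both endpoints
    # of the edges of any maximal matching. The resulting cover is at most twice
    # the optimum. Shown as background, not as the talk's list algorithm.
    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
    print(vertex_cover_2approx(edges))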
We consider an SIR epidemic spreading on a graph whose degree distribution is fixed and whose edges are matched at random, following the principle of the configuration model. The evolution of the epidemic is fully described by a stochastic differential equation involving three point measures. We propose a large-graph limit of this model and obtain, as a corollary, a rigorous proof of the epidemiological equations derived by Volz (2008).
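A quick way to experiment with the object studied here is to simulate an SIR epidemic on a configuration-model graph. The sketch below (using networkx) is a discrete-time toy simulation with made-up degree distribution and rates, not the measure-valued construction or Volz's equations.

    # Toy discrete-time SIR simulation on a configuration-model graph.
    # Degree sequence, infection and recovery probabilities are illustrative.
    import random
    import networkx as nx

    random.seed(1)
    degrees = [random.choice([1, 2, 3, 5]) for _ in range(2000)]
    if sum(degrees) % 2:                       # configuration model needs even sum
        degrees[0] += 1
    G = nx.configuration_model(degrees, seed=1)
    G = nx.Graph(G)                            # collapse parallel edges
    G.remove_edges_from(nx.selfloop_edges(G))  # drop self-loops

    state = {v: "S" for v in G}
    for v in random.sample(list(G), 10):       # initial infected nodes
        state[v] = "I"

    p_inf, p_rec = 0.1, 0.05
    for _ in range(200):
        new_state = dict(state)
        for v in G:
            if state[v] == "I":
                if random.random() < p_rec:
                    new_state[v] = "R"
                for u in G.neighbors(v):
                    if state[u] == "S" and random.random() < p_inf:
                        new_state[u] = "I"
        state = new_state

    print({s: sum(1 for v in state.values() if v == s) for s in "SIR"})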
Internet-scale quantum repeater networks will be heterogeneous in
physical technology, repeater functionality, and management. The
classical control necessary to use the network will therefore face
similar issues as Internet data transmission. Many scalability and
management problems that arose during the development of the
Internet might have been solved in a more uniform fashion, improving
flexibility and reducing redundant engineering effort. Quantum
repeater network development is currently at the stage where we risk
similar duplication when separate systems are combined. We propose
a unifying framework that can be used with all existing repeater
designs. We introduce the notion of a Quantum Recursive Network
Architecture, developed from the emerging classical concept of
Recursive Networks, extending recursive mechanisms from a
focus on data forwarding to a more general distributed computing
request framework. Recursion abstracts independent transit networks
as single relay nodes, unifies software layering, and virtualizes
the addresses of resources to improve information hiding and
resource management. Our architecture is useful for building
arbitrary distributed states, including fundamental distributed
states such as Bell pairs and GHZ, W, and cluster states.
Rodney Van Meter received a B.S. in engineering and applied science
from the California Institute of Technology in 1986, an M.S. in
computer engineering from the University of Southern California in
1991, and a Ph.D. in computer science from Keio University in 2006.
His research interests include storage systems, networking, and
post-Moore’s Law computer architecture. He has held positions in both
industry and academia in the U.S. and Japan. He is now an Associate
Professor of Environment and Information Studies at Keio University’s
Shonan Fujisawa Campus. Dr. Van Meter is a member of AAAS, ACM and
IEEE.
Visual analytics suffers from the size and complexity of data
that need to be presented in an understandable manner. In this talk I
will describe different approaches we have taken in dealing with data
size and complexity, including the design of novel visual
representations and the use of large collaborative displays. I will
illustrate these approaches with user-centered designs for visualizing
and exploring graph data, such as social networks and genealogy
graphs.
After a brief reminder of the principles and specific features of active learning, the presentation will be illustrated with the speakers'
Master-level course: how it was put into practice, its successes and difficulties, and the feedback from students.
We consider a P2P-assisted Video-on-Demand system where
each peer can store a relatively small number of movies
to offload the server when these movies are requested.
How much local storage is needed? How does this depend on
the other system parameters, such as the number of peers,
the number of movies, the uploading capacity of peers
relative to the playback rate, and the skewness in
movie popularity? How many copies should the system keep
for each movie, and how does this depend on movie popularity?
Should all movies be replicated in the P2P system, or
should some be kept at the server only? If the latter,
which movies should be kept at the server? Once we have
an understanding of these issues, can we come up with a
distributed and robust algorithm to achieve the desired
replication? Will the system adapt to changes in system
parameters over time? We will describe our work in trying
to answer these kinds of questions.
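As a point of reference for the questions above (and not necessarily the policy analysed in the talk), one natural baseline is to replicate each movie in proportion to its Zipf-like popularity, subject to a per-peer storage budget; the sketch below, with illustrative parameters, shows how skewed the resulting copy counts become.

```python
# Proportional-to-popularity replication under a Zipf popularity profile.
# Baseline for intuition only; all parameters are made up.
def proportional_replication(num_movies, num_peers, slots_per_peer, zipf_alpha=0.8):
    weights = [1.0 / (rank + 1) ** zipf_alpha for rank in range(num_movies)]
    total = sum(weights)
    budget = num_peers * slots_per_peer            # total storage slots in the swarm
    return [max(1, round(budget * w / total)) for w in weights]

copies = proportional_replication(num_movies=100, num_peers=1000, slots_per_peer=2)
print(copies[:5], copies[-5:])   # popular movies get many copies, unpopular movies few
```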
Dah Ming Chiu is a professor of the Department of Information
Engineering at the Chinese University of Hong Kong (CUHK).
He is currently serving as the department chairman.
Dah Ming received his BSc from Imperial College London, and
PhD from Harvard University. He worked in industry before
joining CUHK in 2002.
We present a few exotic applications of error-correcting codes: writing on memories, biometrics, and traitor tracing.
Queueing network models are an important tool in the evaluation of the performance of computer systems and networks. Explicit analytical solutions exist for a class of such models, but features such as realistic global dependencies, priorities, or simple commonly used service disciplines, preclude their direct application. Additionally, even when such solutions are known, their numerical computation may still be challenging due to the size of the state space of classical queueing models. In this talk, we try to show that the use of conditional probabilities may be valuable in exposing simple properties hidden from view by classical state descriptions. Examples include tandem networks with blocking, multiclass models, multi-server systems with priorities, as well as guided state sampling in large systems.
In 1978, John Krebs and Richard Dawkins, two specialists
of signalling in the animal kingdom, stated what can be seen as the
curse of communication: if the interests of the sender and of the receiver converge, communication will be useful, rare and secretive; if they diverge, it will take an advertising form: poor, repetitive and public. Human communication seems to be an exception: it is rich, abundant and
open (which has allowed the telecommunications industry to prosper!).
My recent results show, through calculation and simulation, how rich communication can emerge among selfish agents and remain stable. To do so, communication must be embedded in a social game. Communication 2.0 practices (Web, blogs, Twitter...) offer a nice test of the theory.
Transport layer data reneging occurs when a data receiver first selectively acknowledges (SACKs) data, and later discards that
data from its receiver buffer prior to delivering that data to
the receiving application or socket buffer. Today’s reliable
transport protocols (TCP and SCTP) are designed to tolerate data
reneging. For two reasons, we argue that this design assumption
is wrong. First (1) there are potential performance gains in
send buffer utilization and throughput by not allowing reneging.
Second (2) we hypothesize that data reneging rarely if ever
occurs in practice. To support (1), we present published results
on Non-Renegable Selective Acks (NR-SACKs). To support (2), we
present a model for detecting instances of data reneging by
analyzing traces of TCP traffic. Using this model, we are currently
investigating the frequency of data reneging in Internet traces
provided by CAIDA.
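The detection idea can be made concrete with a small sketch: a byte range that was SACKed earlier but is neither covered by the cumulative ACK nor re-reported in later SACK blocks is a candidate reneging event. The trace format below is a hypothetical simplification, not the actual model used with the CAIDA traces.

```python
# Detect suspected data reneging from a simplified sequence of ACKs.
# Each ACK is (cumulative_ack, [(sack_start, sack_end), ...]), byte granularity.
def detect_reneging(acks):
    events, prev_sacked = [], set()
    for cum_ack, sack_blocks in acks:
        sacked = set()
        for start, end in sack_blocks:
            sacked.update(range(start, end))
        # previously SACKed bytes above the cumulative ACK yet no longer reported
        missing = {b for b in prev_sacked if b >= cum_ack and b not in sacked}
        if missing:
            events.append((cum_ack, min(missing), max(missing)))
        prev_sacked = {b for b in (prev_sacked | sacked) if b >= cum_ack}
    return events

trace = [(0, [(100, 200)]), (0, [(150, 200)]), (300, [])]
print(detect_reneging(trace))    # bytes 100-149 disappeared from the SACKs
```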
In recent years, Location Based Services (LBS) have been grabbing the
attention of telecommunication actors since they are sources of new
services and new revenues. The main challenge in the LBS domain is the
localization of the mobile terminals within a certain accuracy. To this
end, several radio positioning techniques have been introduced, one of
which is the Location Fingerprinting.
Although location fingerprinting has been investigated in previous
works, only a few studies analyze its performance according to the
physical parameters of the underlying environment. Thus, as a first
approach, we consider an outdoor location fingerprinting system, and
we examine the impact of different physical parameters on the system
performance. As a second step, we present "clustering techniques"
which aim to compress the radio database and hence to reduce the online
computation load of the system. Any compression process may degrade
the system performance, so we try to find a clustering technique which
minimizes this degradation, and provides an acceptable level of
positioning accuracy.
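One simple way to picture the clustering step is to group the reference fingerprints (vectors of received signal strengths) with k-means and to match an online measurement against the centroids first; the sketch below uses synthetic data and illustrative parameters, not the clustering techniques evaluated in the talk.

```python
# Compressing a radio fingerprint database with k-means (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fingerprints = rng.normal(-70, 10, size=(500, 6))  # 500 reference points, 6 base stations (dBm)

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(fingerprints)
compressed_db = kmeans.cluster_centers_             # 20 centroids instead of 500 entries

measurement = rng.normal(-70, 10, size=(1, 6))      # online measurement
coarse = kmeans.predict(measurement)[0]             # coarse match against centroids only
candidates = np.where(kmeans.labels_ == coarse)[0]  # fine search restricted to one cluster
print(len(candidates), "fingerprints left for the fine-grained search")
```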
Currently, many bands of the radio spectrum are assigned to
communication systems, each corresponding to well-defined types of
usage (television, cellular, satellite, WLAN or PAN, PMR, ...). This
is referred to as fixed spectrum allocation (or assignment, FSA). A
number of studies now aim at making spectrum access more flexible and
dynamic (DSA, Dynamic Spectrum Access) for mobile radio systems.
These studies rely on "cognitive radio" principles and often consider a setting with primary and secondary users.
After recalling this context, we present two types of channels (CCCh, cognitive control channel, and CPC, cognitive pilot channel) which, overlaid on today's standards, can facilitate more flexible usages of the spectrum.
The aim of experimental psychology is to bring some rigour to
questions concerning perception, motor control and cognition. Since
Fechner (1866), psychophysics has sought to quantify the relation
between our subjective sensations and physical magnitudes. There is a
similar tradition, less well known but almost as old, aiming to
identify the interdependence between the speed and the accuracy of
our movements. Much progress has been made since Woodworth (1899),
notably with the demonstration of what is known as Fitts' law (1954):
when pointing at a target, the time needed to execute the movement
generally varies as the logarithm of the ratio between the distance
to be covered and the tolerance of the target.
Fitts' law is applied almost routinely in human-computer interaction,
where it helps interface designers optimize the layout of the
graphical objects that users point at. But this empirical regularity
is only valid over a limited range of scales: in particular, it is of
no help when it comes to modelling very small movements, such as
those of a finger on the screen of a wristwatch acting as a
telephone. In fact, as I will try to explain in this talk, it is
impossible to grasp the role of the scale factor in the pointing
problem without questioning the usual definition of the problem's
variables.
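For reference, Fitts' law is usually written as follows in human-computer interaction (the Shannon formulation; the abstract does not commit to a particular formulation, and a and b are empirically fitted constants):

```latex
% Movement time MT as a function of the distance D to the target
% and the target width (tolerance) W.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```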
In usual distributed public-key cryptography, each user chooses a
private key of a given size and publishes the associated public key.
In the case of ElGamal encryption, for instance, the joint public key
is then the product of these public keys and the private key the sum
of the private keys, which allows distributed decryption. In
practice, however, it is unrealistic to ask users to remember large,
high-entropy keys.
We introduce here the notion of password-based distributed
cryptography, in which the players only need to remember short
(low-entropy) passwords. It is no longer possible to publish the
associated public keys, since these would allow the initial passwords
to be recovered by exhaustive search. In such a setting, the private
key is therefore defined implicitly as the combination of the
low-entropy passwords held by the different users. Without ever
revealing this private key, the users can compute and publish the
associated public key. They can then jointly perform private-key
operations (such as decryption) by exchanging messages over an
arbitrary channel, using their respective passwords, without ever
having to share their passwords or reconstruct the key.
We give a concrete example of such a protocol, based on ElGamal
encryption, which comes in two variants: the first, simple and
efficient, relies on the decisional Diffie-Hellman assumption; the
second uses pairing-based techniques and is secure under the
decisional linear assumption. The interest of this second variant is
that it generalizes to a number of discrete-logarithm-based
public-key cryptosystems, including in particular linear encryption
and identity-based encryption. This makes it possible to extend IBE
to a distributed setting, with key generation performed by a group of
people, each of whom memorizes a small portion of the master key.
In these models, all users must cooperate in order to recover (at
least implicitly) the private key and perform decryption. Modifying
the password-based distributed cryptography protocol to make it a
threshold scheme (i.e., such that t users out of n suffice) is still
an open problem.
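To fix ideas, the classical (non-password-based) distributed ElGamal setting recalled at the beginning of the abstract can be sketched in a few lines: each user holds a share, the joint public key is the product of the individual public keys, and decryption combines partial values without ever reconstructing the full private key. The parameters below are tiny and insecure, purely for illustration, and this is not the password-based protocol of the talk.

```python
# Toy distributed ElGamal: joint key = product of g^{x_i}, i.e. g^{sum x_i}.
p, g = 467, 2                                # tiny, insecure toy parameters
shares = [13, 57, 101]                       # private shares x_i, one per user

pub = 1
for x in shares:
    pub = (pub * pow(g, x, p)) % p           # joint public key

m, r = 123, 77                               # message and ephemeral randomness
c1, c2 = pow(g, r, p), (m * pow(pub, r, p)) % p

# Distributed decryption: each user contributes c1^{x_i}; nobody reveals x_i.
blinding = 1
for x in shares:
    blinding = (blinding * pow(c1, x, p)) % p
recovered = (c2 * pow(blinding, -1, p)) % p  # modular inverse (Python >= 3.8)
print(recovered == m)
```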
In many cases, the data we want to understand can be represented as a cloud of points in a multidimensional space.
When these data are represented in the plane, visualization is an effective way of extracting statistical, geometric and topological information. When the data cannot be represented naturally in the plane, two lines of analysis are possible:
on the one hand, methods that project the data onto the plane and, on the other hand, methods that analyse the data in situ in the multidimensional space. Both raise the question of the intelligibility of the resulting representation.
The first part of this talk is devoted to projection methods.
We will present a few classical and less classical (linear and non-linear) methods. In particular, we will introduce the DD-HDS method (Data-Driven High Dimensional Scaling), whose objective is to preserve the distances between data points, and the RankVisu method, which focuses on preserving neighbourhood ranks.
Finally, we will mention Classimap, which takes additional information (such as class membership) into account for the projection.
In the second part, we will present the paradigm of in situ multidimensional visualization, which defines a framework in which planar projections of multidimensional data are interpretable. We will show that non-linear projection methods (ISOMAP, MDS, Sammon, LLE, KPCA, SOM and the like), as they are usually employed, generally cannot be used to infer properties of the original data without error. We will show how to recast these methods in the in situ framework so as to restore all the interest they deserve.
Several examples on high-dimensional data will be presented, and we will discuss open problems in this area.
In this talk we seek to characterize the behaviour of the Internet in the
absence of congestion control. More specifically, we assume all sources
transmit at their maximum rate and recover from packet loss by the use of
some retransmission or erasure coding mechanism. We estimate the
efficiency of resource utilization in terms of the maximum load the
network can sustain, accounting for the random nature of traffic.
Contrary to common belief, there is generally no congestion collapse.
Efficiency remains higher than 90% for most network topologies as long as
maximum source rates are less than link capacity by one or two orders of
magnitude. Moreover, a simple fair drop policy enforcing fair sharing at
flow level is sufficient to guarantee 100% efficiency in all cases.
The laws governing the behaviour of matter and light at the atomic
scale, or at the scale of a few photons, differ from those of
so-called classical physics. One must instead apply quantum physics
and its formalism, and the progress of this theory made it possible,
over the course of the 20th century, to successfully develop
applications essential to our information society, such as the laser
and the transistor.
Quantum information is a research field that began to develop in the
1970s, at the frontier between computer science and physics. The idea
is to analyse to what extent using information encoded on quantum
states (one can define the notion of a quantum bit, representing the
state of a two-level system) makes it possible to carry out new tasks
in computing and communications.
The rapid advances of research in quantum information are liable to
upset both our understanding of cryptography and the way we practise
it: in 1994, Peter Shor showed that factoring is an easy problem in
the setting of quantum computation. This fundamental result sheds new
light on the "classical cryptographic landscape", and Post-Quantum
Cryptography is now at the top of the agenda of the international
cryptographic community. Moreover, quantum key distribution has
developed tremendously since the proposal of the first protocol,
BB84, by Charles Bennett and Gilles Brassard in 1984. This technology
is about to become the first industrial application of quantum
information, and will find applications in the construction of
very-high-security digital architectures.
In this talk, I will discuss in particular the major results obtained
within the European FP6 project SECOQC, to which the quantum
information team of Telecom ParisTech made a first-rate contribution.
I will also discuss the results obtained in our other lines of
research combining quantum information and cryptography, as well as
ongoing projects, notably the "Quantum Security" platform. Finally, I
will present SeQureNet, the spin-off company I co-founded in 2008,
which aims to commercialize high-security applications based on
quantum key distribution.
Le "cloud" se construit sur la base de services isoles, qui
peuvent etre agreges selon leurs interfaces et leurs niveaux de
virtualisation bien definis. Les grilles de calcul, a l’inverse,
reunissent des ressources et des applications de multiples institutions.
Elles permettent une grande flexibilite dans la distribution des taches
et des donnees. Nous mettrons en parallele les differents types de
grilles et leurs applications. Dans leur forme la plus generale, les
grilles soulevent des problemes de passage a l’echelle et
d’heterogeneite des ressources et des taches. Mais leur probleme le plus
fondamental est celui de l’independance des participants. Nous
analyserons en particulier l’evolution des grilles du CERN, d’une
gestion uniformisee des ressources a une specialisation des systemes
d’allocation et une interoperabilite sur le modele du cloud.
With the development of the Web, the volume of data handled by
search engines, e-commerce sites or community sites gathering
millions of users has reached unprecedented levels: the terabyte is a
common order of magnitude, and soon it will be the petabyte. New
techniques for managing these massive data sets have emerged
recently, driven in particular by the companies (Google, Amazon)
directly confronted with the problems raised by such volumes.
The talk will be devoted to these new techniques, with an emphasis on
solutions that distribute storage and processing over extensible
clusters of machines. Issues of scalability, reliability, security,
failure recovery and consistency will be discussed. I will present a
few flagship solutions, strongly influenced by several papers
recently published by Google teams (GFS, Bigtable, MapReduce).
Finally, the Hadoop project, which provides an open-source platform
implementing these techniques, will be briefly introduced.
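To give a flavour of the MapReduce programming model mentioned above, here is the canonical word-count example written as plain Python functions; an actual job would express the same map/reduce logic through Hadoop rather than run in a single process.

```python
# Word count in the MapReduce style: a map phase emits (key, value) pairs,
# the framework groups them by key, and a reduce phase aggregates each group.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["the web is big", "the web grows", "big data on the web"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))
```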
The presentation will give an overview of the key use cases and of
the main lines of the architecture of the new Biocep-R software
platform (www.biocep.net), and will be followed by demonstrations.
Built around mathematical and statistical computing environments such
as R and Scilab, the Biocep-R platform very significantly improves
the accessibility of high-performance computing on grids and clouds.
It creates an open environment in which it becomes easy to design,
share and reuse everything that computation involves, and it places
collaboration at the heart of the tools of computational science and
data mining. For example, anyone can, from a personal machine (or an
iPhone), launch a virtual machine of their choice on the Amazon cloud
and use it to obtain, in the browser, a complete collaborative
data-analysis environment combining tools such as R with interactive
graphical panels, collaborative spreadsheets, analytical graphical
interfaces that can be composed by drag-and-drop and redistributed,
remote presentation tools, etc.
Biocep-R is also a toolbox for deploying groups of distributed,
stateful or stateless computing engines on heterogeneous
infrastructures. These engines can be used to build scalable web
applications with dynamic analytical content, to run parallel
computations on massive data, or to automatically expose (by
introspection) functions and models as Web Services or as nodes for
workflow platforms.
Geometric codes were introduced at the beginning of the 1980s by the
Russian mathematician and engineer V.D. Goppa. In the years that
followed, they proved to be a research topic as fruitful as it is
fascinating.
In this talk we will start by presenting the rudiments of the theory
of error-correcting codes, and then of geometric codes on algebraic
curves. In a second part, we will focus on the theory of codes on
surfaces, which is both less well understood and far less explored,
and present geometric methods for estimating the parameters of such
codes and of their duals.
20100325
9h00 - 18h00 @
Amphi B312
See the program: http://www.inrets.fr/fileadmin/recherche/fsd/ntic/plaquette-seriousgames.pdf
A few months ago, BitTorrent developers announced that the transfer of
torrent data in the official client was about to switch to a new
application-layer congestion-control protocol using UDP at the
transport layer. This announcement immediately raised an unmotivated buzz
about a new, imminent congestion collapse of the whole Internet. As this new
protocol, which goes by the name of LEDBAT (Low Extra Delay Background
Transport), aims at offering a lower-than-best-effort transport service,
this reaction was not built on solid technical foundations. Nevertheless, a
legitimate question remains: whether this new protocol is a necessary
building block for future Internet applications, or whether it may result in
an umpteenth addition to the already well-populated world of Internet
congestion control algorithms.
We tackle the issue of LEDBAT investigation using two complementary
approaches. On the one hand, we implement the novel congestion control
algorithm and investigate its performance by means of packet-level
simulations. Considering a simple bottleneck scenario, where the new
BitTorrent competes against either TCP or other BitTorrent flows, we
evaluate the fairness of resource share as well as the protocol efficiency.
Our results show that the new protocol successfully meets some of its
design goals, as for instance the efficiency one. At the same time, we also
identify some potential fairness issues, that need to be dealt with.
On the other hand, we use an empirical approach and perform an experimental
campaign on an active testbed. With this methodology, we study different
flavors of the LEDBAT protocol, corresponding to different milestones in the
BitTorrent software evolution. Focusing on a single-flow scenario, we
investigate emulated artificial network conditions, such as additional
delay and capacity limitations. Then, in order to better grasp the
potential impact of LEDBAT on current Internet traffic, we consider a
multiple-flow scenario and investigate the performance of a mixture of
TCP and LEDBAT flows, so as to better assess what “lower than best effort”
means in practice.
Overall, our results show that LEDBAT has already fulfilled some of its
original design goals, though some issues (e.g., fairness and level of low
priority) still need to be addressed. Finally, we point out that end-users
will be the final judges of the new protocol: therefore, further research
should evaluate the effects of its adoption on the performance of the
applications ultimately relying on it.
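For intuition about how a lower-than-best-effort protocol can be built, the sketch below shows a much-simplified delay-based window update in the spirit of LEDBAT (constants and the update rule follow the general idea later standardized in RFC 6817, not the exact BitTorrent implementation studied in the talk): the window grows while the estimated queueing delay stays below a target and shrinks once it exceeds it, so the flow yields to TCP.

```python
# Simplified LEDBAT-style congestion window update (illustrative only).
TARGET = 0.100        # target queueing delay, seconds
GAIN = 1.0            # window gain

def ledbat_update(cwnd, base_delay, current_delay, bytes_acked, mss=1500):
    queueing_delay = max(current_delay - base_delay, 0.0)
    off_target = (TARGET - queueing_delay) / TARGET        # positive below target
    cwnd += GAIN * off_target * bytes_acked * mss / cwnd
    return max(cwnd, mss)                                  # at least one segment

cwnd = 10 * 1500.0
for delay in (0.01, 0.05, 0.12, 0.20):                     # queueing delay building up
    cwnd = ledbat_update(cwnd, base_delay=0.0, current_delay=delay, bytes_acked=cwnd)
    print(round(cwnd))
```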
The Autonomic Computing vision (www.research.ibm.com/autonomic) aims at
tackling the increasing complexity, heterogeneity and scale of software
systems by enabling computing systems to manage themselves, while
minimising the need for human intervention. The success of this vision
is becoming critical to the computing domain and consequently to our
society, which increasingly relies on computerised systems. Nonetheless,
the actual design and implementation of autonomic systems remains, to
this day, a great challenge.
The objective of our research is to design and implement generic
architectures and frameworks for facilitating the development and
evolution of autonomic management solutions. Defining such reusable
frameworks raises new challenging issues, not least because we are
dealing with dynamic, non-deterministic and sometimes conflicting
reasoning processes. When autonomic systems must react to complicated
and unpredictable scenarios, the space of detectable conditions and
desirable decisions grows exponentially. In these cases, it becomes
difficult, or impossible, to statically predict all possible situations
and provide all necessary solutions in one central controller. The
approach where developers fully specify and control the overall
application behaviour is in such cases hard to apply.
To address these challenges, we propose to build complex administrative
strategies by opportunistically integrating simple, specialized
autonomic elements. A precise, exhaustive specification of all control
directives (e.g. monitoring, analysis, planning and execution) is no
longer required. The essential difficulty of the proposed approach
consists in ensuring that the resulting management reactions conform to
the required system behaviour. Different approaches are possible for
designing and implementing the required integration functionalities
(e.g. conflict resolution and synchronisation). These range from
centralised solutions with a unique control point to completely
decentralised solutions relying on specific communication protocols. A
service-oriented approach was adopted for defining and developing such
architectures, considering the inherent modularity and loose-coupling
characteristics of the service paradigm.
The presentation will be structured in two parts. First, we will
introduce the Autonomic Computing domain and the associated solution
development problems. Various possible architectures that implement our
vision for addressing these problems will be introduced and compared.
Second, we will focus on one of these architectures, exploring a
completely decentralised design for the conflict resolution aspect. The
presentation will detail this architecture and its corresponding
prototype implementation, testing scenario and initial results. The
sample application considered for experimentation consists of a
simulated home with manageable devices and conflicting temperature and
electricity consumption goals.
The interest in self-organizing protocols and algorithms, which
became apparent notably with the popularity of file-sharing and VoIP
services, now extends to a much wider range of applications. In
particular, it has fostered the rise of peer-to-peer (P2P) storage
services. These services make efficient use of any free, unexploited
disk space to build a reliable, available and scalable storage system
with reduced maintenance costs. P2P storage, however, raises security
issues that must be addressed, in particular the selfishness of
peers, which causes free-riding in the system. Continuously observing
peer behaviour through regular audits of the stored data is an
important condition for securing such a system against these attacks.
Detecting peer selfishness requires appropriate primitives such as
proofs of data possession, a form of proof of knowledge with which
the storing peer interactively tries to convince the verifier that it
holds the data without sending them or copying them to the verifier.
In this seminar, we propose and review several verification
protocols. We study in particular how data verification and
maintenance can be delegated to volunteer peers in order to mitigate
peer churn. We then propose two mechanisms, one based on reputation
and the other on remuneration, to enforce cooperation by means of
such proofs of data possession periodically provided by the storing
peers. We evaluate the effectiveness of such incentives with
game-theoretic models. We discuss in particular the use of repeated
Bayesian non-cooperative games as well as evolutionary games.
Ethernet local area network traffic appears to be approximately
statistically self-similar. This discovery, made about twenty years ago,
has had a profound impact on the field. I will try to explain what
statistical self-similarity means, how it is detected and indicate how
one can construct random processes with that property by aggregating a
large number of "on-off" renewal processes. If the number of
replications grows to infinity then, after rescaling, the limit turns
out to be the Gaussian self-similar process called fractional Brownian
motion. But if one looks at very large time scales, then one obtains
instead a Levy stable motion which is a process with
independent increments, infinite variance and heavy tails.
The lecture, which is an overview of this subject, will be in French.
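The aggregation construction mentioned above can be illustrated with a few lines of code: sum many independent on-off sources whose on and off periods are heavy-tailed, which is the regime in which the (suitably rescaled) aggregate tends to fractional Brownian motion. The parameters are arbitrary and the snippet only produces one sample path.

```python
# Aggregate of heavy-tailed on-off sources (illustrative sketch).
import numpy as np

def onoff_source(length, alpha=1.5, rng=None):
    rng = rng or np.random.default_rng()
    rate, t, on = np.zeros(length), 0, rng.random() < 0.5
    while t < length:
        period = int(np.ceil(rng.pareto(alpha) + 1))   # heavy-tailed on/off period
        rate[t:t + period] = 1.0 if on else 0.0
        t, on = t + period, not on
    return rate

rng = np.random.default_rng(42)
aggregate = sum(onoff_source(10_000, rng=rng) for _ in range(200))
print(aggregate.mean(), aggregate.std())
```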
We investigate some statistical features of simplicial complexes generated
by homogeneous Poisson point processes. To do that, we consider that the
points of a homogeneous Poisson point process generate a Rips complex in
some region, so we can use some results from algebraic topology as well
as some tools concerning the Poisson space, such as Malliavin calculus and
concentration inequalities. We obtain the limit of distributions of number
of k-simplices, Betti numbers and Euler characteristics. Besides, we find
some statistics of the studied quantities, like the mean and variance of
k-simplices and the mean of the Euler characteristics. The simplicial
complex represents the simplest protocol of decentralized sensor networks,
in which sensors can only receive/transmit their IDs from/to nearby sensors.
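As a concrete (and much simplified) illustration of the objects involved, the sketch below samples a Poisson point process in the unit square, builds the Rips complex at a given scale up to dimension two, and counts simplices; the Euler characteristic shown is therefore truncated at 2-simplices.

```python
# Vietoris-Rips complex of a Poisson point process, up to 2-simplices.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = rng.poisson(100)                    # Poisson number of points in the unit square
pts = rng.random((n, 2))
r = 0.1                                 # Rips scale

dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
edges = {(i, j) for i, j in combinations(range(n), 2) if dist[i, j] <= r}
triangles = [t for t in combinations(range(n), 3)
             if all(pair in edges for pair in combinations(t, 2))]

print("vertices:", n, "edges:", len(edges), "triangles:", len(triangles))
print("Euler characteristic (truncated at dimension 2):", n - len(edges) + len(triangles))
```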
In 1999, Nick Chater published a seminal article entitled "The search for simplicity: A fundamental cognitive principle". It is likely that, at the time, he even underestimated the scope of this simplicity principle, whose hitherto unsuspected importance in the selection and organization of the information processed by our brain we are only beginning to appreciate. The simplicity principle allows qualitative and quantitative predictions in domains traditionally reputed to be opaque to modelling. For example, the events we find interesting (improbable, emotional, worth communicating) are systematically accompanied by a contrast in complexity. Simplicity theory leads to a redefinition of the notion of information.
Simplicity theory website: http://www.simplicitytheory.org
The talk presents the performance of pipeline network coding for multicast stream distribution in high loss rate MANET scenarios. Most of the previous network coding implementations have been based on batch network coding, where all blocks in the same batch are mixed together. Batch coding requires that the entire batch is received before decoding at destination. Thus, it introduces high decoding delays that impact the stream reception quality. Instead of waiting for the entire batch (i.e., generation), pipeline network coding encodes/decodes packets progressively. Consequently, pipeline network coding yields several benefits: (1) reduced decoding delay, (2) further improved throughput, (3) transparency to higher layers (UDP, TCP, or other applications), (4) no special hardware support and (5) easier implementation. We show performance gain of pipeline coding compared to batch coding via extensive simulation experiments.
Dr. Gerla was born in Milan, Italy. He received a graduate degree in engineering from the Politecnico di Milano, in 1966, and the M.S. and Ph.D. degrees in engineering from UCLA in 1970 and 1973, respectively. He joined the Faculty of the UCLA Computer Science Department in 1977. His research interests cover the performance evaluation, design and control of distributed computer communication systems and high speed computer networks (B-ISDN and Optical Networks).
Robust reception of audiovisual signals (and especially video) has been widely studied, demonstrating that the so-called Joint Source and Channel Decoding (JSCD) approach can improve receiver performance while remaining compatible
with existing standards (in fact, it makes the best possible use of the received data). However, in contrast with most approaches dealing with cross-layer strategies, JSCD does not take into account the actual structure of the communication chain, in which the network layers packetize the data, add headers, etc. Several consequences result from this fact: (i) JSCD does not make use of the redundancy introduced by the network layers, e.g., CRCs, even though this redundancy may be quite large; (ii) JSCD implicitly
assumes that a large part of the actual bitstream (essentially headers of all sorts) is received without errors; (iii) JSCD in its initial statement is not compliant with most transmission systems.
This talk will introduce Joint Protocol and Channel Decoding (JPCD), to be used jointly with JSCD, whose aim is to resolve these inconsistencies. More specifically, the talk will focus on two problems addressed using JPCD, namely reliable packet synchronisation and header recovery. In both cases, JPCD exploits the redundancy present in the protocol stack to improve the performance of data transmission over wireless networks.
Michel Kieffer obtained in 1995 the Agregation in Applied Physics at the Ecole Normale Superieure de Cachan, France. He received a PhD degree in Control and Signal Processing in 1999, and the HDR degree in 2005, both from the Univ Paris-Sud, Orsay, France.
Michel Kieffer is an assistant professor in signal processing for communications at the Univ Paris-Sud and a researcher at the Laboratoire des Signaux et Systemes, CNRS - SUPELEC - Univ Paris-Sud, Gif-sur-Yvette, France. From September 2009 to September 2010, he was an invited professor
at the Laboratoire Traitement et Communication de l'Information, CNRS - Telecom ParisTech. His research interests are in joint source-channel coding and decoding techniques for the reliable transmission of multimedia contents. He is also interested in guaranteed parameter and state estimation for systems described by non-linear models.
Michel Kieffer has co-authored more than 90 contributions in journals, conference proceedings, or books. He is one of the co-authors of the book "Applied Interval Analysis" published by Springer-Verlag in 2001 and of the book "Joint Source-Channel Decoding: A Cross-Layer Perspective with Applications in Video Broadcasting" published by Academic Press in 2009. He has been an associate editor of Signal Processing since 2008.
We discuss a novel approach for the sound orchestration of services,
based on expressing jointly behaviours and their types. We introduce
Orcharts, a behaviour language for service orchestration and Typecharts,
an associated behavioural typing language. Sessions play a pivotal
role in this approach. Orcharts (orchestration charts) define session
based services and Typecharts provide for session types with complex
interaction patterns that generalise the request/response interaction
paradigm. We provide an algorithm for deciding behavioural
well-typedness and discuss the properties of well-typed Orcharts.
(Joint work with Alessandro Fantechi).
We present some theoretical and heuristic estimates for the number of
elliptic curves with low embedding degree, which is essential for their
applicability in pairing-based cryptography. We also give estimates
for the number of fields over which such curves may exist. The main
ideas behind the proofs will be explained as well. Finally, we give a
heuristic analysis of the so-called MNT algorithm and show that it
produces a rather "thin" sequence of curves.
The purpose of this working group is to provide opportunities for
exchange between academic and industrial researchers on topics
related to network optimization in all application domains:
water, energy, logistics, telecommunications, transport, etc.
Networks model problems whose solution calls upon various areas of
optimization: discrete mathematics, exact or approximate methods,
flow and multicommodity flow models, continuous optimization,
deterministic and stochastic optimization, linear and non-linear
programming, graph theory, etc.
Following the previous meetings (27 October 2006 at the Institut
Henri Poincare, 25 October 2007 at Gaz de France, 14 May 2008 at
Orange Labs R&D, 13 October 2008 at the Ecole nationale des ponts et
chaussees, 9 September 2009 at the Union internationale des chemins
de fer), this one-day meeting is intended for researchers, students
and industry practitioners who wish to share their points of view and
expectations regarding network optimization.
Registration is free but mandatory. If you wish to attend, please let
us know as soon as possible by sending your last name, first name and
affiliation to Olivier Hudry at hudry [at] enst [dot] fr.
If you wish to present a talk (about 30 minutes), please also provide
a title (an abstract of a few lines will be needed as well, but it
can be sent later).
Details about the programme of the day and about how to reach
Telecom ParisTech will be provided later.
Derivative products: you have heard of them, and you have been told that they
involve terribly complicated mathematics. Not necessarily, in fact. We will
try to show here that their underlying principles can nevertheless be
understood with simple tools such as linear systems of two equations in two
unknowns, coin tossing, and a pinch of plane geometry.
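As a taste of the "two equations, two unknowns" viewpoint (a standard one-period binomial example with made-up numbers, not material from the talk): the price of a call option follows from finding the portfolio of stock and cash that replicates its payoff in both outcomes of the coin flip.

```python
# One-period binomial option pricing by replication: a 2x2 linear system.
import numpy as np

S0, up, down, K, rate = 100.0, 1.2, 0.8, 100.0, 0.05   # illustrative numbers

payoff_up = max(S0 * up - K, 0.0)       # option payoff if the "coin" lands heads
payoff_down = max(S0 * down - K, 0.0)   # option payoff if it lands tails

# Find (shares, cash) replicating the payoff in both states:
#   shares*S0*up   + cash*(1+rate) = payoff_up
#   shares*S0*down + cash*(1+rate) = payoff_down
A = np.array([[S0 * up, 1 + rate],
              [S0 * down, 1 + rate]])
b = np.array([payoff_up, payoff_down])
shares, cash = np.linalg.solve(A, b)
print("option price today:", shares * S0 + cash)   # cost of the replicating portfolio
```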
Resource partitioning is increasingly used in the development of
real-time and embedded systems to enable independent development of
resource-aware components, ensure performance of components as well as
their independence. Several analytical techniques based on real-time
scheduling theory have been proposed in the literature. These
techniques are defined for specific task models and are hard to
generalize. On the other hand, scheduling analysis techniques based on
formal methods are more readily applicable to arbitrary task models. However,
existing approaches are limited to modeling task demand and assume full
resource availability. In this work, we present a formal model for
resource design and supplies inspired by the process algebra ACSR. We
explicitly represent requested as well as granted resources and define
parallel composition of tasks and supplies by matching resource requests
with resource grants. Based on these notions, we develop a
compositional theory for schedulability analysis.
Oleg Sokolsky is a Research Associate Professor at the University of
Pennsylvania Department of Computer and Information Science. He has
studied a wide variety of topics related to the development of
high-assurance embedded and real-time systems, in particular the
application of formal methods to timing analysis. He received his Ph.D.
in Computer Science from Stony Brook University in 1996.
We consider Euclidean lattices, i.e., Z-modules of rank n in R^n.
Several constructions of such lattices are known:
- construction from codes defined over finite fields,
- construction of the lattice from its automorphism group,
- gluing theory, where the lattice is the direct sum of sublattices,
- the layered construction, giving rise to laminated lattices,
- and, more recently, the so-called LDLC lattices (Low-Density
Lattice Codes), where the inverse of the generator matrix of the
lattice is sparse.
We describe a new construction of real lattices (LDLC and non-LDLC)
with full diversity under maximum-likelihood decoding. Then, by
examining the binary image of the lattice matrix, we establish a
second construction of real LDLC lattices with full diversity under
iterative probabilistic decoding.
In wireless networks, transmissions can be overheard by unintended nodes in the vicinity of the sender and receiver, potentially causing interference to their own communications. The research literature
abounds with "solutions" that attempt to overcome the interference
using scheduling, channel assignment, and many other mechanisms. On the
other hand, in recent years, there has been growing attention to
methods that aim to take advantage of the broadcast nature of the
wireless medium and the ability of nodes to overhear their neighbors’
transmissions. Two of the most important such methods are opportunistic
routing (OR) and wireless network coding (NC). In this talk, I overview
the principles of these methods and study the potential benefits of
forwarding schemes that combine elements from both the OR and NC
approaches, when traffic on a bidirectional unicast connection between
two nodes is relayed by multiple common neighbors. In particular, I
will present a dynamic programming algorithm to find the optimal scheme
as a function of link error rates, and demonstrate that it can achieve
up to 20% reduction in the average number of transmissions per packet
compared to either OR or NC employed alone, even in a simple scenario
of two common neighbors between the connection endpoints.
Lavy Libman is a senior lecturer in the School of Information
Technologies, University of Sydney, which he joined in February 2009. He
also continues to be associated with the Networked Systems research
group at NICTA (formerly National ICT Australia), where he was a
researcher since September 2003. He is currently serving as a TPC
co-chair of ICCCN 2010 and WiOpt 2010, a guest editor of the Journal of
Communications (JCM) special issue on Road and Vehicular Communications
and Applications, and is regularly involved in the committees of several
other international conferences. He is an IEEE senior member and
holds a PhD degree in Electrical Engineering from the Technion - Israel
Institute of Technology since 2003. His research revolves around the
design and optimization of wireless and mobile networks, with a
particular interest in cooperative and opportunistic techniques and
game-theoretic modeling.
For a binary (plus/minus one) finite sequence, the peak sidelobe
level (PSL) is defined as the maximum, over nonzero shifts, of the
scalar product of the sequence with its aperiodically shifted version.
Binary sequences with low PSL are of importance for synchronization
in time and for determining position and distance to an object. In
theoretical physics, study of the PSL landscape was introduced by
Bernasconi via the so-called Bernasconi model, which is fascinating
for the fact of being completely deterministic, but nevertheless
having highly disordered ground states (sequences with the lowest PSL)
and thus possessing striking similarities to the real glasses (spin
glass models), with many features of a glass transition exhibited.
The problem of designing and characterizing sequences with low PSL
has been attacked for at least fifty years, however our knowledge
is still far from being satisfactory.
In the talk I will survey the main open issues and report on
several new results:
- We show that the typical PSL of binary sequences is proportional
to \sqrt{n \ln n}, thus improving on the best earlier known
result due to Moon and Moser and settling in the affirmative a
conjecture of Dmitriev and Jedwab;
- We show that the maximum PSL over m-sequences is proportional
to 2^{m/2} \ln m, thus disproving a long-standing conjecture that
it is of order 2^{m/2}.
The results are partly due to cooperation with N. Alon,
Ye. Domoshnitsky, A. Shpunt and A. Yudin.
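The PSL definition above is easy to compute directly for a given sequence, as in the short sketch below (the Barker sequence of length 13 is a classical example with the lowest possible sidelobe level):

```python
# Peak sidelobe level: maximum aperiodic autocorrelation magnitude
# over nonzero shifts of a +/-1 sequence.
import numpy as np

def psl(seq):
    s = np.asarray(seq, dtype=float)
    n = len(s)
    return max(abs(np.dot(s[:n - k], s[k:])) for k in range(1, n))

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(psl(barker13))   # Barker-13 has PSL 1
```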
Recently, several very efficient so-called side-channel attacks on
cryptographic devices and smartphones have been developed. The goal of
these attacks is to learn the secret information (such as a private
key, password, etc). For example, it takes less than a minute to break
an Advanced Encryption Standard (AES) device and get a 128-bit secret
key.
I will present the design approach for reliable and secure devices
with built-in self-protection against these attacks. When the device
detects that it is under attack, it disables itself. The approach is
based on special robust error detecting codes which we developed in
our laboratory. As opposed to classical error-detecting codes for
communication channels, the proposed robust codes have uniformly
distributed error-detecting capability and provide equal
protection against all errors. These codes can provide high
reliability and high security even against strong attacks in which the
attacker knows the code being used and can inject any error pattern.
I will describe the constructions for these codes and their
applications for design of reliable and secure hardware for computer
memory, multipliers and for hardware for AES. We will present some
experimental data on the required overheads and power consumption and
compare the proposed approach based on robust codes with approaches
based on randomized linear codes.
A large number of automatic tasks on real-world data
generate imprecise results, e.g., information extraction, natural
language processing, data mining. Moreover, in many of these tasks,
information is represented in a semi-structured way, either due to an
inherent tree-like structure of the original data, or because it is
natural to represent derived information or knowledge in a hierarchical
manner. A number of recent works have dealt with representing uncertain
information in XML. We present a high-level overview of these works,
discussing in particular models, expressiveness, and query efficiency.
Finally, we aim at providing insight into important open problems for
probabilistic XML, by discussing the connection with relational database
models, the limitations of existing frameworks, and other topics of
interest.
I will begin by briefly introducing calm computing and how it relates to
Wireless Sensor Networks (WSN) and Autonomic computing. Then I will
present a very simple protocol we developed to provide self-* in WSN and
show some of its behaviours and how we optimised it. This protocol is
based on established bio-inspired approaches, therefore I’m hoping I can
highlight how easy, and yet how difficult, adapting such techniques can be,
specifically on a WSN, and hope to provide some insight into how one
should engineer emergent solutions in general.
McCann’s work centers around architectures, algorithms, protocols and
tools that allow computer systems to self-adapt to their environment to
improve their performance or quality therein. She has published
extensively in the areas of computer performance, dynamic operating
systems, database machines and text retrieval systems. More recent work
has focused on self-adaptive and self-management of content delivery
systems, mobile computing and wireless sensor networking algorithms. For
her earlier work in text retrieval, she was recently co-awarded Emerald
Literati Network "Highly Commended". To date, she has been PI on 13 major
government/industry funded research projects, and is CI on three EPSRC
Networks of excellence. She has supervised five PhD students to successful
completion, examined many more and currently leads a nine strong team
researching Self-adaptive and Bio-inspired Computing, which have developed
the Beastie wireless sensor node.
All her projects are interdisciplinary, applied to the arts or engineering
- collaborating with RCA; The University of the Arts; Interactive
Institute Stockholm etc.; as well as many other industrial partnerships
such as with Sun Microsystems, Thames and Severn Trent Water, BT, Arup
Engineering and BBC to name but a few.
She is an active programme committee member for many of the self-managing
adaptive computing journals and conferences. She has also co-chaired
conferences such as IEEE Intl. Conf. on Complex Open Distributed Systems
and the ACM Intl. Conf. on Pervasive Services 2008 and 2009 (General
Chair). McCann is regularly invited to talk on self-adaptive computing to
diverse audiences and has been an invited panel member ACM/IEEE
International Conference in Autonomic Computing and IFIP/IEEE
International Symposium on Integrated Network Management. She is a member
of the BCS, IEEE and a Chartered Engineer.
Reduction of unnecessary energy consumption is becoming a major concern in wired
networking, because of both the potential economic benefits and the forecast
environmental impact. These issues, usually referred to as “green networking”, relate to embedding energy-awareness in network design, devices and protocols.
In this tutorial, we first phrase a more precise definition of the “green” attribute, identifying furthermore a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art,
providing a taxonomy of the relevant work: from a high-level perspective, we
identify four branches of green networking research, which stem from different observations on the root causes of energy waste. These branches can be identified, namely, as (i) adaptive link rate, (ii) interface proxying, (iii) energy-aware infrastructures and (iv) energy-aware applications. The covered material will not only dig into specific proposals pertaining to each of these branches, but also offer a perspective look at the open research points.
In this talk we discuss the energy-aware cooperative management
of the cellular access networks of the operators that offer service
over the same area, and we evaluate the amount of energy that
can be saved by using all networks in high traffic conditions,
but progressively switching off networks during the periods
when traffic decreases, and eventually becomes so low that
the desired quality of service can be obtained with just one network. When a network is switched off, its customers are allowed to roam
over those that remain powered on. Several alternatives are studied,
as regards the traffic profile, the switch-off pattern, the energy
cost model, and the roaming policy.
Numerical results indicate that a huge amount of energy can be saved
with an energy-aware cooperative management of the networks, and
suggest that, to reduce energy consumption, and thus the cost
to operate the networks, new cooperative attitudes of the operators
should be encouraged with appropriate incentives.
Marco Ajmone Marsan is a Full Professor at the Electronics Department of Politecnico di Torino, in Italy, and a part-time Chief Researcher at IMDEA Networks (www.imdea.org/networks) in Madrid. He is the founder and the leader of the Telecommunication Networks Group at the Electronics Department of Politecnico di Torino.
Marco Ajmone Marsan holds degrees in Electronic Engineering from Politecnico di Torino and University of California, Los Angeles.
Marco Ajmone Marsan was at Politecnico di Torino's Electronics Department from November 1975 to October 1987, first as a researcher and then as an Associate Professor. He was a Full Professor at the University of Milan's Computer Science Department from November 1987 to October 1990. From September 2002 to March 2009 he was the Director of the Institute for Electronics, Information and Telecommunications Engineering of the National Research Council. From 2005 to 2009 he was the Vice-Rector for Research, Innovation and Technology Transfer at Politecnico di Torino.
During the summers of 1980 and 1981, he was with the Research in Distributed Processing Group, Computer Science Department, UCLA. During the summer of 1998 he was an Erskine Fellow at the Computer Science Department of the University of Canterbury in New Zealand.
He has co-authored over 300 journal and conference papers in Communications and Computer Science, as well as the two books "Performance Models of Multiprocessor Systems," published by the MIT Press, and "Modelling with Generalized Stochastic Petri Nets," published by John Wiley.
In 1982, he received the best paper award at the Third International Conference on Distributed Computing Systems in Miami, Florida. In 2002, he was awarded an honorary doctoral degree in Telecommunications Networks from the Budapest University of Technology and Economics. He was named Commander of the Order of Merit of the Italian Republic in 2006. He is the chair of the Italian Group of Telecommunications Professors, and the Italian Delegate in the ICT Committee of the 7th Framework Programme of the EC.
He has been the principal investigator in national and international research projects dealing with telecommunication networks. His current interests are in the performance evaluation of communication networks and their protocols.
Marco Ajmone Marsan is a Fellow of the IEEE and a corresponding member of the Academy of Sciences of Torino. He is a member of the steering committee of the IEEE/ACM Transactions on Networking and serves on the editorial boards of several international journals, including Elsevier's Computer Networks. He is listed by ISI among the highly cited researchers in Computer Science and is a member of the Gruppo 2003, the association of Italian Highly Cited Scientists.
In the next few years, with rapid adoption of smart phones that allow end-users any-time, any-where Internet access, mobile data traffic is expected to increase exponentially. The wireless service providers face two main challenges as they address this new trend: (1) as the per-user throughput requirements scale to multi-Mbps, how to scale the networks to achieve dramatic improvements in wireless access and system capacity, and (2) in the face of declining ARPU and increasing competition, how to reduce cost of deploying and operating the network.
This talk represents our attempt to peer into the crystal ball and predict how cognitive radio technologies, specifically Spatio-temporal demand tracking, Dynamic Spectrum Access (DSA), energy management and Self-X will help meet these challenges and usher in a new transformation in cellular networks. We discuss in detail technologies in the following key areas: (1) DSA for capacity augmentation in macro-cells, (2) Ultra-broadband small and femto cells using spectrum white spaces, (3) Self-X (X=configure, monitor, diagnose, repair and optimize) for LTE networks, and (4) Energy management. We show that the application of cognitive radio ideas to infrastructure cellular networks can bring great benefits by achieving a balance between complexity, practical realizability, performance gains and true market potential.
Harnessing the Power and Promise of Distributed Software and Systems
We live in a connected world - an interdependent web of economic, social and political entities. Actions of one potentially affect the behaviors of all. Whether you believe that information technology helped spawn this connected and interdependent world or merely co-evolved with it, it is clear that modern software and systems engineering has been profoundly impacted. This conference is dedicated to the development of software and systems that harness the power in this web rather than fall victim to it.
The ubiquitous spread of the Internet and the World Wide Web has had a twofold impact on software and systems engineering: it has changed the way enterprises interact with their customers and partners, and it has changed the way they develop software applications. Among the more common approaches in use are service-oriented architectures, web services, SaaS, cloud computing, P2P, grids, JBI and SCA. These approaches leverage the distributed nature of systems to make them more flexible, adaptable, and better suited to users' needs. In contrast to the tightly-coupled designs of older technologies, modern software and systems deliver capability through a kaleidoscope of loosely-coupled elements riding on infrastructure. Oftentimes, project management is also a distributed process: not only are teams geographically spread, but a team must also cooperate with members outside its own organization in order to successfully complete a project. Software and systems producers can no longer act in total isolation; they must themselves become dependent on component or service providers, on VARs, and even on proactive customers.
Co-organized by TELECOM ParisTech, CS Communication and Systems, and the Genie Logiciel quarterly, the 22nd edition of the ICSSEA Conference (International Conference on Software and Systems Engineering and their Applications) will be held in Paris on December 7-9, 2010. By gathering actors from across the enterprise and research worlds, it aims at providing a critical survey of the status of tools, methods, and processes for elaborating software & systems. Lectures and discussions will be conducted with the issues of coupling and interoperability in distributed software and systems as the leitmotiv.
The Formal Methods in Software Development working group (Methodes Formelles dans le Developpement Logiciel, MFDL) is organizing a one-day meeting in Paris.
The aim of this day is to let PhD students and post-docs present their research work, and to let the teams involved in the MFDL group present their current research questions through their involvement in projects.
Program: http://membres-liglab.imag.fr/idani/MFDL/programme.html
IP networks today are very large, complex systems running a wide variety of applications and services with diverse and evolving performance needs. Network management is therefore constantly being thrust into new realms, and must evolve to support such a wide range of complex services at massive scale. In this talk I shall first give an overview of the types of problems and challenges that ISP-scale network operators have to address. I shall then present recent work in two specific areas of network management:
Understanding Emerging Traffic Trends -
Recent research analyzing worldwide Internet traffic indicates that HTTP (Hypertext Transfer Protocol) accounts for a majority of residential broadband traffic by volume. Originally developed for human-initiated client-server communications launched from web browsers running on traditional computers and laptops, HTTP has today become the protocol of choice for a wide range of applications running on a diverse array of emerging devices such as smart TVs and gaming consoles. Here I shall present our study of these new sources of HTTP traffic for residential broadband Internet users.
Resource Management -
In a UMTS (Universal Mobile Telecommunications System) 3G network, one of the most popular 3G mobile communication technologies, a key factor affecting application performance and network resource usage is the Radio Resource Control (RRC) state machine. The purpose of the state machine is to efficiently manage the limited radio resources and to conserve handset battery life. I will explore the impact of operational state machine settings, illustrate inefficiencies caused by the interplay between smartphone applications and the state machine behavior, and explore techniques to improve performance and resource usage.
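As an illustration only, a simplified three-state RRC machine with inactivity timers can be sketched as below; the states, timer values and promotion rule are assumptions for the example, not operator settings or the exact machine studied in the talk.

```python
# Minimal sketch of a simplified UMTS RRC state machine (illustrative only).
# Timer values and the promotion rule are assumptions, not measured settings.

IDLE, CELL_FACH, CELL_DCH = "IDLE", "CELL_FACH", "CELL_DCH"

class SimpleRRC:
    def __init__(self, t_dch_fach=5.0, t_fach_idle=12.0):
        self.state = IDLE
        self.t_dch_fach = t_dch_fach    # inactivity timer before DCH -> FACH demotion
        self.t_fach_idle = t_fach_idle  # inactivity timer before FACH -> IDLE demotion
        self.idle_for = 0.0

    def on_traffic(self, large_burst=True):
        # Any data activity promotes the handset; large bursts go straight to DCH.
        self.state = CELL_DCH if large_burst else CELL_FACH
        self.idle_for = 0.0

    def tick(self, dt=1.0):
        # Called for each second of inactivity: demote when a timer expires.
        self.idle_for += dt
        if self.state == CELL_DCH and self.idle_for >= self.t_dch_fach:
            self.state, self.idle_for = CELL_FACH, 0.0
        elif self.state == CELL_FACH and self.idle_for >= self.t_fach_idle:
            self.state, self.idle_for = IDLE, 0.0

rrc = SimpleRRC()
rrc.on_traffic()            # e.g. an application keep-alive message
for _ in range(20):
    rrc.tick()
print(rrc.state)            # back to IDLE once both timers have expired
```

In such a model, frequent small application keep-alives keep the handset oscillating between states, which is exactly the kind of interplay and inefficiency the talk examines.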
Dr. Subhabrata Sen is a Principal Member of Technical Staff in the Networking & Services Research Laboratory at AT&T Labs-Research. He received a Bachelor of Engineering (First Class with Honors) degree in Computer Science from Jadavpur University, India, and M.S. and Ph.D. degrees in Computer Science from the University of Massachusetts, Amherst. His research interests span IP network management and include configuration management, network measurements, network data mining, traffic analysis, and network and application performance.
Dr. Sen has published more than 70 research articles and owns 8 issued patents. He received the AT&T CTO Innovation Award in 2008. He is a member of the IEEE and ACM and his web page is http://www.research.att.com/ sen.
In this work, we analyze the design of green routing algorithms and evaluate the energy savings that such mechanisms could achieve in several realistic network scenarios. We formulate the problem as a minimum-energy routing optimization, which we solve numerically in a core-network scenario; this can be seen as a worst case for energy-saving performance, since nodes cannot be switched off. To obtain a complete picture, we analyze the energy savings under various conditions (i.e., network topology and traffic matrix) and under different technology assumptions (i.e., the energy profile of the network devices).
These results give us insight into the potential benefits of different "green" technologies and their interactions. In particular, we show that, depending on the topology and traffic matrices, the optimal energy savings can be modest, partly limiting the interest of green routing approaches in some scenarios. At the same time, we also show that the common belief that there is a trade-off between green network optimization and performance does not necessarily hold: in the considered environment, green routing has no effect on the main network performance metrics, such as maximum link utilization.
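For concreteness, a generic minimum-energy routing formulation (a sketch in the standard form of such problems, not necessarily the exact program solved in this work) activates a link $(i,j)$ at energy cost $P_{ij}$ only if some traffic is routed over it:

$$
\min \sum_{(i,j)\in E} P_{ij}\,x_{ij}
\quad \text{s.t.} \quad
\sum_{j} f^{d}_{ij} - \sum_{j} f^{d}_{ji} = b^{d}_{i} \;\; \forall i,d,
\qquad
\sum_{d} f^{d}_{ij} \le C_{ij}\,x_{ij} \;\; \forall (i,j)\in E,
\qquad
x_{ij}\in\{0,1\},\; f^{d}_{ij}\ge 0,
$$

where $f^{d}_{ij}$ is the flow of demand $d$ on link $(i,j)$, $C_{ij}$ the link capacity and $b^{d}_{i}$ the usual source/sink balance of demand $d$; in the core-network scenario above only links, not nodes, can be put to sleep.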
The Internet is revolutionizing distributed computing and multimedia applications a little more every day. The web is a fundamental actor of the world economy, in an emerging environment where fourth-generation radio technologies (4G, WiMAX) amplify the always-on paradigm. In this context, security is often cited as a prerequisite and a major (financial) stake. The reality, however, seems quite different: although cryptography has produced solid algorithms and formal proofs exist for protocols such as SSL, Internet users juggle (too) many passwords and surf with computers devoid of physical security. Moreover, a permanently connected society amplifies the big-brother effect, that is, the contradiction between privacy and the traceability induced by embedded radio technologies. In this talk we will present some significant results as well as the security model we have built, based on a three-level stack: access security, VPN security, and application security. The originality of this approach is to rely on a collaborative architecture between terminals or servers that have large processing power but no physical security, and security modules (smart cards) that guarantee strong security but offer limited computing capabilities. This is pragmatic research: we collaborate closely with industry but also with standardization bodies such as the IETF. EtherTrust, a spin-off founded in 2007 from this work, aims to commercialize solutions, hardware and software dedicated to the security of a converged all-IP world. Its main originality is to rely on smart card technology, whose main industrial actors are European and more particularly French.
In this presentation, we present an analytical solution for the performance analysis of various frequency reuse schemes in an OFDMA-based cellular network (such as WiMAX or LTE networks). We study the downlink performance in terms of signal-to-interference ratio (SIR) and cell capacity. Analytical models are proposed for integer frequency reuse (IFR), fractional frequency reuse (FFR) and two-level power control (TLPC) schemes. These models are based on a fluid model originally proposed for CDMA networks; the key modeling idea of this approach is to treat the discrete base station entities as a continuum. To validate our approach, Monte Carlo simulations are carried out. The validation study shows that the results obtained through our analytical method are in agreement with those obtained through simulations; however, compared to time-consuming simulations, our model is very time-efficient. We also present a comparison between the three frequency reuse schemes.
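To fix ideas, here is the generic shape of the fluid-model computation (a hedged sketch; the exact integration bounds and the per-scheme extensions for IFR, FFR and TLPC are what the talk works out): the discrete interfering base stations, of surface density $\rho_{BS}$, are replaced by a continuum, so that for a mobile at distance $r$ from its serving station

$$
\mathrm{SIR}(r) \;\approx\; \frac{P\, r^{-\eta}}{\displaystyle \int_{\mathcal{A}} \rho_{BS}\, P\, \lVert u \rVert^{-\eta}\, \mathrm{d}u },
$$

where $\eta$ is the path-loss exponent and $\mathcal{A}$ is the region occupied by interfering stations (excluding a neighborhood of the serving cell). Replacing the discrete interference sum by this integral is what yields closed-form SIR and capacity expressions.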
Guillaume Valadon is a post-doc in the Complex Networks team ( http://complexnetworks.fr/ ) at LIP6. During his PhD, carried out between Japan and France, he worked on mobility in a broad sense, notably MANET and Mobile IPv6, as well as the security of these protocols (see http://valadon.complexnetworks.fr/ ).
In this talk, he will discuss two improvements to Mobile IPv6, one practical and one theoretical. These two complementary approaches, compatible with the current Internet infrastructure, make it possible to handle mobility transparently for both the network and fixed devices.
In a second part, he will discuss ongoing work on the dynamics of the Internet topology and on the analysis of the traffic of an eDonkey server. In particular, he will address the measurement of the Internet topology from a source towards a set of destinations, and the evolution of this topology over time. Regarding eDonkey, preliminary results aiming at identifying exchanges of paedophile files will be discussed.
Every terminal today is equipped with several interfaces using different radio technologies. It becomes possible to use the different interfaces simultaneously, not merely to switch from one network to another. Terminals compete with each other for access to the resources of the different networks, each seeking to satisfy a local optimum (a so-called selfish strategy). This raises a multi-objective optimization problem that can be studied using game theory. In our model, terminals run several applications at the same time and can associate each application with a specific interface in order to maximize their utility function. We show that with an appropriate pricing mechanism, the system converges to equilibria that achieve a global optimum. Finally, we present some perspectives of this work.
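As a purely illustrative toy model (the capacities, prices and utility below are my assumptions, not the talk's model or pricing mechanism), selfish best-response dynamics for assigning applications to interfaces can be sketched as follows.

```python
# Toy best-response dynamics: terminals selfishly assign applications to radio
# interfaces. Capacities, prices and the utility are illustrative assumptions.

capacities = {"wifi": 10.0, "cellular": 6.0}   # assumed capacities (Mb/s)
prices     = {"wifi": 0.4,  "cellular": 0.1}   # assumed per-application prices

apps = [("t1", "video"), ("t1", "voip"), ("t2", "video"), ("t3", "web")]
assignment = {app: "wifi" for app in apps}     # everybody starts on WiFi

def utility(app, net, assign):
    """Selfish utility: equal share of the network capacity minus the price."""
    users_on_net = sum(1 for a in apps if assign[a] == net)
    return capacities[net] / users_on_net - prices[net]

for _ in range(50):                            # bounded number of best-response rounds
    moved = False
    for app in apps:
        def value(net):
            trial = {**assignment, app: net}   # hypothetical move of this application
            return utility(app, net, trial)
        best = max(capacities, key=value)
        if value(best) > utility(app, assignment[app], assignment) + 1e-9:
            assignment[app] = best
            moved = True
    if not moved:
        break                                  # no terminal wants to deviate: equilibrium

print(assignment)
```

With prices set appropriately, such dynamics settle on an assignment no terminal wants to deviate from; aligning that equilibrium with the global optimum is the point of the pricing mechanism studied in the talk.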
Minh Anh Tran graduated from Ecole Polytechnique in 2004. He then did a PhD in Computer Science and Networks in the TREC team at ENS Ulm. Last year he did a postdoc at Stanford, and he now works with Nadia Boukhatem on the 3MING project.
This talk focuses on how network software adapts to user needs, load variations and failures to provide reliable communications in largely unknown networks. For more details, see "Steps toward self-aware networks", Communications of the ACM, Volume 52, Issue 7, July 2009.
Erol Gelenbe (FACM, FIEEE, FIEE) holds the Dennis Gabor Chair at Imperial College. He has made decisive contributions to product form networks by inventing G-networks (Gelenbe networks), with totally new types of negative customers, triggers, and resets, which are characterised by non-linear traffic equations. He has made seminal contributions to random access communications, the optimisation of reliability in database systems, the design of adaptive QoS-aware packet networks, diffusion models in performance analysis, and the performance of link control protocols.
Most models of computation, whether derived from the Turing machine (finite automata, pushdown automata, counter automata, etc.) or not (rewriting systems, Petri nets, etc.), aim at distinguishing, among sequences of symbols or sequences of actions, those that are accepted, or correct, or valid, from those that are not. Weighted automata (automates avec multiplicite) make it possible to associate with each sequence a coefficient, an element of the universe chosen to model the phenomenon under study, which gives more information about the sequence than a mere 0 or 1. This opens the way to the study of quantitative systems.
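As a toy illustration of this definition (my example, not the speaker's), the weight of a word in an N-automaton is obtained by multiplying the transition matrices of its letters between the initial and final vectors.

```python
# Tiny weighted (N-)automaton: the weight of w = a1...an is I · M(a1)···M(an) · F.
# This 2-state automaton over {a, b} computes the number of b's in the word,
# a classic textbook example used here only to illustrate the definition.
import numpy as np

I = np.array([[1, 0]])                 # initial (row) vector
F = np.array([[0], [1]])               # final (column) vector
M = {
    "a": np.array([[1, 0], [0, 1]]),   # 'a' keeps the state
    "b": np.array([[1, 1], [0, 1]]),   # 'b' adds one path to the accepting state
}

def weight(word):
    v = I
    for letter in word:
        v = v @ M[letter]
    return int((v @ F)[0, 0])

print(weight("abba"))   # 2: the word contains two b's
```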
The modeling possibilities are thus multiplied without limit, but the study of these weighted automata is naturally more complex. In recent years, together with my students, I have developed so-called structural techniques that make it possible to analyze, beyond a result, the computations that lead to this result. In this talk, I will give an overview of them through the proof of the following result:
If two rational languages L and K (i.e., accepted by a finite automaton) have the same generating function, that is, for every integer n there is the same number of words of length n in L and in K, then there exists a letter-to-letter finite transducer (a finite automaton whose transitions are all labelled by a pair of letters) that realizes a bijection between L and K.
This statement, which settles an obscure conjecture of the confidential theory of automatic structures, is a consequence of a refinement of the decidability of the equivalence of two finite automata with multiplicities in N, and is in fact the pretext for presenting it: two N-automata are equivalent if, and only if, they are conjugate, by matrices with coefficients in N, to a common third automaton, together with the interpretation of conjugacy as a sequence of coverings and co-coverings realized by merging and splitting states.
All this is drawn from joint work with Marie-Pierre Beal and Sylvain Lombardy, of Universite Paris-Est, Marne-la-Vallee.
In recent years, there has been a rapid growth in deployment and usage of realtime network applications, such as Voice-over-IP, video calls/video conferencing, live network seminars, and networked gaming. At the same time, wireless networking technologies have become increasingly popular with a wide array of devices such as laptop computers, Personal Digital Assistants (PDAs), and cellular phones being sold with built-in WiFi and WiMAX interfaces. For realtime applications to be popular over wireless networks, simple, robust and effective QoS mechanisms suited for a variety of heterogeneous wireless networks must be devised.
To provide guaranteed QoS, an access network should limit load using an admission control algorithm. In this research, we propose a method to provide effective admission control for variable bit rate realtime flows, based on the Central Limit Theorem. Our objective is to estimate the percentage of packets that will be delayed beyond a predefined delay threshold, based on the mean and variance of all the flows in the system. Any flow that will increase the percentage of delayed packets beyond an acceptable threshold can then be rejected. Using simulations we have shown that the proposed method provides a very effective control of the total system load, guaranteeing the QoS for a set of accepted flows with negligible reductions in the system throughput.
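A hedged numerical sketch of the idea (the exact estimator, delay mapping and thresholds in the actual work may differ): aggregate the per-flow means and variances, approximate the total offered load as Gaussian via the Central Limit Theorem, and admit a new flow only if the estimated probability of exceeding the service capacity, used here as a proxy for packets delayed beyond the deadline, stays below the target.

```python
# Sketch of CLT-based admission control for variable-bit-rate flows.
# Flow descriptors, the capacity and the 1% target below are assumed values.
from math import erf, sqrt

def overflow_probability(flows, capacity_mbps):
    """P(aggregate rate > capacity) under a Gaussian (CLT) approximation."""
    mean = sum(m for m, v in flows)
    var = sum(v for m, v in flows)
    if var == 0:
        return 0.0 if mean <= capacity_mbps else 1.0
    z = (capacity_mbps - mean) / sqrt(var)
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))   # Gaussian tail probability

def admit(new_flow, accepted, capacity_mbps=90.0, target=0.01):
    return overflow_probability(accepted + [new_flow], capacity_mbps) <= target

accepted = [(2.0, 1.0)] * 30          # 30 flows of mean 2 Mb/s and variance 1
print(admit((2.0, 1.0), accepted))    # admit only if the estimated tail stays under 1%
```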
We also propose a method to determine the delay-dependent "value" of a packet based on the QoS requirements of the flow. Using this value in scheduling is shown to increase the number of packets sent before a predetermined deadline. We propose a measure of fairness in scheduling that is calculated according to how well each flow’s QoS requirements are met. We then introduce a novel scheduling paradigm, Delay Loss Controlled-Earliest Deadline First (DLC-EDF), which is shown to provide better QoS for all flows compared to other scheduling mechanisms studied.
In information theory, one is mostly interested in the maximum cardinality of a set of strings of a common length n with the property that any two (or more) of the strings from the set differ in some particular manner. This happens because the strings are codewords used to transmit information through a noisy device so that the output sequences are distorted versions of the inputs, yet we need to tell them apart in order to recover, at the receiving end, the information encoded by different input strings.
In the more combinatorial zero-error problems, the pairwise difference relation between code strings can be expressed in terms of a graph. The vertices of the graph are the symbols from the input alphabet of the physical channel, and adjacent vertices correspond to symbols which cannot be confused at the receiving end. In 1956 Shannon asked for the determination of the maximum number of strings of length n such that any two of them differ in some coordinate in a pair of adjacent vertices of the graph G. The exponential asymptotics of this number, as n goes to infinity, is the capacity of the graph.
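Written out, with $N(G,n)$ denoting this maximum number of pairwise distinguishable strings of length $n$, the capacity referred to above is

$$
C(G) \;=\; \lim_{n\to\infty} \frac{1}{n}\,\log N(G,n),
$$

the limit existing by Fekete's lemma, since concatenation gives $N(G,n+m) \ge N(G,n)\,N(G,m)$: two concatenated strings that differ in either half still differ in some coordinate.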
From an abstract point of view, a difference relation for strings is a pairwise relation whose main feature is being irreflexive. A further characteristic of a difference relation is that if two strings have projections onto some subset of their coordinates that are, viewed as strings, in this relation, then this alone guarantees that the pair of strings is in the relation as well. We will refer to this property as local verifiability. It is easy to see how any difference relation leads to a concept of capacity.
We will give several examples of problems in extremal combinatorics that we can solve within this framework, using information theoretic intuition and methods.
Janos Korner was born in Budapest, Hungary, on November 30, 1946. He received the degree in mathematics at the Eotvos University, Budapest, Hungary, in 1970.
After graduation, he joined the Mathematical Institute of the Hungarian Academy of Sciences, Budapest, where he worked until he left Hungary, in 1989. From 1981 to 1983, he was on leave at AT&T Bell Laboratories, Murray Hill, NJ. At present, he is a Professor in the Department of Computer Science at the University of Rome 1 "La Sapienza", Rome, Italy.
With Imre Csiszar, he is the author of the book "Information Theory: Coding Theorems for Discrete Memoryless Systems". His main research interests are in combinatorics, information theory, and their interplay. Prof. Korner served as Associate Editor for Shannon Theory for IEEE TRANSACTIONS ON INFORMATION THEORY from 1983 to 1986.
Sensor networks are particular networks because, beyond performance considerations, their primary objective is generally to monitor a geographical area (in terms of temperature, pressure, presence of pollutants, presence of individuals, ...) for as long as possible. Indeed, these networks are composed of many elements of low individual capacity, operating on energy sources that are limited or rechargeable only at a slow rate. Since these sensors are a priori the only elements deployed, they must ensure the proper operation of the network, and in particular the forwarding of frames in ad hoc mode.
In this talk, we will start by detailing these particular constraints and will come back more specifically to the notion of network lifetime, which admits many definitions depending on the application. We will then present several pieces of work relating in particular to medium access, routing, and data combination in these networks.
Wireless sensor networks form a new family of computing systems that make it possible to observe the world with unprecedented resolution. In particular, these systems promise to revolutionize the field of environmental monitoring. Such a network is composed of a set of wireless sensors capable of collecting, processing, and transmitting information. Thanks to advances in microelectronics and wireless technologies, these systems are both small and inexpensive. This allows their deployment in various types of environments, in order to observe the evolution in time and space of physical quantities such as temperature, humidity, light or sound.
In the field of environmental monitoring, measurement systems must often operate autonomously for several months or years. Wireless sensors, however, have limited resources, particularly in terms of energy. Since radio communications are an order of magnitude more energy-hungry than using the processor, designing data-collection methods that limit data transmission has become one of the main challenges raised by this technology. In this talk, we will show that effective solutions can be provided by data prediction and compression methods drawn from machine learning.
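One widely used family of such transmission-reducing schemes, shown here only as a generic illustration and not necessarily among the specific techniques of the talk, is dual prediction: sensor and sink run the same predictive model, and the sensor transmits a reading only when the model's error exceeds a tolerance.

```python
# Generic "dual prediction" sketch: transmit only when the shared model is off
# by more than eps. The model, data and tolerance are assumptions for the example.

def run_sensor(readings, eps=0.5):
    """Return the subset of readings actually transmitted to the sink."""
    transmitted = []
    last_sent = None
    for t, value in enumerate(readings):
        prediction = last_sent            # simplest shared model: last transmitted value
        if prediction is None or abs(value - prediction) > eps:
            transmitted.append((t, value))  # update the sink (and both models)
            last_sent = value
        # otherwise the sink reconstructs the sample from the shared model
    return transmitted

temps = [20.0, 20.1, 20.2, 20.1, 22.0, 22.1, 22.0, 19.0]
sent = run_sensor(temps)
print(f"{len(sent)}/{len(temps)} samples transmitted:", sent)
```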
Sensor networks are meant to be deployed in extremely varied conditions, sometimes requiring small networks, sometimes very large ones (up to several thousand nodes). This calls for efficient solutions to split these networks into clusters. In this presentation, we will first address the problem of cluster formation. In a first part, we will present our work on the validation of a rather well-known cluster-formation algorithm, MaxMin. This algorithm is very often cited but had never been validated.
In a second part, we will report on experiments carried out on a sensor-network testbed. We will show how these networks, often based on multilevel and highly dynamic algorithms, sometimes lead to serious stability problems or to unexpected route choices. We will also address the problem of asymmetric links in sensor networks.
After P2P file-sharing and VoIP telephony applications, VoD and live-streaming P2P applications have finally gained a large Internet audience as well. A first part of this talk is therefore devoted to an overview of the current state of the art in the field of P2P-TV applications.
The remainder of the talk then focuses on the definition of a framework for the comparison of P2P applications in general, based on the measurement and analysis of the traffic they generate.
In order for the framework to be descriptive for all P2P applications, we first define the observables of interest: such metrics either pertain to different layers of the protocol stack (from the network up to the application), or convey cross-layer information (such as the degree of awareness, at the overlay layer, of properties characterizing the underlying physical network).
The framework is compact (as it allows all the above information to be represented at once), general (as it can be extended to consider metrics different from the ones reported in this work), and flexible in both space and time (as it allows different levels of spatial aggregation, as well as representing the temporal evolution of the quantities of interest). Based on this framework, we analyze some of today's most popular P2P applications, highlighting their main similarities and differences.
Traditional video compression techniques rely on the implicit assumptions that the final user's characteristics and requirements are known and do not change with time. Moreover, robustness is usually not taken into account when designing a video coder. This approach is clearly unfit for video delivery over computer networks.
In order to improve the adaptation between the encoder, the network and the users, a departure from traditional approaches is needed. The basic idea is to split the video representation into sub-streams, which may or may not be organized hierarchically. In the first case, called Scalable Video Coding (SVC), each new sub-stream refines the information provided by the previous ones, but is useless if these are not received. In the second case, called Multiple Description Coding (MDC), any set of sub-streams is decodable, but the compression performance is degraded with respect to SVC.
In this talk, after motivating the interest in SVC and MDC, a brief recap of video coding techniques will be given, followed by a description of the main scalability techniques implemented in standards. In particular, the trade-off between scalability, compression performance and complexity will be explored. Some attention will be given to the important "drift problem" for scalable video. Finally, a few words about non-standard scalable and MDC techniques will be given as well.
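To make the SVC/MDC distinction concrete, here is a toy decoding rule (purely illustrative; real codecs and their quality models are of course far more involved): with SVC a sub-stream is useful only if all lower layers arrived, whereas with MDC any received subset contributes.

```python
# Toy illustration of the SVC vs MDC decoding rules (not a real codec).

def svc_quality(received_layers, total_layers=4):
    """SVC: layer k is decodable only if layers 0..k-1 were all received."""
    usable = 0
    for layer in range(total_layers):
        if layer in received_layers:
            usable += 1
        else:
            break                      # a missing layer makes all higher ones useless
    return usable / total_layers

def mdc_quality(received_descriptions, total_descriptions=4):
    """MDC: any subset of descriptions is decodable, quality grows with their number."""
    return len(received_descriptions) / total_descriptions

lost_second = {0, 2, 3}                # sub-stream 1 was lost in the network
print("SVC:", svc_quality(lost_second))   # 0.25 -- only the base layer helps
print("MDC:", mdc_quality(lost_second))   # 0.75 -- three descriptions still decode
```

The price MDC pays for this robustness is the lower compression efficiency mentioned above, which the toy quality functions deliberately ignore.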
Network coding is a modern, elegant and efficient technique for transmitting data across a network. It allows intermediate nodes to perform linear combinations of the packets they receive before retransmitting them. To increase the flexibility of the protocol, the linear combinations are chosen at random. This technique, called random network coding, makes it possible to reach the theoretical maximum throughput and to ensure great robustness against changes in the network topology.
Despite its many advantages, random network coding is very sensitive to errors, for two reasons. First, errors have many causes: packet losses, imperfect links or nodes, an adversary on the network, etc. Second, the linear combinations performed at the intermediate nodes propagate errors across all the packets. Classical error-coding techniques are therefore ill-suited to random network coding.
In this seminar, we focus on the classes of error-correcting codes proposed for this purpose, namely rank-metric codes and subspace codes. We study in particular the performance of rank-metric codes against an adversary maliciously injecting packets into the network.
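As a minimal illustration of the mechanism (over GF(2) for simplicity; practical systems typically work over GF(2^8)), an intermediate node forwards random linear combinations of the packets it has buffered, with the coding coefficients carried in the packet header.

```python
# Minimal random linear network coding example over GF(2).
import random

def xor(p, q):
    return [a ^ b for a, b in zip(p, q)]

def recode(buffered_packets):
    """An intermediate node forwards a random GF(2) combination of its buffer."""
    coeffs = [random.randint(0, 1) for _ in buffered_packets]
    if not any(coeffs):
        coeffs[random.randrange(len(coeffs))] = 1     # avoid the all-zero combination
    payload = [0] * len(buffered_packets[0])
    for c, pkt in zip(coeffs, buffered_packets):
        if c:
            payload = xor(payload, pkt)
    return coeffs, payload                             # coding vector travels in the header

source_packets = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
mixtures = [recode(source_packets) for _ in range(5)]
for coeffs, payload in mixtures:
    print(coeffs, payload)

# A receiver recovers the originals by Gaussian elimination over GF(2) as soon as
# the collected coding vectors have full rank; a single corrupted mixture, however,
# contaminates the whole decoded block, which is what rank-metric and subspace
# codes are designed to handle.
```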
This talk describes an approach more than a result. The initial problem is an electronic design problem posed by J. de Sousa: routing combinations of signals towards a tracer is very costly in terms of the number of logic gates needed to perform this function; usually, a set of multiplexers is used. After some effort in formulating the problem, we arrive at a question of signal-routing architecture. From there it is easy to move to an extremal graph problem, and finally to a matching question in a bipartite graph. From the main result one can derive the great matching theorems established in the 1950s and 1960s. The next question, which remains open, concerns the originality of the result and will lead us to take a quick look at a question of network architecture.
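For readers unfamiliar with the final reduction step, here is a generic maximum bipartite matching routine via augmenting paths (Kuhn's algorithm), shown only to illustrate the matching subproblem, not the talk's construction; the signal/multiplexer instance at the bottom is hypothetical.

```python
# Generic maximum bipartite matching via augmenting paths (Kuhn's algorithm).

def max_bipartite_matching(adj, n_right):
    """adj[u] lists the right-side vertices each left vertex u can be matched to."""
    match_right = [-1] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Hypothetical instance: signals {0,1,2} that may be routed to multiplexer inputs {0,1,2}.
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3))   # 3: a perfect assignment exists
```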
This talk focuses on KISS, a new addition to the well-populated and flavored world of Internet classification engines. Motivated by the expected rise of UDP traffic volume, which stems from the momentum of P2P streaming applications, we propose a novel statistical payload-based classification framework targeted at UDP traffic. Statistical signatures are automatically inferred from training data by means of a Chi-square-like test, which extracts the protocol "syntax" but ignores the protocol semantics and synchronization rules. The signatures feed a decision engine based on Support Vector Machines. KISS is very efficient, and its signatures are intrinsically robust to packet sampling, reordering, and flow asymmetry, so that it can be used on almost any network. KISS is tested in different scenarios, considering data, VoIP, and traditional P2P Internet applications. Results are astonishing: the average True Positive percentage exceeds 99%, and fewer than 0.05% False Positives are raised. KISS also proves to provide almost perfect results when facing new P2P streaming applications, such as Joost, PPLive, SopCast and TVants. Finally, we present how KISS can be extended to the TCP case and present preliminary results.
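A hedged sketch of the signature idea (the actual KISS features, group sizes and training pipeline differ in detail): compute Chi-square statistics of small payload groups against a uniform reference, one value per group, and use the resulting vectors as SVM features.

```python
# Sketch of a KISS-like payload signature: Chi-square statistics computed over
# 4-bit groups of the first payload bytes of a flow's packets. Group count,
# window size and the SVM setup are assumptions for illustration.
import numpy as np

def chi_square_signature(packets, n_groups=12):
    """One Chi-square value per 4-bit group, measured against a uniform reference."""
    signature = []
    for g in range(n_groups):
        counts = np.zeros(16)
        for pkt in packets:
            nibble = (pkt[g // 2] >> (4 * (g % 2))) & 0x0F   # g-th half-byte of the payload
            counts[nibble] += 1
        expected = len(packets) / 16.0
        signature.append(float(((counts - expected) ** 2 / expected).sum()))
    return signature

# Random payloads standing in for captured UDP packets of one flow:
rng = np.random.default_rng(0)
flow = [rng.integers(0, 256, size=32).tolist() for _ in range(80)]
print(chi_square_signature(flow)[:4])

# Signatures of labelled flows would then train the decision engine, e.g.:
#   from sklearn.svm import SVC
#   clf = SVC(kernel="rbf").fit(train_signatures, train_labels)
```

Constant or rule-bound payload positions produce large Chi-square values while random positions stay near the uniform expectation, which is how the "syntax" of a protocol shows up in the signature.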
Securing wireless networks is notoriously a huge challenge. In this talk, we will first describe current trends in wireless technology and upcoming wireless networks, such as sensor, vehicular, mesh, and RFID networks. We will then address their vulnerabilities and the existing or envisioned protection techniques. We will then consider the specific example of ephemeral networks, namely networks in which the interactions between nodes are short-lived, typically due to their mobility. We will focus on the fundamental security operation of revocation in such networks. We will show how game theory can be used to model the different possible revocation strategies of the nodes and discuss the implications of this model on the protocol design.
The talk will introduce a class of hierarchical games that arises in pricing of services in communication networks with a monopolistic service provider and a large population of users of different types. The probability distribution over user types is common/public information, but the precise type of a specific user is not necessarily known to all parties. As such, the game falls in the class of games with incomplete information, and in our specific case what we have is a problem of mechanism design within an uncertain environment and with asymmetric information. The service provider is a revenue maximizer, with his instrument being the prices charged (for bandwidth) as a function of the information available to him. The individual users are utility maximizers, with bandwidth usage being their decision variable. The congestion cost in their utility functions creates a coupling between different users' objective functions, which leads to a non-cooperative game at the lower (users) level, for which we adopt a Nash or Bayesian equilibrium. Solutions to these problems (at both the lower and the upper levels) entail non-standard multi-level optimization problems. Indirect approaches to these optimization problems will be presented, and some asymptotics for large agent-population models will be discussed. (This is based on joint work with Hongxia Shen.)
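In its simplest complete-information form, the structure is a bilevel program (a schematic sketch of this class of problems, not the talk's exact model): the provider chooses the price anticipating the users' equilibrium response,

$$
\max_{p \ge 0}\; p \sum_i x_i^{*}(p),
\qquad
x^{*}(p) \;\text{a Nash equilibrium of}\;
\Big\{ \max_{x_i \ge 0}\; U_i(x_i) - c\Big(\sum_j x_j\Big)\,x_i - p\,x_i \Big\}_i ,
$$

with $U_i$ a type-dependent utility and $c(\cdot)$ the congestion cost coupling the users; in the incomplete-information version the price can depend only on the public type distribution and the lower level is solved in Bayesian equilibrium.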
Information!
FYI.
Migrating from my old PmWiki page was really a pain. Jekyll complained a lot with UTF-8-related error messages. I had to debug with "iconv -f UTF-8 -t ISO-8859-1 _bibliography/faultybib.bib" and then manually inspect the BibTeX files with "yudit".
(I ended up having to remove most French, German and other funny accents.) The next pass is to make sure that the scholar lexer does not get confused by broken entries (my previous PmWiki parser was less elegant but definitely less picky). Every bib file contains 50 talks and I have 10 years' worth of talks... The net result is that the conversion process did not improve the quality of the abstracts, sorry!
Information!
Unfortunately the DSI migration to an LXC-based solution (with no root access!) completely broke the seminar system. My whole poor-man system was built over BibTeX/Web and cron. Unfortunately the LXC PHP version conflicts with PmWiki and my previous system, so I migrated to Jekyll. Unfortunately Jekyll Scholar is very fragile and picky, so typos, UTF-8 characters, etc. made the migration a mess! Unfortunately the LXC does not support anything (cron, svn, git) and does not allow any outbound connections. As such, as a bonus, the migration also broke the automated email announce :( Despite the amount of effort I have put into it so far, I haven't been able to find a fix. Having run this service for 10 years now, I am happy to hand it over to any volunteer from now on -- please raise your hand!