Page transition in progress. If you wish to contact us, please use the mailing list `ccnsim@listes.telecom-paristech.fr`
and refrain from sending me emails in unicast, as my email loss probability is non-zero and my email reply delay is anyway heavy-tailed :)
ccnSim is a scalable chunk-level simulator of Information and Content Centric Networks (ICN/CCN) that we make available as open-source software to promote cross-comparison in the scientific community. ccnSim is written in C++ under the OMNeT++ framework, and features three simulation engines:
A classic Event-Driven engine (available in all versions) allows assessing CCN performance in scenarios with large orders of magnitude for CCN content stores (up to 10^6 chunks) and Internet catalog sizes (up to 10^8 files) on off-the-shelf hardware (i.e., a PC with a fair amount of RAM). If you use ccnSim up to v0.3, we ask you to please acknowledge our work by citing [ICC-13] (thanks!)
ModelGraft, a new hybrid modeling/simulation engine (available starting from v0.4), allows for unprecedented scalability: with respect to the (highly optimized) execution times of event-driven simulation in v0.3, the new technique allows simulating much larger networks, catalogs and content stores with an exiguous amount of RAM and an over 100x reduction of simulation duration. If you use ccnSim v0.4 or above, we ask you to please acknowledge our work by citing [COMNET-17a] (thanks!)
Finally, a novel parallel simulation engine achieves a 100x gain over ModelGraft, and thus a 10000x gain over event-driven simulation! The new technique (referred to as CS-POST-MT in [JSAC-18]) proposes, instead of slicing the nodes of the network over multiple cores, to slice independent portions of the catalog over multiple cores. In contrast to network slicing, which would incur significant MPI overhead, the new technique exhibits an ideal speedup in the number of cores, which justifies the above figures (a conceptual sketch of the idea follows below). We have just released the code implementing the work in [JSAC-18], so don't hesitate to tell us what you think!
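To give an intuition of why catalog slicing parallelizes so well, here is a minimal conceptual sketch using plain C++ threads (not ccnSim code: the names and the toy cache logic are made up for illustration): each worker simulates the full network for its own share of the catalog, without any synchronization with the other workers, and the per-slice statistics are merged only at the end.

// Conceptual sketch of catalog slicing (illustration only, not ccnSim code)
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

struct SliceStats { uint64_t hits = 0, requests = 0; };

// Hypothetical per-slice simulation: each worker owns contents [first, last)
// and a private replica of the whole cache network (trivialized here).
static SliceStats simulate_slice(uint64_t first, uint64_t last) {
    SliceStats s;
    for (uint64_t c = first; c < last; ++c) {
        ++s.requests;              // one toy request per content
        if (c % 10 == 0) ++s.hits; // stand-in for the real cache decision
    }
    return s;
}

int main() {
    const uint64_t catalog = 1000000; // toy catalog size
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<SliceStats> stats(cores);
    std::vector<std::thread> workers;
    for (unsigned k = 0; k < cores; ++k)
        workers.emplace_back([&stats, k, cores, catalog] {
            const uint64_t lo = catalog * k / cores;      // this worker's catalog slice
            const uint64_t hi = catalog * (k + 1) / cores;
            stats[k] = simulate_slice(lo, hi);            // no inter-core messages needed
        });
    for (auto& t : workers) t.join();
    uint64_t hits = 0, reqs = 0;
    for (const auto& s : stats) { hits += s.hits; reqs += s.requests; }
    std::printf("aggregate hit ratio: %.3f\n", reqs ? double(hits) / reqs : 0.0);
    return 0;
}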
Demos
You can check how fast the new version of ccnSim runs when equipped with the ModelGraft engine [ITC28b] vs the classic event-driven engine [ICC-13] in this YouTube video that we demonstrated at ITC28.
Now, imagine the same comparison applied to the parallel simulation engine [JSAC-18], which is orders of magnitude faster than ModelGraft! Overall, the parallel engine yields a speedup with respect to event-driven simulation on the order of 10000x, for a loss of accuracy of about 0.1% in our tests! (Another YouTube video will be available soon.)
We’ve just (November ‘17) released the experimental version of ccnSim-v0.4-Parallel
We released the code on GitHub (fyi, no plans to support it as a Docker image so far)
The code implements the parallel CS-POST-MT technique described in [JSAC-18]. Have a look and let us know what you think!
We’ve just (July ‘17) released the latest stable version of ccnSim-v0.4 as a Docker image hosted on DockerHub!
This spares you the hassle of compiling and setting up the environment, and allows you to quickly launch your first ccnSim-0.4 simulation, so rush to https://hub.docker.com/r/nonsns/ccnsim-0.4/ !
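For the impatient, getting started boils down to something like the following two commands (the image name comes from the DockerHub URL above; the exact entry point and any volume mounts may differ, so double-check the instructions on the DockerHub page):
docker pull nonsns/ccnsim-0.4
docker run -it nonsns/ccnsim-0.4 /bin/bash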
Note: The container does not support the graphical interface. But, trust us, you do not need it anyway ;)
We’ve just (May ‘17) committed the latest stable version of ccnSim-v0.4 on GitHub!
This is exactly the same ccnSim-v0.4 version that you can download below, just hosted on GitHub
This new release does not change the simulation API, but introduces significant breakthroughs [ITC28b][COMNET-17a][CCN-TR16] in the core simulation environment that allow for very significant memory reduction and CPU speedup.
ccnSim-v0.4 is still able to run classic Event Driven simulation (as in v0.3, but with a significant reduction of the memory footprint due to the use of rejection-inversion sampling)
ccnSim-v0.4 further allows running a novel Monte Carlo simulation engine (new from v0.4, with a significant reduction of both memory footprint and execution time)
The Event Driven and Monte Carlo engines can be used seamlessly and interchangeably, so the v0.3 manual is still valid
Previous versions
Other versions are still available, but downloading them is discouraged (the download count is indicative, as these versions are frozen and no longer supported). To discourage downloads, links are not provided (you can do the same, and better, with v0.4), but the files are still archived (it should not be impossible to guess the URLs with a bit of trial and error if you’re motivated).
As for the former versions of ccnSim:
Really, there is no reason not to use v0.4, which is not only more complete, but also simpler, more modular and faster! Notice that v0.4 also improves the Event Driven engine, so you should consider moving to it (which, unless you have modified the code on your own, should be painless as the versions are fully interoperable)
We extensively benchmarked ccnSim performance and refactored its code; you can find an account of the scalability properties of v0.3 in [ICC-13]. However, the results in [ITC28b] should convince you to migrate to v0.4!
Example scenarios reproducing some of our publications ([ICN-14a] and [ICN-14b]) are also available in v0.4
I have trouble installing ccnSim with omnet++ 5.1 (and above)
Short answer: if you don’t want to modify ccnSim, then use the docker container; long answer: keep reading.
Unfortunately, this is due to changes in opp_makemake (from the omnet++ changelog: “Support for deep includes (automatically adding each subfolder to the include path) has been dropped, due to being error-prone and having limited usefulness. In projects that used this feature, #include directives need to be updated to include the directory as well.”). Fixing this issue is more involved than just specifying the folders, though, since the 5.1 version of omnet++ introduces other modifications, among which the send() and arrived() methods used to generate and process messages. Given that, aside from these non-backward-compatible changes, nothing in 5.1 is relevant for ccnSim, we recommend using omnet++ v5.0; an example of the kind of change required is sketched below.
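For instance, should you nevertheless decide to patch ccnSim for omnet++ 5.1 yourself, a directive that previously relied on deep includes, such as the hypothetical
#include "ccn_interest.h"
would need to spell out the directory as well, e.g.
#include "include/ccn_interest.h"
(the exact relative path depends on where the including file sits in the ccnSim tree), on top of the adaptations needed for the send() and arrived() related changes.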
I have trouble installing omnet++ 4.1 with gcc-4.6
Short answer: Upgrade to ccnSim-v0.4! Long answer: keep reading.
If you want to use an older version, please refer to this page for a solution (thanks to Cesar A. Bernardini for pointing this out)
I have trouble running ccnSim with Tkenv
Short answer: (you) don’t (need it). Long answer: keep reading.
We are phasing out support for the graphical interface. But, trust us, you do not need it anyway ;) If you want to install Tkenv, notice that your Tkenv environment should work if you properly installed ccnSim-0.4! If you use an older version and, when running in graphical mode, you encounter an error like the following:
Error in module (Client) abilene_network.client[8] (id=23) at event #117, t=5.872724860981:
You forgot to manually add a dup() function to class ccn_interest.
then the fix is simply the following: modify the files include/ccn_interest.h and include/ccn_data.h, replacing the line
virtual ccn_interest *dup() {return new ccn_interest(*this);}
with:
virtual ccn_interest *dup() const {return new ccn_interest(*this);}
However, we cannot provide support (whether via email, phone, or avian carrier) for the graphical interface (sorry).
I have trouble unpacking the v0.3 archive
Short answer: don’t and use v0.4. Long answer: keep reading.
We are aware that, with some Linux distributions, there may be trouble unpacking the archive from the command line (though this does not seem to be deterministic with the distribution version). Assuming you have a terminal open in the directory where the archive is stored, issue a tar xzvf ccnsim-0.3.tgz command. In case that fails, gunzip ccnsim-0.3.tgz; tar xvf ccnsim-0.3.tar should work on your system.
How do I simulate INFORM?
Short answer: don’t and use iNRR. Long answer: keep reading.
Unfortunately, we lacked the manpower to sync [ICN-13] back into the main ccnSim tree. Fortunately though, since v0.3 ccnSim implements Nearest Replica Routing (NRR) [ICN-14b], which is the best candidate for comparison. So follow the suggestions in [QICN-14] to set up a sound comparison, and browse the online interactive demo presented at [ICN-14e] to get an idea of why this answer should satisfy you.
How do I simulate a tree topology?
Since v0.2, tree topologies are included in the default release; additional tree-like topologies (e.g., redundant trees) are available in the companion script set. In case you’re still using v0.1, please notice that if you select single shortest-path routing and a single repository on any real topology, this will actually induce a chunk-diffusion tree rooted at the repository (though the resulting tree will not be a “binary” tree).
How do I play with graph-related properties?
Originally, we computed graph-related properties directly within ccnSim (via the betweenness_centrality() function in ctopology.cc) at the beginning of the simulation. While this method is the simplest for the user, it incurs a non-marginal overhead, as it requires computing the betweenness centrality of all nodes over and over, so that repeating simulations over the same topology eventually amounts to useless computational overhead. Additionally, betweenness centrality is just one metric, and there are other graph-related properties (e.g., ego-betweenness centrality, or the ones we consider in [NOMEN-12], etc.) that could be considered as well, so this approach was also limited in scope.
We therefore decided to follow an approach similar to the one we adopted to adapt the cache size in [NOMEN-12]: i.e., split computations related to topological properties that are static over the whole simulation (e.g., betweenness) from those that need to be performed frequently and possibly evolve over time (e.g., routing and forwarding). This is done by specifying a betweenness value for all nodes in the .ini file, with instructions like the example below:
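For example (a purely illustrative snippet: the exact parameter name and module path depend on your scenario files, so double-check against the manual and the scenarios bundled with the release):
**.node[0].betweenness = 0.53
**.node[1].betweenness = 0.21
**.node[*].betweenness = 0.10   # default for all remaining nodes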
The values of betweenness (or of any other similar metric) can easily be pre-computed with graph-analysis tools (e.g., socnetv). This approach is both more flexible (as any metric can fit) and less computationally intensive (as the computation is done only once for each new scenario).
People (alphabetical order)
Andrea Araldo (former principal suspect)
Raffaele Chiocchetti (former developer)
Emilio Leonardi (well informed outsider)
Dario Rossi (occasional debugger)
Giuseppe Rossini (former lead developer)
Michele Tortelli (∞ power user)
Acknowledgements (temporal order)
ccnSim development started in the context of the ANR Project Connect, continued thanks to funding from two EIT ICT Labs projects (Smart Ubiquitous Content and Virtual Data Plane for Software defined network), and now lives on NewNet@Paris funds
References
[PATENT-US10721295]
Enguehard, M. P. and Carofiglio, G. and Rossi, D.,
"Popularity-based load-balancing for fog-cloud placement" , Patent US10721295
2020,
Patent
@misc{DR:PATENT-US10721295,
author = {Enguehard, M. P. and Carofiglio, G. and Rossi, D.},
title = {Popularity-based load-balancing for fog-cloud placement},
howpublished = {Patent US10721295},
year = {2020}
}
@article{DR:JSAC-18,
title = {Parallel Simulation of Very Large-Scale General Cache Networks},
author = {Tortelli, Michele and Rossi, Dario and Leonardi, Emilio},
year = {2018},
journal = {IEEE Journal on Selected Areas in Communications (JSAC)},
volume = {36},
month = aug,
pages = {1871--1886},
howpublished = {https://nonsns.github.io/paper/rossi18jsac.pdf},
doi = {10.1109/JSAC.2018.2844938}
}
In this paper we propose a methodology for the study of general cache networks, which is intrinsically scalable and amenable to parallel execution. We contrast two techniques: one that slices the network, and another that slices the content catalog. In the former, each core simulates requests for the whole catalog on a subgraph of the original topology, whereas in the latter each core simulates requests for a portion of the original catalog on a replica of the whole network. Interestingly, we find out that when the number of cores increases (and so the split ratio of the network topology), the overhead of message passing required to keeping consistency among nodes actually offsets any benefit from the parallelization: this is strictly due to the correlation among neighboring caches, meaning that requests arriving at one cache allocated on one core may depend on the status of one or more caches allocated on different cores. Even more interestingly, we find out that the newly proposed catalog slicing, on the contrary, achieves an ideal speedup in the number of cores. Overall, our system, which we make available as open source software, enables performance assessment of large-scale general cache networks, i.e., comprising hundreds of nodes, trillions contents, and complex routing and caching algorithms, in minutes of CPU time and with exiguous amounts of memory.
@article{DR:TON-18,
title = {Caching Encrypted Content via Stochastic Cache Partitioning},
author = {Araldo, Andrea and Dan, Gyorgy and Rossi, Dario},
year = {2018},
volume = {26},
issue = {1},
doi = {10.1109/TNET.2018.2793892},
journal = {IEEE/ACM Transactions on Networking},
howpublished = {https://nonsns.github.io/paper/rossi18ton.pdf}
}
In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, in order to protect consumer privacy and their own business, Content Providers (CPs) increasingly deliver encrypted content, thereby preventing Internet Service Providers (ISPs) from employing traditional caching strategies, which require the knowledge of the objects being transmitted. To overcome this emerging tussle between security and efficiency, in this paper we propose an architecture in which the ISP partitions the cache space into slices, assigns each slice to a different CP, and lets the CPs remotely manage their slices. This architecture enables transparent caching of encrypted content, and can be deployed in the very edge of the ISP’s network (i.e., base stations, femtocells), while allowing CPs to maintain exclusive control over their content. We propose an algorithm, called SDCP, for partitioning the cache storage into slices so as to maximize the bandwidth savings provided by the cache. A distinctive feature of our algorithm is that ISPs only need to measure the aggregated miss rates of each CP, but they need not know of the individual objects that are requested. We prove that the SDCP algorithm converges to a partitioning that is close to the optimal, and we bound its optimality gap. We use simulations to evaluate SDCP’s convergence rate under stationary and non-stationary content popularity. Finally, we show that SDCP significantly outperforms traditional reactive caching techniques, considering both CPs with perfect and with imperfect knowledge of their content popularity.
@techrep{DR:PARALLEL-CACHE-17,
title = {Parallel Simulation of Very Large-Scale General Cache Networks},
author = {Tortelli, Michele and Rossi, Dario and Leonardi, Emilio},
year = {2017},
month = nov,
institution = {Telecom ParisTech},
howpublished = {https://nonsns.github.io/paper/drossi17parallel-cache.pdf}
}
@article{DR:TMM-17,
title = {Dynamic Adaptive Video Streaming: Towards a systematic comparison of ICN and TCP/IP},
author = {Samain, Jacques and Carofiglio, Giovanna and Muscariello, Luca and Papalini, Michele and Sardara, Mauro and Tortelli, Michele and Rossi, Dario},
journal = {IEEE Transactions on Multimedia},
volume = {19},
issue = {10},
month = oct,
year = {2017},
doi = {10.1109/TMM.2017.2733340},
pages = {2166-2181},
topic = {qoe,icn,streaming},
howpublished = {https://nonsns.github.io/paper/rossi17tmm.pdf}
}
Streaming of video contents over the Internet is experiencing an unprecedented growth. While video permeates every application, it also puts tremendous pressure in the network – to support users having heterogeneous accesses and expecting high quality of experience, in a furthermore cost-effective manner. In this context, Future Internet (FI) paradigms, such as Information Centric Networking (ICN), are particularly well suited to not only enhance video delivery at the client (as in the DASH approach), but to also naturally and seamlessly extend video support deeper in the network functions. In this paper, we contrast ICN and TCP/IP with an experimental approach, where we employ several state-of-the-art DASH controllers (PANDA, AdapTech, and BOLA) on an ICN vs TCP/IP network stack. Our campaign, based on tools which we developed and made available as open-source software, includes multiple clients (homogeneous vs heterogeneous mixture, synchronous vs asynchronous arrivals), videos (up to 4K resolution), channels (e.g., DASH profiles, emulated WiFi and LTE, real 3G/4G traces), and levels of integration with an ICN network (i.e., vanilla NDN, wireless loss detection and recovery at the access point, load balancing). Our results clearly illustrate, as well as quantitatively assess, benefits of ICN-based streaming, warning about potential pitfalls that are however easy to avoid.
@article{DR:COMNET-17a,
title = {A Hybrid Methodology for the Performance Evaluation of Internet-scale Cache Networks},
author = {Tortelli, Michele and Rossi, Dario and Leonardi, Emilio},
year = {2017},
month = sep,
journal = {Elsevier Computer Networks},
pages = {146--159},
volume = {125},
howpublished = {https://nonsns.github.io/paper/rossi17comnet-a.pdf}
}
Two concurrent factors challenge the evaluation of large-scale cache networks: complex algorithmic interactions, which are hardly represented by analytical models, and catalog/network size, which limits the scalability of event-driven simulations. To solve these limitations, we propose a new hybrid technique, that we colloquially refer to as ModelGraft, which combines elements of stochastic analysis within a simulative Monte-Carlo approach. In ModelGraft, large scenarios are mapped to a downscaled counterpart built upon Time-To-Live (TTL) caches, to achieve CPU and memory scalability. Additionally, a feedback loop ensures convergence to a consistent state, whose performance accurately represent those of the original system. Finally, the technique also retains simulation simplicity and flexibility, as it can be seamlessly applied to numerous forwarding, meta-caching, and replacement algorithms. We implement and make ModelGraft available as an alternative simulation engine of ccnSim. Performance evaluation shows that, with respect to classic event-driven simulation, ModelGraft gains over two orders of magnitude in both CPU time and memory complexity, while limiting accuracy loss below 2%. Ultimately, ModelGraft pushes the boundaries of the performance evaluation well beyond the limits achieved in the current state of the art, enabling the study of Internet-scale scenarios with content catalogs comprising hundreds billions objects.
@article{DR:COMNET-17b,
title = {Exploiting Parallelism in Hierarchical Content Stores for High-speed ICN Routers},
author = {Mansilha, R. and Barcellos, M. and Leonardi, E. and Rossi, D.},
journal = {Elsevier Computer Networks},
month = sep,
year = {2017},
doi = {10.1016/j.comnet.2017.04.041},
pages = {132--145},
volume = {125},
howpublished = {https://nonsns.github.io/paper/rossi17comnet-b.pdf}
}
Information-centric network (ICN) is a novel architecture identifying data as a first class citizen, and caching as a prominent low-level feature. Yet, efficiently using large storage (e.g., 1 TB) at line rate (e.g., 10 Gbps) is not trivial: in our previous work, we proposed an ICN router design equipped with hierarchical caches, that exploits peculiarities of the ICN traffic arrival process. In this paper, we implement such proposal in the NDN Forwarding Daemon (NFD), and carry on a thorough experimental evaluation of its performance with an emulation methodology on common off the shelf hardware. Our study testifies the interest and feasibility of the approach.
[ITC28a]
Araldo, Andrea and Dan, Gyorgy and Rossi, Dario,
"Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery"
ITC28, Runner-up for best paper award and recipient of the IEEE ComSoc/ISOC Internet Technical Committee Best paper award 2016-2017
sep.
2016,
Conference Award
@inproceedings{DR:ITC28a,
title = {Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery},
author = {Araldo, Andrea and Dan, Gyorgy and Rossi, Dario},
year = {2016},
month = sep,
booktitle = {ITC28, Runner-up for best paper award and recipient of the IEEE ComSoc/ISOC Internet Technical Committee Best paper award 2016-2017},
topic = {icn,optimization,streaming},
note = {bestpaperaward},
howpublished = {https://nonsns.github.io/paper/rossi16itc28-a.pdf}
}
In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISP) and Content Providers (CP): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious ISP-operated cache. The ISP allocates the cache storage to various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, ISPs only need to measure the aggregated miss rates of the individual CPs and do not need to be aware of the objects that are requested, as in classic caching. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges close to the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularity. Our results (i) testify the feasibility of content-oblivious caches and (ii) show that the proposed algorithm can achieve within 10% from the global optimum in our evaluation.
@inproceedings{DR:ITC28b,
title = {ModelGraft: Accurate, Scalable, and Flexible Performance Evaluation of General Cache Networks},
author = {Tortelli, Michele and Rossi, Dario and Leonardi, Emilio},
year = {2016},
month = sep,
booktitle = {ITC28},
topic = {icn,modeling,scaling},
howpublished = {https://nonsns.github.io/paper/rossi16itc28-b.pdf}
}
Large scale deployments of general cache networks, such as Content Delivery Networks or Information Centric Networking architectures, arise new challenges regarding their performance evaluation for network planning. On the one hand, analytical models can hardly represent in details all the interactions of complex replacement, replication, and routing policies on arbitrary topologies. On the other hand, the sheer size of networks and content catalogs makes event-driven simulation techniques inherently non-scalable. We propose a new technique for the performance evaluation of large-scale caching systems that intelligently integrates elements of stochastic analysis within a MonteCarlo simulative approach, that we colloquially refer to as ModelGraft. Our approach (i) leverages the intuition that complex scenarios can be mapped to a simpler equivalent scenario that builds upon Time-To-Live (TTL) caches; it (ii) significantly downscales the scenario to lower computation and memory complexity, while, at the same time, preserving its properties to limit accuracy loss; finally, it (iii) is simple to use and robust, as it autonomously converges to a consistent state through a feedback-loop control system, regardless of the initial state. Performance evaluation shows that, with respect to classic event-driven simulation, ModelGraft gains over two orders of magnitude in both CPU time and memory complexity, while limiting accuracy loss below 2%. In addition, we show that ModelGraft extends performance evaluation well beyond the boundaries of classic approaches, by enabling study of Internet-scale scenarios with content catalogs comprising hundreds of billions objects.
@inproceedings{DR:NETWORKING-16,
title = {Representation Selection Problem: Optimizing Video Delivery through Caching},
author = {Araldo, Andrea and Martignon, Fabio and Rossi, Dario},
year = {2016},
month = may,
booktitle = {IFIP Networking},
pages = {323-331},
topic = {icn,optimization,streaming},
howpublished = {https://nonsns.github.io/paper/rossi16networking.pdf}
}
To cope with Internet video explosion, recent work proposes to deploy caches to absorb part of the traffic related to popular videos. Nonetheless, caching literature has mainly focused on network-centric metrics, while the quality of users’ video streaming experience should be the key performance index to optimize. Additionally, the general assumption is that each user request can be satisfied by a single object, which does not hold when multiple representations at different quality levels are available for the same video. Our contribution in this paper is to extend the classic object placement problem (which object to cache and where) by further considering the representation selection problem (i.e., which quality representation to cache), employing two methodologies to tackle this challenge. First, we employ a Mixed Integer Linear Programming (MILP) formulation to obtain the centralized optimal solution, as well as bounds to natural policies that are readily obtained as additional constraints of the MILP. Second, from the structure of the optimal solution, we learn guidelines that assist the design of distributed caching strategies: namely, we devise a simple yet effective distributed strategy that incrementally improves the quality of cached objects. Via simulation over large scale scenarios comprising up to hundred nodes and hundred million objects, we show our proposal to be effective in balancing user perceived utility vs bandwidth usage.
@inproceedings{DR:ICN-15,
title = {Hierarchical Content Stores in High-speed ICN Routers: Emulation and Prototype Implementation},
author = {Mansilha, R. and Saino, L. and Barcellos, M. and Gallo, M. and Leonardi, E. and Perino, D. and Rossi, D.},
booktitle = {ACM SIGCOMM Conference on Information-Centric Networking (ICN'15)},
address = {San Francisco, CA},
month = sep,
year = {2015},
pages = {147-156},
topic = {system,icn},
howpublished = {https://nonsns.github.io/paper/rossi15icn.pdf}
}
Recent work motivates the design of Information-centric routers that make use of hierarchies of memory to jointly scale in the size and speed of content stores. The present paper advances this understanding by (i) instantiating a general purpose two-layer packet-level caching system, (ii) investigating the solution design space via emulation, and (iii) introducing a proof-of-concept prototype. The emulation-based study reveals insights about the broad design space, the expected impact of workload, and gains due to multi-threaded execution. The full-blown system prototype experimentally confirms that, by exploiting both DRAM and SSD memory technologies, ICN routers can sustain cache operations in excess of 10Gbps running on off-the-shelf hardware.
@inproceedings{DR:GLOBECOM-14,
author = {},
title = {Cost-aware caching: optimizing cache provisioning and object placement in ICN},
booktitle = {IEEE Globecom},
address = {Austin, Texas},
month = dec,
year = {2014},
pages = {1108--1113},
howpublished = {https://nonsns.github.io/paper/rossi14globecom.pdf}
}
Caching is frequently used by Internet Service Providers as a viable technique to reduce the latency perceived by end users, while jointly offloading network traffic. While the cache hit-ratio is generally considered in the literature as the dominant performance metric for such type of systems, in this paper we argue that a critical missing piece has so far been neglected. Adopting a radically different perspective, in this paper we explicitly account for the cost of content retrieval, i.e. the cost associated to the external bandwidth needed by an ISP to retrieve the contents requested by its customers. Interestingly, we discover that classical cache provisioning techniques that maximize cache efficiency (i.e., the hit-ratio), lead to suboptimal solutions with higher overall cost. To show this mismatch, we propose two optimization models that either minimize the overall costs or maximize the hit-ratio, jointly providing cache sizing, object placement and path selection. We formulate a polynomial-time greedy algorithm to solve the two problems and analytically prove its optimality. We provide numerical results and show that significant cost savings are attainable via a cost-aware design.
@inproceedings{DR:ICN-14a,
title = {Design and Evaluation of Cost-aware Information Centric Routers},
author = {Araldo, Andrea and Rossi, Dario and Martignon, Fabio},
booktitle = {1st ACM SIGCOMM Conference on Information-Centric Networking (ICN-2014)},
address = {Paris, France},
month = sep,
year = {2014},
pages = {147-156},
howpublished = {https://nonsns.github.io/paper/rossi14icn-a.pdf}
}
Albeit an important goal of Information Centric Networking (ICNs) is traffic reduction, a perhaps even more important aspect follows from the above achievement: the reduction of ISP operational costs that comes as consequence of the reduced load on transit and provider links. Surprisingly, to date this crucial aspect has not been properly taken into account, neither in the architectural design, nor in the operation and management of ICN proposals. In this work, we instead design a distributed cost-aware scheme that explicitly considers the cost heterogeneity among different links. We contrast our scheme with both traditional cost-blind schemes and optimal results. We further propose an architectural design to let multiple schemes be interoperable, and finally assess whether overlooking implementation details could hamper the practical relevance of our design. Numerical results show that our cost-aware scheme can yield significant cost savings, that are furthermore consistent over a wide range of scenarios.
@inproceedings{DR:ICN-14b,
title = {Coupling caching and forwarding: Benefits, analysis, and implementation},
author = {Rossini, Giuseppe and Rossi, Dario},
booktitle = {1st ACM SIGCOMM Conference on Information-Centric Networking (ICN-2014)},
address = {Paris, France},
month = sep,
year = {2014},
pages = {127-136},
howpublished = {https://nonsns.github.io/paper/rossi14icn-b.pdf}
}
A recent debate revolves around the usefulness of pervasive caching, i.e., adding caching capabilities to possibly every router of the future Internet. Recent research argues against it, on the ground that it provides only very limited gain with respect to the current CDN scenario, where caching only happens at the network edge. In this paper, we instead show that advantages of ubiquitous caching appear only when meta-caching (i.e., whether or not cache the incoming object) and forwarding (i.e., where to direct requests in case of cache miss) decisions are tightly coupled. Summarizing our contributions, we (i) show that gains can be obtained provided that ideal Nearest Replica Routing (iNRR) forwarding and Leave a Copy Down (LCD) meta-caching are jointly in use, (ii) model the iNRR forwarding policy, (iii) provide two alternative implementations that arbitrarily closely approximate iNRR behavior, and (iv) promote cross-comparison by making our code available to the community.
@inproceedings{DR:ICN-14c,
title = {Analyzing Cacheable Traffic in ISP Access Networks for Micro CDN applications via Content-Centric Networking},
author = {Imbrenda, Claudio and Muscariello, Luca and Rossi, Dario},
booktitle = {1st ACM SIGCOMM Conference on Information-Centric Networking (ICN-2014)},
address = {Paris, France},
month = sep,
year = {2014},
pages = {57-66},
howpublished = {https://nonsns.github.io/paper/rossi14icn-c.pdf}
}
Web content coming from outside the ISP is today skyrocketing, causing significant additional infrastructure costs to network operators. The reduced marginal revenues left to ISPs, whose business is almost entirely based on declining flat rate subscriptions, call for significant innovation within the network infrastructure, to support new service delivery. In this paper, we suggest the use of micro CDNs in ISP access and back-haul networks to reduce redundant web traffic within the ISP infrastructure while improving user’s QoS. With micro CDN we refer to a content delivery system composed of (i) a high speed caching substrate, (ii) a content based routing protocol and (iii) a set of data transfer mechanisms made available by content-centric networking. The contribution of this paper is twofold. First, we extensively analyze more than one month of web traffic via continuous monitoring between the access and back-haul network of Orange in France. Second, we characterize key properties of monitored traffic, such as content popularity and request cacheability, to infer potential traffic reduction enabled by the introduction of micro CDNs. Based on these findings, we then perform micro CDN dimensioning in terms of memory requirements and provide guidelines on design choices
@inproceedings{DR:ICN-14d,
title = {Analyzing Cacheability in the Access Network with HACkSAw},
author = {Imbrenda, Claudio and Muscariello, Luca and Rossi, Dario},
booktitle = {1st ACM SIGCOMM Conference on Information-Centric Networking (ICN-2014), Demo Session},
address = {Paris, France},
month = sep,
year = {2014},
pages = {201-202},
howpublished = {https://nonsns.github.io/paper/rossi14icn-d.pdf}
}
Web traffic is growing, and the need for accurate traces of HTTP traffic is therefore also rising, both for operators and researchers, as accurate HTTP traffic traces allow to analyse and characterize the traffic and the clients, and to analyse the performance of the network and the perceived quality of service for the final users. Since most ICN proposals also advocate for pervasive caching, it is imperative to measure the cacheability of traffic to assess the impact and/or the potential benefits of such solutions. This demonstration will show both a tool to collect HTTP traces that is both fast and accurate and that overcomes the limitations of existing tools, and a set of important statistics that can be computed in post processing, like aggregate/demultiplexed cacheability figures.
[ICN-14e]
Tortelli, Michele and Rossi, Dario and Boggia, Gennaro and Grieco, Luigi Alfredo,
"CCN Simulators: Analysis and Cross-Comparison"
1st ACM SIGCOMM Conference on Information-Centric Networking (ICN-2014), Demo Session
sep.
2014,
Conference
@inproceedings{DR:ICN-14e,
title = {CCN Simulators: Analysis and Cross-Comparison},
author = {Tortelli, Michele and Rossi, Dario and Boggia, Gennaro and Grieco, Luigi Alfredo},
booktitle = {1st ACM SIGCOMM Conference on Information-Centric Networking (ICN-2014), Demo Session},
address = {Paris, France},
month = sep,
year = {2014},
pages = {197-198},
howpublished = {https://nonsns.github.io/paper/rossi14icn-e.pdf}
}
@inproceedings{DR:QICN-14,
title = {Pedestrian Crossing: The Long and Winding Road toward Fair Cross-comparison of ICN Quality},
author = {Tortelli, Michele and Rossi, Dario and Boggia, Gennaro and Grieco, Luigi Alfredo},
booktitle = {International Workshop on Quality, Reliability, and Security in Information-Centric Networking (Q-ICN)},
address = {Rhodes, Greece},
month = aug,
year = {2014},
howpublished = {https://nonsns.github.io/paper/rossi14qicn.pdf}
}
While numerous Information Centric Networking (ICN) architectures have been proposed over the last years, the community has so far only timidly attempted at a quantitative assessment of the relative quality of service level that users are expected to enjoy in each of them. This paper starts a journey toward the cross-comparison of ICN alternatives, making several contributions along this road. Specifically, a census of 20 ICN software tools reveals that about 10 are dedicated to a specific architecture, about half of which are simulators. Second, we survey ICN research papers using simulation to gather information concerning the used simulator, finding that a large fraction either uses custom proprietary and unavailable software, or even plainly fails to mention any information on this regard, which is deceiving. Third, we cross-compare some of the available simulators, finding that they achieve consistent results, which is instead encouraging. Fourth, we propose a methodology to increase and promote cross-comparison, which is within reach but requires community-wide agreement, promotion and enforcement.
[PATENT-US10530893B2]
Rossi, D. and Rossini, G.,
"Method for managing packets in a network of Information Centric Networking (ICN) nodes" , Patent EPO14305866.7, US10530893B2
2014,
Patent
@misc{DR:PATENT-US10530893B2,
author = {Rossi, D. and Rossini, G.},
howpublished = {Patent EPO14305866.7, US10530893B2},
title = {Method for managing packets in a network of Information Centric Networking (ICN) nodes},
year = {2014}
}
[PATENT-EP2940950B1]
Rossini, Giuseppe and Rossi, Dario and Garetto, Michele and Leonardi, Emilio,
"Information Centric Networking (ICN) router" , Patent EPO14305639.8, WO EP US JP EP2940950B1
2014,
Patent
@misc{DR:PATENT-EP2940950B1,
author = {Rossini, Giuseppe and Rossi, Dario and Garetto, Michele and Leonardi, Emilio},
howpublished = {Patent EPO14305639.8, WO EP US JP EP2940950B1},
title = {Information Centric Networking (ICN) router},
year = {2014}
}
[ICC-13]
Chiocchetti, Raffaele and Rossi, Dario and Rossini, Giuseppe,
"ccnSim: an Highly Scalable CCN Simulator"
IEEE International Conference on Communications (ICC)
jun.
2013,
Conference
@inproceedings{DR:ICC-13,
title = {{ccnSim}: an Highly Scalable CCN Simulator},
author = {Chiocchetti, Raffaele and Rossi, Dario and Rossini, Giuseppe},
booktitle = {IEEE International Conference on Communications (ICC)},
year = {2013},
month = jun,
howpublished = {https://nonsns.github.io/paper/rossi13icc.pdf}
}
Research interest about Information Centric Networking (ICN) has grown at a very fast pace over the last few years, especially after the 2009 seminal paper of Van Jacobson et al. describing a Content Centric Network (CCN) architecture. While significant research effort has been produced in terms of architectures, algorithms, and models, the scientific community currently lacks common tools and scenarios to allow a fair cross-comparison among the different proposals. The situation is particularly complex as the commonly used general-purpose simulators cannot cope with the expected system scale: thus, many proposals are currently evaluated over small and unrealistic scale, especially in terms of dominant factors like catalog and cache sizes. As such, there is need of a scalable tool under which different algorithms can be tested and compared. Over the last years, we have developed and optimized ccnSim, an highly scalable chunk-level simulator especially suitable for the analysis of caching performance of CCN network. In this paper, we briefly describe the tool, and present an extensive benchmark of its performance. To give an idea of ccnSim scalability, a common off-the-shelf PC equipped with 8GB of RAM memory is able to simulate 2-hours of a 50-nodes CCN network, where each node is equipped with 10 GB caches, serving a 1 PB catalog in about 20 min CPU time.
@inproceedings{DR:ICN-13,
title = {{INFORM: a Dynamic Interest Forwarding Mechanism for Information Centric Networking}},
author = {},
booktitle = {ACM SIGCOMM Workshop on Information-Centric Networking (ICN)},
year = {2013},
address = {Hong Kong, China},
howpublished = {https://nonsns.github.io/paper/rossi13icn.pdf}
}
Information Centric Networking is a new communication paradigm where network primitives are based on named-data rather than host identifiers. In ICN, data retrieval is triggered by user requests which are forwarded towards a copy of the desired content item. Data can be retrieved either from a server that permanently provides a content item, or from a temporary item copy opportunistically cached by an in-network node. As the availability of cached items dynamically varies over time, the request forwarding scheme should be adapted accordingly. In this paper we focus on dynamic request forwarding in ICN, and develop an approach, inspired by Q-routing framework, that we show to outperform algorithms currently available in the state of the art.
[PATENT-EP2835942B1]
Perino, D. and Carofiglio, G. and Rossi, D. and Rossini, G.,
"Dynamic Interest Forwarding Mechanism for Information Centric Networking" , Patent EPO13306124.2, EP2835942B1
2013,
Patent
@misc{DR:PATENT-EP2835942B1,
author = {Perino, D. and Carofiglio, G. and Rossi, D. and Rossini, G.},
title = {Dynamic Interest Forwarding Mechanism for Information Centric Networking},
howpublished = {Patent EPO13306124.2, EP2835942B1},
year = {2013}
}
[PATENT-EP2785014A1]
Perino, D. and Carofiglio, G. and Chiocchetti, R. and Rossi, D. and Rossini, G.,
"Device and method for organizing forwarding information in nodes of a content centric networking" , Patent EPO13161714.4, EP2785014A1
2013,
Patent
@misc{DR:PATENT-EP2785014A1,
author = {Perino, D. and Carofiglio, G. and Chiocchetti, R. and Rossi, D. and Rossini, G.},
title = {Device and method for organizing forwarding information in nodes of a content centric networking},
howpublished = {Patent EPO13161714.4, EP2785014A1},
year = {2013}
}
@inproceedings{DR:CAMAD-12,
title = {A dive into the caching performance of Content Centric Networking},
author = {Rossini, G. and Rossi, D.},
booktitle = {IEEE 17th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD'12)},
year = {2012},
month = sep,
pages = {105-109},
howpublished = {https://nonsns.github.io/paper/rossi12camad.pdf}
}
Content Centric Networking (CCN) is a promising architecture for the diffusion of popular content over the Internet. While CCN system design is sound, gathering a reliable estimate of its performance in the current Internet is challenging, due to the large scale and to the lack of agreement in some critical elements of the evaluation scenario. In this work, we add a number of important pieces to the CCN puzzle by means of a chunk-level simulator that we make available to the scientific community as open source software. First, we pay special attention to the locality of the user request process, as it may be determined by user interest or language barrier. Second, we consider the existence of possibly multiple repositories for the same content, as in the current Internet, along with different CCN interest forwarding policies, exploiting either a single or multiple repositories in parallel. To widen the relevance of our findings, we consider multiple topologies, content popularity settings, caching replacement policies and CCN forwarding strategies. Summarizing our main result, though the use of multiple content repositories can be beneficial from the user point of view, it may however counter part of the benefits if the CCN strategy layer implements naive interest forwarding policies.
@inproceedings{DR:ALGOTEL-12,
author = {Rossini, G. and Rossi, D.},
title = {Large scale simulation of CCN networks},
booktitle = {Algotel 2012},
year = {2012},
address = {La Grande Motte, France},
month = may,
howpublished = {https://nonsns.github.io/paper/rossi12algotel.pdf}
}
This work addresses the performance evaluation of Content Centric Networks (CCN). Focusing on a realistic YouTube-like catalog, we conduct a very thorough simulation study of the main system performance, considering several ingredients such as network topology, multi-path routing, content popularity, caching decisions and replacement policies. Summarizing our main results, we gather that (i) the impact of the topology is limited, (ii) multi-path routing may play against CCN efficiency, (iii) simple randomized policies perform almost as well as more complex ones, (iv) catalog and popularity settings play by far the most crucial role above all. Hopefully, our thorough assessment of scenario parameters can assist and promote the cross-comparison in the research community – for which we also provide our CCN simulator as open source software.
@inproceedings{DR:NOMEN-12,
author = {Rossi, D. and Rossini, G.},
title = {On sizing CCN content stores by exploiting topological information},
booktitle = {IEEE INFOCOM, NOMEN Workshop},
year = {2012},
address = {Orlando, FL},
pages = {280-285},
month = mar,
howpublished = {https://nonsns.github.io/paper/rossi12nomen.pdf}
}
In this work, we study the caching performance of Content Centric Networking (CCN), with special emphasis on the size of individual CCN router caches. Specifically, we consider several graph-related centrality metrics (e.g., betweenness, closeness, stress, graph, eccentricity and degree centralities) to allocate content store space heterogeneously across the CCN network, and contrast the performance to that of an homogeneous allocation. To gather relevant results, we study CCN caching performance under large cache sizes (individual content stores of 10 GB), realistic topologies (up to 60 nodes), a YouTube-like Internet catalog (10^8 files for 1 PB of video data). A thorough simulation campaign allows us to conclude that (i) the gain brought by content store size heterogeneity is very limited, and that (ii) the simplest metric, namely degree centrality, already proves to be a “sufficiently good” allocation criterion. On the one hand, this implies rather simple rules of thumb for the content store sizing (e.g., “if you add a line card to a CCN router, add some content store space as well”). On the other hand, we point out that technological constraints, such as line-speed operation requirement, may however limit the applicability of degree-based content store allocation.
@inproceedings{DR:ICN-12,
author = {Chiocchetti, Raffaele and Rossi, Dario and Rossini, Giuseppe and Carofiglio, Giovanna and Perino, Diego},
title = {Exploit the known or explore the unknown: Hamlet-like doubts in ICN},
booktitle = {ACM SIGCOMM, ICN Workshop,},
year = {2012},
howpublished = {https://nonsns.github.io/paper/rossi12icn.pdf}
}
@article{DR:COMCOM-13,
author = {Rossini, Giuseppe and Rossi, Dario},
title = {Evaluating CCN multi-path interest forwarding strategies},
journal = {Elsevier Computer Communications, Special Issue on Information Centric Networking},
month = apr,
volume = {36},
issue = {7},
pages = {771-778},
year = {2013},
howpublished = {https://nonsns.github.io/paper/rossi13comcom.pdf}
}
This work addresses the performance evaluation of Content Centric Networks (CCN). Focusing on a realistic YouTube-like catalog, we conduct a thorough simulation study of the main system performance, with a special focus on multi-path interest forwarding strategies but thoroughly analyzing the impact of several other ingredients – such as network topology, content popularity, caching decisions and replacement policies. Summarizing our main results, (i) catalog and popularity settings play by far the most crucial role (ii) the impact of the strategy layer comes next, with naive forwarding strategies playing against CCN efficiency, (iii) simple randomized caching policies perform almost as well as more complex ones, (iv) the impact of the topology is limited. Hopefully, our thorough assessment of scenario parameters can assist and promote the cross-comparison in the research community – for which we also provide our CCN simulator as open source software.