Page transition in progress. If you wish to contact us, please use the mailing list `ccnsim@listes.telecom-paristech.fr` and refrain from sending me emails in unicast, as my email loss probability is non-zero and my reply delay is in any case heavy-tailed :)
Quick links: Demos / Download / Manual and scenarios / FAQ / People / Acks / Publications
ccnSim is a scalable chunk-level simulator of Information and Content Centric Networks (ICN/CCN) that we make available as open-source software to promote cross-comparison in the scientific community. ccnSim is written in C++ on top of the Omnet++ framework, and features three simulation engines:
A classic Event-Driven engine (available in all versions) allows assessing CCN performance in scenarios with large CCN content stores (up to 10^6 chunks) and Internet-scale catalogs (up to 10^8 files) on off-the-shelf hardware (i.e., a PC with a fair amount of RAM). If you use ccnSim up to v0.3, we ask you to please acknowledge our work by citing [ICC-13] (thanks!)
ModelGraft, a new hybrid modeling/simulation engine (available starting from v0.4), allows for unprecedented scalability: with respect to the (highly optimized) execution times of event-driven simulation in v0.3, the new technique allows simulating much larger networks, catalogs, and content stores on a very small amount of RAM, with over a 100x reduction of simulation duration. If you use ccnSim v0.4 or above, we ask you to please acknowledge our work by citing [COMNET-17a] (thanks!)
Finally, a novel parallel simulation engine achieves a 100x gain over ModelGraft, and thus a 10000x gain over event-driven simulation! The new technique (referred to as CS-POST-MT in [JSAC-18]) proposes, instead of slicing the nodes of the network over multiple cores, to slice independent portions of the catalog over multiple cores. In contrast to network slicing, which would incur significant MPI overhead, the new technique exhibits an ideal speedup in the number of cores, which justifies the above figures. We have just released the code implementing the work in [JSAC-18], so don't hesitate to tell us what you think!
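To fix ideas, the following is a minimal sketch of the catalog-slicing principle written with plain C++ threads. It is purely illustrative and is not the ccnSim-0.4-Parallel code: the `simulate_slice()` body, the catalog size, and the final aggregation are placeholder assumptions; the point is simply that disjoint catalog slices share no state, so cores never need to exchange messages.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// Placeholder per-slice simulation: each slice owns a disjoint subset of the
// catalog, so slices can be simulated with no shared state and no
// inter-core communication (unlike slicing the *network*, which would
// require exchanging Interest/Data messages across cores).
double simulate_slice(std::uint64_t first_id, std::uint64_t last_id) {
    double hits = 0;
    for (std::uint64_t id = first_id; id <= last_id; ++id)
        hits += 1.0 / static_cast<double>(id + 1);  // stand-in for the real cache simulation
    return hits;
}

int main() {
    const std::uint64_t catalog_size = 100000000ULL;  // e.g., a 10^8-object catalog
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    // Slice the catalog (not the network) over the available cores.
    for (unsigned c = 0; c < cores; ++c) {
        std::uint64_t slice = catalog_size / cores;
        std::uint64_t first = c * slice;
        std::uint64_t last  = (c == cores - 1) ? catalog_size - 1 : first + slice - 1;
        workers.emplace_back([&partial, c, first, last] {
            partial[c] = simulate_slice(first, last);
        });
    }
    for (auto& w : workers) w.join();

    // Independent slices are simply aggregated at the end.
    double total = 0;
    for (double p : partial) total += p;
    std::cout << "aggregated metric over all slices: " << total << '\n';
}
```

Because each worker touches only its own slice and its own output cell, the sketch needs no locks or message passing, which is the property behind the near-ideal speedup mentioned above.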
You can check how fast the new version of ccnSim runs when equipped with the ModelGraft engine [ITC28a] vs the classic event-driven engine [ICC-13] in this YouTube video that we demonstrated at [ITC28a].
Now, imagine the same comparison applied to the parallel simulation engine [JSAC-18], which is orders of magnitude faster than ModelGraft! Overall, the parallel engine yields a speedup over event-driven simulation on the order of 10000x, for a loss of accuracy of about 0.1% in our tests! (Another YouTube video will soon be available.)
Version | Source | Downloads | Manual | Scenarios |
---|---|---|---|---|
0.4-Parallel (11/2017) | GitHub repository https://github.com/TeamRossi/ccnSim-0.4-Parallel | How do you count downloads on Git? | v0.4-Parallel manual | Scenarios to replicate [JSAC-18] included in the GitHub repository |
0.4 (05/2017) | GitHub repository https://github.com/TeamRossi/ccnSim-0.4 or DockerHub image https://hub.docker.com/r/nonsns/ccnsim-0.4/ | How do you count downloads on Git? | v0.4 manual | Scenarios to replicate [COMNET-17a] included in the GitHub repository |
Other versions are still available but their download is discouraged (the download counts are indicative, as these versions are frozen and no longer supported). To discourage downloads, links are not provided (you can do the same, and better, with v0.4), but the files are still archived (it should not be impossible to guess their names with a bit of trial and error if you're motivated).
As for the former versions of ccnSim:
Version | Source | Downloads | Manual | Scenarios |
---|---|---|---|---|
0.4 (05/2017) | ccnSim-0.4.tgz | 166 | v0.4 manual | Scenarios to replicate [COMNET-17a] included in the GitHub repository |
0.4alpha2 (02/2016) | ccnSim-0.4alpha2.tgz | 377 | Please have a look at [COMNET-17a] and v0.3 manual | Scenarios [COMNET-17a] included in the archive |
0.4alpha (12/2015) | ccnSim-0.4alpha.tgz | 238 | Please have a look at [COMNET-17a] and v0.3 manual | Scenarios [COMNET-17a] included in the archive |
0.3 (10/2014) | ccnsim-0.3.tgz | 1084 | Please have a look at [COMNET-17a] and v0.3 manual | NRR scripts to replicate [ICN-14a], and Cost-Aware scripts to replicate [ICN-14b] (never counted) |
0.2 (09/2013) | ccnsim-0.2.tgz | 747 | v0.2 manual | NRR scripts to replicate [ICN-14a] (554 downloads) |
0.1 (03/2012) | ccnsim-0.1.zip | 1340 | v0.1 manual | |
I have trouble installing ccnSim with omnet++ 5.1 (and above)
Short answer: if you don’t want to modify ccnSim, then use the docker container; long answer: keep reading.
Unfortunately, this is due to changes in opp_makemake (from the omnet++ changelog: “Support for deep includes (automatically adding each subfolder to the include path) has been dropped, due to being error-prone and having limited usefulness. In projects that used this feature, #include directives need to be updated to include the directory as well.”). Fixing this issue is more involved than just spelling out the folders in the #include directives (an illustration is given below), since the 5.1 version of omnet++ introduces other modifications, among which changes to the send() and arrived() methods used to generate and process messages. Given that, aside from these non-backward-compatible changes, no change is relevant for ccnSim, we recommend using omnet++ v5.0.
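For completeness, here is an illustration of the kind of #include update the changelog refers to, using include/ccn_interest.h (a header mentioned further down in this FAQ) as an example; the exact paths are an assumption about the source layout, and every #include in the tree would need a similar review:

```cpp
// omnet++ <= 5.0 ("deep includes"): every subfolder was added to the include
// path automatically, so the bare file name was enough:
#include "ccn_interest.h"

// omnet++ >= 5.1: the subfolder must appear in the directive itself
// (the "include/" prefix is an assumption about where the header lives):
#include "include/ccn_interest.h"
```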
I have trouble installing omnet++ 4.1 with gcc-4.6
Short answer: upgrade to ccnSim-v0.4! Long answer: keep reading.
If you want to use an older version, please refer to this page for a solution (thanks to Cesar A. Bernardini for pointing this out).
I have trouble running ccnSim with Tkenv
Short answer: (you) don’t (need it). Long answer: keep reading.
We are phasing out support for the graphical interface. But, trust us, you do not need it anyway ;) If you want to install Tkenv, notice that your Tkenv environment should work if you properly installed ccnSim-0.4! If you use an older version and, when running in graphical mode, you encounter an error like the following:
Error in module (Client) abilene_network.client[8] (id=23) at event #117, t=5.872724860981:
You forgot to manually add a dup() function to class ccn_interest.
then the fix is simply the following: modify the files include/ccn_interest.h and include/ccn_data.h, replacing the line
virtual ccn_interest *dup() {return new ccn_interest(*this);}
with:
virtual ccn_interest *dup() const {return new ccn_interest(*this);}
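The same one-line change applies to include/ccn_data.h, assuming the class defined there is ccn_data with an analogous dup() method:

virtual ccn_data *dup() const {return new ccn_data(*this);} // assumed counterpart for include/ccn_data.h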
However, we cannot provide support (whether via email, phone, or avian carrier) for the graphical interface (sorry).
I have trouble unpacking the v0.3 archive
Short answer: don’t and use v0.4. Long answer: keep reading.
We are aware that, with some Linux distributions, there may be trouble unpacking the archive from the command line (though this does not seem to depend deterministically on the distribution version). Assuming you have a terminal open in the directory where the archive is stored, issue a tar xzvf ccnsim-0.3.tgz command. In case that fails, gunzip ccnsim-0.3.tgz; tar xvf ccnsim-0.3.tar should work for your system.
How do I simulate INFORM?
Short answer: don’t and use iNRR. Long answer: keep reading.
Unfortunately, we lacked the manpower to merge [ICN-13] back into the main ccnSim tree. Fortunately though, since v0.3 ccnSim implements Nearest Replica Routing (NRR) [ICN-14a], which is the best candidate for comparison. So follow the suggestions in [QICN-14] to set up a sound comparison, and browse the online interactive demo presented at [ICN-14e] to get an idea of why this answer should satisfy you.
How do I simulate a tree topology?
Since v0.2, tree topologies are included in the default release; additional tree-like topologies (e.g., redundant trees) are available in the companion script set. In case you're still using v0.1, please notice that if you select single shortest-path routing and a single repository on any real topology, this will actually induce a chunk-diffusion tree rooted at the repository (though the resulting tree will not be a "binary" tree).
How do I play with graph-related properties?
Originally, we were computing graph-related properties directly within ccnSim (the betweenness_centrality() function in ctopology.cc) at the beginning of the simulation. While this method is the simplest for the user, it however incurs a non-negligible overhead, as it requires computing the betweenness centrality of all nodes over and over, so that repeating simulations over the same topology eventually amounts to useless computational work. Additionally, betweenness centrality is just one metric, and there are other graph-related properties (e.g., ego-betweenness centrality, or the one we consider in [NOMEN-12]) that could be considered as well, so this approach was also limited in scope.
We henceforth decided to follow an approach similar to the one we adopted to adapt the cache size in [NOMEN-12]: i.e., split computations related to topological properties that are static over the whole simulation (e.g., betweenness) from those that need to be performed frequently and possibly evolve over time (e.g., routing and forwarding). This is done by specifying a betweenness value for all nodes in the .ini file, with instructions like:
```ini
**node[0].betweenness = 1
**node[1].betweenness = 0.4
[...]
```
The values of betweenness (or of any other similar metric) can easily be pre-computed with graph-related tools (e.g., socnetv). This approach is thus more flexible (as any metric can fit) and less computationally intensive (as the computation is done only once for any new scenario).
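If you prefer to script this step, the following self-contained sketch (not part of ccnSim) pre-computes betweenness centrality with Brandes' algorithm on a toy topology and prints .ini lines in the format shown above; the example graph, the normalization (values rescaled so that the most central node gets 1) and the key format are assumptions to adapt to your own scenario and metric of choice.

```cpp
#include <cstdio>
#include <queue>
#include <stack>
#include <vector>

int main() {
    // Example topology: a 4-node path 0 - 1 - 2 - 3 (edit to match your scenario).
    int n = 4;
    std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1, 3}, {2}};

    std::vector<double> bc(n, 0.0);
    for (int s = 0; s < n; ++s) {                     // Brandes: one BFS per source node
        std::vector<std::vector<int>> pred(n);
        std::vector<double> sigma(n, 0.0), delta(n, 0.0);
        std::vector<int> dist(n, -1);
        std::stack<int> order;
        std::queue<int> q;
        sigma[s] = 1.0; dist[s] = 0; q.push(s);
        while (!q.empty()) {
            int v = q.front(); q.pop();
            order.push(v);
            for (int w : adj[v]) {
                if (dist[w] < 0) { dist[w] = dist[v] + 1; q.push(w); }
                if (dist[w] == dist[v] + 1) { sigma[w] += sigma[v]; pred[w].push_back(v); }
            }
        }
        while (!order.empty()) {                      // back-propagate shortest-path dependencies
            int w = order.top(); order.pop();
            for (int v : pred[w])
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w]);
            if (w != s) bc[w] += delta[w];
        }
    }

    double max = 0;
    for (double b : bc) if (b > max) max = b;
    for (int i = 0; i < n; ++i)                       // rescaled so the most central node gets 1
        std::printf("**node[%d].betweenness = %g\n", i, max > 0 ? bc[i] / max : 0.0);
}
```

The printed lines can then be pasted (or redirected) into the scenario's .ini file, so the centrality computation is paid once per topology rather than once per simulation run.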