HAL Id: inria-00538922
https://hal.inria.fr/inria-00538922
Submitted on 23 Nov 2010

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
High Security Laboratory - Network Telescope Infrastructure Upgrade
Frédéric Beck, Alexandre Boeglin, Olivier Festor
To cite this version: Frédéric Beck, Alexandre Boeglin, Olivier Festor. High Security Laboratory - Network Telescope Infrastructure Upgrade. [Technical Report] 2010, pp.20. inria-00538922
Theme COM
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
High Security Laboratory - Network Telescope Infrastructure Upgrade
Frederic Beck, Alexandre Boeglin and Olivier Festor
N° 9999
March 2007
Unité de recherche INRIA Lorraine
LORIA, Technopôle de Nancy-Brabois, Campus scientifique,
615, rue du Jardin Botanique, BP 101, 54602 Villers-lès-Nancy (France)
High Security Laboratory - Network Telescope
Infrastructure Upgrade
Frederic Beck, Alexandre Boeglin and Olivier Festor
Thème COM — Systèmes communicants
Projet MADYNES
Rapport technique n° 9999 — March 2007 — 17 pages
Abstract:
Key-words: security, network, telescope, malware
Laboratoire de Haute Sécurité en Informatique - Télescope Réseau

Résumé :

Mots-clés : sécurité, réseau, télescope, malware
LHS - Network Telescope Upgrade 3
Contents

1 Physical Infrastructure
  1.1 Dedicated room
  1.2 Hardware changes
  1.3 Racks
  1.4 Network connections

2 Operating systems upgrade
  2.1 Xen hypervisor and Dom0
  2.2 VMWare
  2.3 Virtual machines

3 Telescope upgrade
  3.1 Surfnet IDS 3.05
  3.2 Honeypots
    3.2.1 Nepenthes
    3.2.2 Dionaea
    3.2.3 Kippo
    3.2.4 Amun
  3.3 Traces
    3.3.1 TCPDump
    3.3.2 Netflow
  3.4 leurrecom.org

4 Experiments
  4.1 Tor
  4.2 Peer-to-peer monitoring
  4.3 VoIP honeypots
  4.4 SSH honeypot with University of Luxembourg

5 Future work

6 Conclusion
RT n° 9999
4 Beck & Boeglin & Festor
Introduction
As part of the High Security Laboratory at INRIA Nancy - Grand Est, inaugurated in July 2010, we have been running and maintaining a network telescope for more than two years. During this period, many updates and upgrades of the different components were released, and new threats and vulnerabilities appeared, motivating an upgrade of the existing infrastructure to keep it current with today's security issues.

This report is a follow-up to the previous report, written in May 2008, which described the specification and deployment of the initial infrastructure. Here, we present the upgrade performed during the second half of 2010, after the inauguration and the move of the platform.
INRIA
1 Physical Infrastructure
1.1 Dedicated room
A new room dedicated to the High Security Laboratory (LHS) has been built in the basement. It is composed of a server room, an open space, and a room holding the security and access control terminal.

Enhanced physical security and strict access control have been implemented. All doors and windows are armored and bulletproof, and an alarm system monitors all accesses and break-ins. In the server room, a presence and movement sensor detects any unwanted entrance. Finally, a noise sensor raises an alarm if the noise level exceeds a threshold (e.g. if someone drops a front panel or tries to move a server). To prevent false-positive alarms, it is recommended to disable the alarm in the server room before doing any maintenance task, and to reset it when exiting the room.

To access any of these rooms, one must go through an airlock. To pass the first door, one needs to present a smart card to the card reader and one finger to the vein reader. Then, once the main door is shut, one can gain access to the open space or the server room using the smart card. Once in the server room, one can access the security terminal room with a smart card and a retina check.

This last room holds the security and access control terminal, where one can configure access control rules (add or modify users) and check all logs. It also contains the switchgear cubicle, and has a separate AC split unit that makes it possible to add another rack in the room if required. Access to this room should be strictly limited to authorized staff.

The server room has been designed to hold four 42-unit racks. At the moment, all four slots are used: two 42U racks for the in-vitro analysis cluster, and two 24U racks for the network telescope. The racks for the telescope are plugged into two separate and redundant electrical circuits. Each rack has two Power Distribution Units (PDU) plugged into separate circuits. Each server also has redundant power blocks plugged into separate PDUs, and thus separate circuits, ensuring full electrical redundancy.

Four AC cupboards have been placed in the room. Two of them are plugged into the institute's water-air AC system, and two are on a separate air-air system. By default, the ones plugged into the institute system are running. When maintenance operations are performed on that system, make sure to switch on the split units. Default setpoints of 21°C and 35% air humidity are used. If the temperature rises above 26°C in the room, the electrical input is switched off to preserve hardware integrity. As far as we know, there is no alarm mechanism to prevent a brutal shutdown of the servers when this blackout occurs. This point must be investigated, and perhaps a mechanism implemented to let the administrators safely shut the servers and devices down before the blackout occurs.
1.2 Hardware changes
As part of the leurrecom.org project 1, we added one 1U server to the telescope. It is a Dell PowerEdge R410 server with an Intel Xeon Six-Core L5640 2.26GHz CPU, 24GB of RAM, and 2 SATA HDDs of 500GB each, configured in RAID 1.

Details about the installation and the project are given in section 3.4.
1.3 Racks
In this section we present the physical layout of the servers and devices in the different racks.
1http://www.leurrecom.org/
Figure 1: Collect Environment (left rack)

Unit   Device                            KVM  Connections
24-23  Cisco 2821 router (torgnol)            Ge0/0: Orange SDSL, Ge0/0: meowth-1
22     -
21-20  Cisco 2960 switch (meowth)             meowth-9: zubat-13, meowth-10: dialga-Gb1
19-18  Dell PowerEdge 2950 (mew)         10   Gb1: meowth-8, Gb2: arcanine-8, Gb3: Neufbox
17-16  Dell PowerEdge 2950 (togepi)      9    Gb1: meowth-7, Gb2: arcanine-7
15-14  Dell PowerEdge 2950 (onix)        8    Gb1: meowth-6, Gb2: arcanine-6
13     PDU
12-11  Dell PowerEdge 2950 (charmander)  7    Gb1: meowth-5, Gb2: arcanine-5
10-9   Dell PowerEdge 2950 (squirtle)    6    Gb1: meowth-4, Gb2: arcanine-4
8-7    Dell PowerEdge 2950 (bulbasaur)   5    Gb1: meowth-3, Gb2: arcanine-3, Gb3: Freebox
6-5    Dell PowerEdge 2950 (psyduck)     4    Gb1: meowth-2, Gb2: arcanine-2, Gb3: Orange ADSL Pro
4      -
3      Cisco 3560 switch (arcanine)           arcanine-24: zubat-2
2      PDU
1      Dell KVM 2161DS-2
Figure 2: Experimentation Environment (right rack)

Unit   Device                            KVM  Gb1        Gb2
24-23  Cisco 2960 switch (zubat)              zubat-1: arbok-0, zubat-2: arcanine-24
22-21  Dell PowerEdge 2950 (nidoran)     13   zubat-7    zubat-18
20-19  Dell PowerEdge 2950 (geodude)     12   zubat-6    zubat-17
18-17  Dell PowerEdge 2950 (mankey)      11   zubat-5    zubat-16
15     PDU
14-13  Dell PowerEdge 2950 (jigglypuff)  3    zubat-5    zubat-15
10-9   Dell PowerEdge 2950 (pikachu)     1    zubat-4    zubat-14
8-7    Cisco ASA firewall (arbok)             arbok-0: zubat-1, arbok-1: dialga-Gb2
6-5    Dell PowerEdge 2950 (dialga)      2    meowth-10  arbok-1
4-3    Dell PowerVault MD1000 (dialga), PDU at the back in slot 2
All ports of switch meowth that are not listed in these tables are bound to VLAN 185 (Internet).
1.4 Network connections
No changes have been made to the 3 ADSL connections. We still have the Free, Neuf/SFR and Orange ADSL Pro lines.

However, after the global upgrade, we increased the bandwidth of the Orange SDSL connection to the maximum allowed by the phone line, which is 2Mb/s, as we were reaching the limits of the previous 1Mb/s contract.
2 Operating systems upgrade
2.1 Xen hypervisor and Dom0
We began the Dom0 operating system upgrade with the experimentation environment servers pikachu, jigglypuff and geodude. We upgraded the Debian Linux distribution, as well as the Xen hypervisor and utilities, to version 4.0. The upgrade went fine and all the VMs are running without problem.

However, one problem appeared on jigglypuff and geodude: when running the Xen 4.0 hypervisor, it is not possible to start the X.org server, as there is a known bug between this X server version and Xen 4.0.

Thus, we did not upgrade any other servers, as X may be required on some of them for some experiments, e.g. running tools such as wireshark to analyze traffic on bridge xenbr0 in Dom0.
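On hosts where the X bug prevents running wireshark locally, the bridge traffic can still be captured headlessly and inspected later on another machine. A minimal sketch, assuming a standard tcpdump and the xenbr0 bridge mentioned above (the output path is illustrative):

```shell
# capture full packets on the Dom0 bridge; the file can later be
# opened in wireshark on a machine with a working X server
tcpdump -i xenbr0 -s 0 -w /tmp/xenbr0.pcap
```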
2.2 VMWare
For some of the experiments that may run on the experimentation environment, full hardware virtualization/emulation may be required (e.g. VoIP honeypots or bots that do not work very well with fake/dummy sound cards). To make this possible, we installed a VMWare Player framework on the server geodude, running Ubuntu 10.04.

This also makes it easy to quickly run operating systems that are more difficult to get running on Xen, such as Windows, BSD or some Linux distributions (e.g. CentOS). Ready-to-deploy images are available at http://vmplanet.net/ or http://vmware.pouf.org/.

In order to enable better control of these VMs, the solution can be upgraded to VMWare vSphere Hypervisor 2.
2.3 Virtual machines
After the upgrade, we planned to update all VMs as well. However, due to the bug with the X server, we postponed this task. We kept the basis of our VMs (kernel 2.6.18) and simply updated the Debian OS within them, as these guests work fine with both Xen 3.X and 4.0.

We generated a new reference VM based on Debian SID, instantiated and stored on dialga, which we used as the reference for the soft upgrade.
2http://www.vmware.com/fr/products/vsphere-hypervisor/
3 Telescope upgrade
3.1 Surfnet IDS 3.05
The first step was to upgrade the database server itself to match the requirements of Surfnet IDS 3.05:

# install the new PostgreSQL version (this creates an empty new cluster)
apt-get install postgresql
# drop the freshly created 8.3 cluster so that its port and paths are free
pg_dropcluster 8.3 main --stop
# migrate the existing 8.1 cluster and its data to the new version
pg_upgradecluster 8.1 main
Then, we installed the logging server as detailed in http://ids.surfnet.nl/wiki/doku.php?id=latest_docs:1_logging_server:1._installation. Following the information at http://ids.surfnet.nl/wiki/doku.php?id=latest_docs:1_logging_server:2._configuration, we configured the logserver. It is important not to activate the option c_minifield_enable: even though the comments state that it will boost the web interface, many JavaScript files are missing. Moreover, one must fetch the GeoLite data manually at http://www.maxmind.com/app/geolitecity and put the GeoLiteCity.dat file in /opt/surfnetids/include, as the automatic script does not work well.
We obtained a working logserver with no data loss, supporting new honeypots, and thusallowing more variety in terms of honeypots and vulnerabilities.
3.2 Honeypots
To register the new sensors, we modified the localsensor.pl script from the tunnel server, and created the script update_sensors.pl. This script updates the sensors table with the new information and adds records in the newly created sensor_details table. Thus, we are not creating new sensors but reusing the same identifiers as before; we only update the sensor name, the honeypot type being set when logging attacks.

One important change here is that the sensors are now considered permanent, and the flags in the Sensors Status tab of the web interface can no longer be used to detect sensor failures. To check whether a sensor is running, you must now open the Sensor Details tab by clicking on the sensor name, and check the field Last keepalive.
3.2.1 Nepenthes
Nepenthes is the sole honeypot that was used in the first deployment of the telescope. The project moved to http://nepenthes.carnivore.it/ and is no longer active, as it has been replaced by Dionaea (section 3.2.2).

We kept 14 instances of Nepenthes running on the server mew.
3.2.2 Dionaea
Dionaea 3 is meant to be the successor of Nepenthes, embedding Python as its scripting language, using libemu to detect shellcodes, and supporting IPv6 and TLS.
3http://dionaea.carnivore.it/
By following the instructions at http://dionaea.carnivore.it/#compiling, we installed and configured Dionaea in a new VM called dionaea-reference. All utility scripts we had previously written for Nepenthes have been updated to support Dionaea:

dionaea-clone.pl  clones the VMs

dionaea-alive.pl  updates the timestamp in the sensor_details table to tell the logserver that the sensor is still alive, and restarts the honeypot if required

dionaea-scp.pl  uploads the captured binaries and binary streams to dialga, using the new upload user and keys

We first deployed instances of Dionaea on all servers in the collect environment (excluding mew) and on all ADSL connections. However, we were saturating the network connection and downloading 18 000 malware samples per day, most of them redundantly, across the sensors. To favor the diversity of the collected data and emulated vulnerabilities, we decided to limit the deployment to the ADSL connections (to keep the opportunity to analyze the differences between them) and 12 instances on the servers psyduck, bulbasaur and squirtle, on the SDSL connection.
3.2.3 Kippo
Kippo 4 is a medium-interaction SSH honeypot designed to log brute-force attacks and, most importantly, the entire shell interaction performed by the attacker. Kippo is inspired by, but not based on, Kojoney.

Following the instructions at http://ids.surfnet.nl/wiki/doku.php?id=kb:installing_kippo, we created and configured a new reference VM for Kippo called kippo-reference. The main difference with the documentation is the addition of a kippo user and group, used to run Kippo as a non-root system user via the init.d script. This user is not allowed to log into the system; its sole role is to run Kippo.

As Kippo emulates an SSH server, it tries to bind port 22, but since it runs as a non-root user, this fails. Thus, Kippo is configured to bind port 2222, and the traffic is redirected to this port via the Netfilter REDIRECT target, which ensures that the source IP is not altered, with the command:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-port 2222
Of course, the real SSH server running on the VM does not bind port 22, but port TCP 2220. Once again, all the utility scripts have been written for this new honeypot. The only difference with the usual ones is that, for stability reasons, Kippo is automatically restarted every 30 minutes. The logserver's database has been updated to support Kippo, as well as the web interface.
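The half-hourly restart can be expressed as a single cron entry; this is a sketch assuming Kippo is driven by the init.d script mentioned above (the exact script path is an assumption):

```shell
# hypothetical /etc/crontab fragment: restart the kippo service every 30 minutes
*/30 * * * * root /etc/init.d/kippo restart >/dev/null 2>&1
```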
At the moment, Kippo does not support non-root logins. However, several passwords have been set:
4 http://code.google.com/p/kippo/
- one in the configuration file, which is the same as the hostname of the emulated host

- several obvious passwords, inserted in a DBM file located at /opt/kippo/data/pass.db with the tool /opt/kippo/utils/passdb.py
3.2.4 Amun
Amun 5 is another honeypot similar to Nepenthes and Dionaea. In order to diversify the detection methods and vulnerabilities, we deployed 14 instances on the server onix by following the documentation at http://ids.surfnet.nl/wiki/doku.php?id=latest_docs:2_tunnel_server:1d._amun. We wrote all support scripts and configured the honeypot to log into the logserver. The amun-clone.pl script also modifies the sensor IP in the configuration file /opt/amunhoney/conf/log-surfnet.conf in the VM.
3.3 Traces
3.3.1 TCPDump
No major changes have been made. We are still capturing traffic on all network bridges to the Internet (xenbr0) in the Xen Dom0, but the upload of the traces is now done with the upload user.
3.3.2 Netflow
Netflow probes are running on various servers to monitor the flows going through the network bridge to the Internet (xenbr0). The following table presents all the probes:
Figure 3: Netflow probes in the telescope

Source      Destination         Destination Port
psyduck     dialga - 10.1.1.1   9556
bulbasaur   dialga - 10.1.1.1   555
squirtle    dialga - 10.1.1.1   9557
charmander  dialga - 10.1.1.1   9558
onix        dialga - 10.1.1.1   9559
togepi      dialga - 10.1.1.1   9560
mew         dialga - 10.1.1.1   9561
Tor VM      dialga - 10.1.1.1   9562
geodude     dialga - 10.1.1.1   9563
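The report does not name the probe software; assuming a softflowd-style exporter, each probe line in the table corresponds to an invocation of this form (shown here for psyduck):

```shell
# hypothetical probe on psyduck: watch the xenbr0 bridge and export
# NetFlow records to the collector on dialga, port 9556
softflowd -i xenbr0 -n 10.1.1.1:9556
```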
5http://amunhoney.sourceforge.net/
Firewall rules are set in arbok and dialga to allow these flow exports. Rules are still present for the Tor monitoring VM, but as it is no longer running, no flows are exported.

These flows are still collected using nfsen, whose interface can be accessed at http://dialga/nfsen/nfsen.php.
3.4 leurrecom.org
The leurrecom.org project aims at getting a more realistic picture of the attacks happening on the Internet, as well as their root causes (i.e. organized crime vs script kiddies), using unbiased quantitative data. For more than 3 years now, a worldwide distributed set of identical honeypots has been deployed in many different countries. All their tcpdump files are centralized in a database that all partners can access for free. As of today, around 50 instances are running in almost 30 different countries covering the 5 continents.

Our contacts in the project are Marc Dacier and Corrado Leita. We followed the instructions they provided to perform the installation with a custom Linux distribution based on Fedora Core 7.

However, as our hardware was very recent, some devices were not well supported by the existing drivers. The SATA optical drive in the server was not recognized, and we had to use an external DVD drive to perform the installation, after trying to generate a USB stick, which failed because of hard-coded steps in the installation process.

Once the installation was finalized, neither of the network cards (the integrated Broadcom NetXtreme II 5716 or the additional Intel Pro 1000 ET 82576) was supported, and we had to install the appropriate driver for one of them. We decided to manually install the igb 6 driver (version 2.3.4) for the Intel Pro 1000 card. We installed all the dependencies via USB stick, and configured the system to recognize the card's ports as eth0 and eth1 instead of eth2 and eth3 by editing /etc/modprobe.conf and /etc/udev/rules.d/60-net.rules.

All the operation and maintenance tasks are performed on the centralized logging server of the project by the leurrecom.org team.
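The interface pinning amounts to binding each MAC address to a fixed name in the udev rules; a sketch of the 60-net.rules entries, with placeholder MAC addresses (the real ones are not given in this report):

```shell
# illustrative /etc/udev/rules.d/60-net.rules fragment (placeholder MACs)
KERNEL=="eth*", SYSFS{address}=="00:1b:21:aa:bb:00", NAME="eth0"
KERNEL=="eth*", SYSFS{address}=="00:1b:21:aa:bb:01", NAME="eth1"
```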
6http://e1000.sf.net
4 Experiments
4.1 Tor
We had been running a Tor relay router on nidoran for several months (migrated from mankey). Due to several abuse reports from CERT organizations of various countries (we were the exit router in the anonymization path of attackers), we shut that VM down. It is still stored on the server, and can be started for one-shot experiments.

During its run, we collected 195GB of tcpdump data (only the headers, not the payloads) and 5.8GB of network flows. We now need to study and analyze all these data.

It is possible to obtain the list of all Tor relays at a given timestamp via Tor's official data collection project 7. It contains archives of Tor relay descriptors since May 2004, as well as bridge descriptor archives, statistics, and information about the performance of the network.

Alongside this data collection project, a tool called Ernie can generate graphs and save all types of archived data about the Tor network, such as relay descriptors, into a PostgreSQL 8.3 database. This could be very helpful when analyzing the collected data about the Tor network, or when correlating this information with the telescope itself.
4.2 Peer-to-peer monitoring
In the scope of the MAPE project, we have 2 VMs, mape-manager and mape-manager-juan, running on geodude. These VMs are used by Thibault Cholez for P2P monitoring and measurements in the KAD network. When performing large-scale experiments, the SDSL bandwidth as well as the disk space on the VMs (20GB by default) were saturated. Therefore, we were using a dedicated computer plugged on the Free ADSL connection.

Another VM, bittorrent-monitoring-01, is running on geodude and is used by Juan Pablo Timpanaro for the same kind of experiments on the BitTorrent network. We may face the same kind of problem as for KAD when performing large-scale experiments.

For all these VMs, the PostgreSQL servers accept up to 350 simultaneous connections. To start the server, the kernel parameters SHMMIN, SHMMAX and SHMALL must be set to the values specified in /etc/sysctl.conf. To make sure that these VMs have sufficient memory and CPU, we boosted their resources (4 VCPUs and 1GB of memory instead of 1 VCPU and 512MB).
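For illustration, such settings live in /etc/sysctl.conf and are applied with sysctl -p; the values below are placeholders, as the report does not list the ones actually used:

```shell
# illustrative shared-memory settings for a PostgreSQL server
# accepting several hundred simultaneous connections
kernel.shmmax = 268435456    # largest single shared memory segment, in bytes
kernel.shmall = 2097152      # total shared memory, in pages
```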
4.3 VoIP honeypots
In the scope of the VAMPIRE ANR project, we have deployed several VoIP honeypots on the nidoran server used by Laurent Andrey:

voip-honeypot-01  Debian Linux

voip-honeypot-02  CentOS 4.8, for the Orange SIP honeyd-based honeypot
7http://metrics.torproject.org/data.html
voip-honeypot-03  Debian Linux, for the Artemisa deployment

voip-honeypot-04  Debian Linux, running Dionaea with the SIP emulation module activated

All logs and outputs are stored on dialga at /data/users/vampire. The vampire user on the server is in charge of fetching the logs via rsync from the different honeypots.
4.4 SSH honeypot with University of Luxembourg
A custom SSH honeypot, hali, was deployed in cooperation with the University of Luxembourg. It was an SSH tunnel between a Debian VM running on pikachu and the actual honeypot in Luxembourg. This honeypot is no longer running, but the VM is still stored on pikachu if required.
5 Future work
Even if the upgrade has been successfully performed, there are still some points that need investigation.

First of all, we have collected, and are still collecting, lots of data of many different kinds (pcap traces, network flows, attack logs...), but we have not yet analyzed them rigorously.

The telescope is running 4 different honeypots at the moment (Nepenthes, Dionaea, Kippo and Amun), but SurfnetIDS has been designed to work with its own honeypots. Deploying a tunnelserver instance on dialga and Surfnet sensors on the server togepi would allow us to finalize the upgrade and the diversification of the honeypots.

Finally, in the experimentation environment, we still have to upgrade the VMWare Player on mankey to VMWare vSphere. If we want to open this environment to partners, the Open Management Framework 8 must be investigated and maybe deployed if required. We may also need to define and write an NDA or another kind of document.
8http://omf.mytestbed.net/projects/omf/wiki
6 Conclusion
We performed a full software upgrade of the platform and upgraded the main network connection to match the new requirements following the update. We have 81 sensors running, based on 4 different honeypots, and still need to deploy 14 on the server togepi. We keep on collecting network traces and flow records, together with the binaries and attack details, on the honeypots.

We have also developed the experimentation environment and performed various studies on various subjects (P2P monitoring and observation, VoIP honeypots...).

With the new honeypots, we obtained the following results on 11 November 2010:
Figure 4: Results for 11 November 2010

Detected                    Connections
Possible malicious attack   225 258
Malicious attack            53 337
  Nepenthes                 49
  Amun                      14 444
  Dionaea                   37 141
  Kippo                     1 703
Malware offered             52 235
Malware downloaded          11 858
Overall, since 9 September 2008, the telescope has recorded the following attacks:
Figure 5: Results since 9 September 2008

Detected                          Connections
Possible malicious attack         34 962 357
Malicious attack                  2 296 567
  Nepenthes                       1 384 035
  Amun                            15 982
  Dionaea                         804 682
  Kippo                           91 868
Malware offered                   2 222 058
Malware downloaded                3 352 845
Total number of unique binaries   125 605
We collected a total of 1 010GB of pcap dump traces and 20GB of network flow records.
ISSN 0249-0803