1 vic hargrave | vichargrave@gmail.com | @vichargrave

Post on 17-Dec-2015


1

Vic Hargrave | vichargrave@gmail.com | @vichargrave

OSSEC Log Management with Elasticsearch

2

$ whoami

• Software Architect for Trend Micro Data Analytics Group

• Blogger for Trend Micro Security Intelligence and Simply Security

• Email: vichargrave@gmail.com

• Website: vichargrave.com

• Twitter: @vichargrave

• LinkedIn: www.linkedin.com/in/vichargrave

3

OSSEC does SIEMs

[Diagram: multiple OSSEC servers forward alerts over syslog to a SIEM, commercial or open source]

4

Commercial SIEMs are great, but…

[Diagram: commercial SIEMs come at a price]

5

Now there’s a whole new (open-source) ballgame

[Logos: Logstash, Kibana]

6

OSSEC Log Management with Elasticsearch

7

Elasticsearch

• Open source, distributed, full text search engine

• Based on Apache Lucene

• Stores data as structured JSON documents

• Supports single system or multi-node clusters

• Easy to set up and scale – just add more nodes

• Provides a RESTful API

• Installs with RPM or DEB packages and is controlled with a service script.

8

Elasticsearch Elements

• Index – contains documents, ≅ table

• Document – contains fields, ≅ row

• Field – contains string, integer, JSON object, etc.

• Shard – smaller divisions of data that can be stored across nodes

• Replica – copy of the primary shard
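The index/document/field hierarchy maps directly onto the RESTful API from the previous slide. A hedged sketch, assuming a node listening on localhost:9200; the "ossec" index, "alert" type, and field names here are made up for illustration and the commands require a running cluster:

```shell
# Index a document (Elasticsearch creates the "ossec" index on the fly)
curl -XPUT 'http://localhost:9200/ossec/alert/1' -d '{
  "Rule": "5402",
  "Description": "Successful sudo to ROOT executed"
}'

# Fetch the document back by ID, or search across the whole index
curl -XGET 'http://localhost:9200/ossec/alert/1'
curl -XGET 'http://localhost:9200/ossec/_search?q=Rule:5402'
```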

9

Elasticsearch Multi-node Configuration

# default configuration file - /etc/elasticsearch/elasticsearch.yml

######################### Cluster #########################

# Cluster name identifies your cluster for auto-discovery
#cluster.name: ossec-mgmt-cluster

########################## Node ###########################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#node.name: "es-node-1"  # e.g. Elasticsearch nodes numbered 1 – N

########################## Paths ##########################

# Path to directory where to store index data allocated for this node.
#path.data: /data/0, /data/1

10

Logstash

• Log aggregator and parser

• Supports transferring parsed data directly to Elasticsearch

• Controlled by a configuration file that specifies input, filtering (parsing) and output

• Key to adapting Elasticsearch to other log formats

• Run Logstash from the Logstash home directory as follows:

bin/logstash -f <logstash config file>

11

OSSEC – logstash.conf

input {
#  stdin {}
   udp {
      port => 9000
      type => "syslog"
   }
}

filter {
   if [type] == "syslog" {
      grok {
         # SEE NEXT SLIDE
      }
      mutate {
         remove_field => [ "syslog_hostname", "syslog_message", "syslog_pid", "message", "@version", "type", "host" ]
      }
   }
}

output {
#  stdout {
#    codec => rubydebug
#  }
   elasticsearch_http {
      host => "10.0.0.1"
   }
}
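To exercise this pipeline without a live OSSEC server, you can hand-feed a sample alert line into the UDP input. A minimal Python sketch, assuming Logstash is listening locally on the UDP port 9000 configured above:

```python
import socket

def send_alert(message, host="127.0.0.1", port=9000):
    """Send one syslog-style alert line to Logstash's UDP input.

    Returns the number of bytes sent.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return sock.sendto(message.encode("utf-8"), (host, port))
    finally:
        sock.close()

if __name__ == "__main__":
    sample = ("Jan 7 11:44:30 ossec ossec: Alert Level: 3; Rule: 5402 - "
              "Successful sudo to ROOT executed; "
              "Location: localhost->/var/log/secure; user: user;")
    # Logstash should parse this and index the result in Elasticsearch
    send_alert(sample)
```

UDP is fire-and-forget, so the sender gets no acknowledgment; check Elasticsearch (or the commented-out stdout/rubydebug output) to confirm the alert arrived.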

12

OSSEC Alert Parsing

• OSSEC syslog alert:

Jan 7 11:44:30 ossec ossec: Alert Level: 3; Rule: 5402 - Successful sudo to ROOT executed; Location: localhost->/var/log/secure; user: user; Jan 7 11:44:29 localhost sudo: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/su

• grok { }:

match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}: Alert Level: %{NONNEGINT:Alert_Level}; Rule: %{NONNEGINT:Rule} - %{DATA:Description}; Location: %{DATA:Location}; (srcip: %{IP:Src_IP};%{SPACE})? (dstip: %{IP:Dst_IP};%{SPACE})? (src_port: %{NONNEGINT:Src_Port};%{SPACE})? (dst_port: %{NONNEGINT:Dst_Port};%{SPACE})? (user: %{USER:User};%{SPACE})?%{GREEDYDATA:Details}" }
add_field => [ "ossec_server", "%{host}" ]
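The grok pattern above can be approximated in plain Python to see what fields fall out of an alert line. This is an illustrative sketch only: grok's SYSLOGTIMESTAMP, DATA, IP, etc. are replaced with simple hand-rolled regexes, not the real grok library patterns:

```python
import re

# Rough Python analogue of the grok match on this slide (illustrative only)
ALERT_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_host>\S+) "
    r"(?P<syslog_program>\S+): "
    r"Alert Level: (?P<Alert_Level>\d+); "
    r"Rule: (?P<Rule>\d+) - (?P<Description>.*?); "
    r"Location: (?P<Location>.*?); "
    r"(?:srcip: (?P<Src_IP>[\d.]+);\s*)?"
    r"(?:dstip: (?P<Dst_IP>[\d.]+);\s*)?"
    r"(?:src_port: (?P<Src_Port>\d+);\s*)?"
    r"(?:dst_port: (?P<Dst_Port>\d+);\s*)?"
    r"(?:user: (?P<User>\S+);\s*)?"
    r"(?P<Details>.*)"
)

def parse_alert(line):
    """Parse one OSSEC syslog alert into a dict of named fields, or None."""
    m = ALERT_RE.match(line)
    return m.groupdict() if m else None

sample = ("Jan 7 11:44:30 ossec ossec: Alert Level: 3; Rule: 5402 - "
          "Successful sudo to ROOT executed; Location: localhost->/var/log/secure; "
          "user: user; Jan 7 11:44:29 localhost sudo: user : TTY=pts/0 ; "
          "PWD=/home/user ; USER=root ; COMMAND=/bin/su")

fields = parse_alert(sample)
```

Optional groups such as srcip/dstip simply come back as None when a given alert does not include them, which mirrors how the grok optional `( … )?` groups behave.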

13

Kibana

• General purpose query UI

• JavaScript implementation

• Query Elasticsearch without coding

• Includes many widgets

• Open Kibana in a browser as follows:

http://<web server ip>:<port>/<kibana path>

14

Kibana – config.js

/** @scratch /configuration/config.js/5
 * ==== elasticsearch
 *
 * The URL to your elasticsearch server. You almost certainly don't
 * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch
 * are on the same host. By default this will attempt to reach ES at the
 * same host you have kibana installed on. You probably want to set it to
 * the FQDN of your elasticsearch host
 */
elasticsearch: "http://<elasticsearch node IP>:9200",

15

16

17

Elasticsearch Cluster Management

• ElasticHQ

• Elasticsearch plug-in

• Install from the Elasticsearch home directory:

bin/plugin -install royrusso/elasticsearch-HQ

• Provides cluster and node management metrics and controls

18

19

20

And now for something completely different.

The OSSEC virtual appliance

21

Back to Reality

Free

22

Elasticsearch Security Caveats

• Designed to work in a trusted environment

• No built-in security

• Easy to erase all the data:

curl -XDELETE http://<server>:9200/_all

• Use with a proxy that provides authentication and request filtering, such as Nginx: http://wiki.nginx.org/Main
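One way to front Elasticsearch with Nginx, sketched for illustration only (the listen port, file paths, and htpasswd file are assumptions, not part of the original deck): require basic authentication and refuse destructive DELETE requests such as the `_all` wipe above:

```
# /etc/nginx/conf.d/elasticsearch.conf (illustrative sketch)
server {
    listen 8080;

    location / {
        # Require a login; create the file with: htpasswd -c /etc/nginx/es.htpasswd <user>
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/es.htpasswd;

        # Block destructive requests such as DELETE /_all
        if ($request_method = DELETE) {
            return 403;
        }

        proxy_pass http://127.0.0.1:9200;
    }
}
```

With this in place, clients talk to port 8080 and the Elasticsearch port 9200 can be firewalled off from everything except the proxy.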

23

Further Information

• Elasticsearch: http://www.elasticsearch.org

• Logstash: http://logstash.net

• Kibana: http://www.elasticsearch.org/overview/kibana/

• ElasticHQ: http://elastichq.org

• Elasticsearch for Logging:
http://vichargrave.com/ossec-log-management-with-elasticsearch/
http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html

24

Thanks for attending!

Any questions?
