Getting Started with the Logentries & Logstash Integration
by Bartlomiej Siniarski

Logstash is an open source tool for managing events and logs. It is used to collect, search, and store logs for later use. If you are already using Logstash to collect logs from across your infrastructure and are looking for a more sophisticated log analytics tool, you are in the right place.

In this post I will show you how to configure Logstash to forward all of your logs to your Logentries account using the Logentries output plugin and a token-based connection.

Prerequisites

The Logstash contrib plugins package ships with a Logentries plugin pre-installed. In order to forward logs from Logstash to your Logentries account, you need to create a configuration file in your main Logstash folder. Each plugin has its own settings, but every configuration file is built from the same three main sections: inputs, filters, and outputs.

# Configuration file
input {
  ...
}
filter {
  ...
}
output {
  ...
}

Let's call our configuration file connection.conf for now and start to fill in these sections one by one.

Input
The input section can be configured to read from an Elasticsearch cluster, a local file, syslog, TCP, UDP, Heroku, and many other sources. In this post we are going to read from a local access.log file.

input {
  file {
    path => "/var/log/access.log"
  }
}

The user is able to assign additional settings to the input configuration (a short sketch follows this list), such as:

  • path
  • codec
  • start_position
  • tags
  • host
  • port

The parameters listed above vary based on the input source and its configuration.
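
To make these concrete, here is a minimal sketch combining a file input and a tcp input with a few of the settings above. The start position, tags, address, and port values are illustrative assumptions, not something you must copy:

input {
  file {
    path => "/var/log/access.log"
    start_position => "beginning"  # read the file from the top instead of tailing new lines
    tags => ["apache", "access"]   # tags you can match on later in filters
  }
  tcp {
    host => "0.0.0.0"  # listen on all interfaces
    port => 5140       # illustrative port
    codec => "json"    # decode each incoming event as JSON
  }
}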

Filter
Filters are used as intermediary processing devices in the Logstash chain. They are often combined with conditionals in order to perform a certain action on an event if it matches particular criteria. I will present the output with and without an active filter.

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

Ok, what is actually happening here?

Firstly, we are using the grok filter, which is currently the best way in Logstash to parse unstructured log data into something structured and queryable. Grok makes it easy to parse logs with regular expressions by assigning labels to commonly used patterns. One such pattern is COMBINEDAPACHELOG, which matches the standard Apache combined log format.
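
As a quick sketch of the conditionals mentioned above, the filter block could tag events whose parsed response code is 404. The field test and tag name here are illustrative assumptions layered on top of the grok filter, not part of the original setup:

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # grok extracts a "response" field from the Apache log line,
  # so a conditional can act on it (illustrative example)
  if [response] == "404" {
    mutate {
      add_tag => ["not_found"]
    }
  }
}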

Filter Inactive

46.7.24.63 LOG message='111.141.244.242 - kurt [18/May/2011:01:48:10 -0700] "GET /admin HTTP/1.1" 301 566 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"' @version=1 @timestamp='2015-02-19T17:59:49.834Z' host='Bart-MacBook-Pro.local' path='/var/log/Apache.log'

Filter Active

46.7.24.63 LOG message='111.141.244.242 - kurt [18/May/2011:01:48:10 -0700] "GET /admin HTTP/1.1" 301 566 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"' @version=1 @timestamp='2015-02-19T18:07:37.437Z' host='Bart-MacBook-Pro.local' path='/var/log/Apache.log' clientip=111.141.244.242 ident='-' auth=kurt timestamp='18/May/2011:01:48:10 -0700' verb=GET request='/admin' httpversion=1.1 response=301 bytes=566 referrer='"-"' agent='"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"'

Output
This section takes advantage of the Logentries plugin and configures Logstash to forward all logs from the access.log file stored locally on our machine to a Logentries account using a unique token.

output {
  logentries {
    token => "LOGENTRIES_TOKEN"
  }
}
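
Putting the three sections together, the complete connection.conf used in this post would look roughly like this (LOGENTRIES_TOKEN stands in for the token generated in your Logentries account):

# connection.conf
input {
  file {
    path => "/var/log/access.log"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  logentries {
    token => "LOGENTRIES_TOKEN"
  }
}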

Start Sending Logs
The plugin (logentries.rb) has to be stored in Logstash's outputs folder (lib/logstash/outputs):

logstash-x.x.x
├── bin
├── lib
│   └── logstash
│       └── outputs
│           └── logentries.rb
├── LICENSE
├── locales
├── connection.conf
├── patterns
├── README.md
├── spec
Simply save your configuration file and run bin/logstash -f connection.conf. Your logs will now be forwarded directly to your Logentries account, where they are easily accessible for tagging, real-time alerting, and data visualization. Don't have a Logentries account? Get started here in minutes for free!
