Five Logstash Alternatives

Each log shipper has its pros and cons, and ultimately the right choice depends on your requirements

When it comes to centralizing logs to Elasticsearch, the first log shipper that comes to mind is Logstash. People hear about it even if it's not clear what it does:
- Bob: I'm looking to aggregate logs
- Alice: you mean... like... Logstash?

When you get into it, you realize centralizing logs often implies a bunch of things, and Logstash isn't the only log shipper that fits the bill:

  • fetching data from a source: a file, a UNIX socket, TCP, UDP...
  • processing it: appending a timestamp, parsing unstructured data, adding Geo information based on IP
  • shipping it to a destination. In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry

In this post, we'll describe Logstash and five alternative log shippers (Filebeat, Logagent, rsyslog, syslog-ng and Fluentd), so you know which one fits which use-case.

Logstash
While it's not the oldest shipper on this list (that would be syslog-ng, ironically the only one with "new" in its name), Logstash is certainly the best known. That's because it has lots of plugins: inputs, codecs, filters and outputs. Basically, you can take pretty much any kind of data, enrich it as you wish, then push it to lots of destinations.

Strengths
Logstash's main strong point is flexibility, thanks to its many plugins. Its clear documentation and straightforward configuration format also mean it gets used in a wide variety of use-cases. This leads to a virtuous cycle: you can find online recipes for doing pretty much anything. Here are a few examples from us: a 5-minute intro, reindexing data in Elasticsearch, parsing Elasticsearch logs, and rewriting Elasticsearch slowlogs so you can replay them with JMeter.
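To make that configuration format concrete, here's a minimal sketch of a pipeline that tails an Apache access log, parses and enriches it, then pushes to Elasticsearch. The file path, index name and hosts are hypothetical; adjust them to your setup:

```
input {
  file {
    path => "/var/log/apache2/access.log"   # hypothetical path
    start_position => "beginning"
  }
}

filter {
  grok {
    # parse the unstructured line into fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the log's own timestamp instead of the time of processing
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    # enrich with Geo information based on the client IP
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "access-logs-%{+YYYY.MM.dd}"
  }
}
```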

Weaknesses
Logstash's Achilles' heel has always been performance and resource consumption (the default heap size is 1GB). Though performance has improved a lot over the years, it's still a lot slower than the alternatives. We've done some benchmarks comparing Logstash to rsyslog, and to Filebeat combined with Elasticsearch's Ingest node. This can be a problem for high-traffic deployments, where the Logstash servers would need to be comparable in capacity to the Elasticsearch ones.

Another problem is that Logstash doesn't offer buffering of its own yet. A typical workaround is to use Redis or Kafka as a central buffer:

Logstash - Kafka - Elasticsearch
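As a sketch of both sides of that pipeline (broker address, topic and group names are hypothetical, and the Kafka plugin options have shifted between Logstash versions, so check the docs for yours):

```
# on the shipping Logstash: publish events to Kafka instead of Elasticsearch
output {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topic_id => "logs"
  }
}

# on the indexing Logstash: consume from Kafka and push to Elasticsearch
input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics => ["logs"]
    group_id => "logstash-indexers"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```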

Typical use-case
Because of the flexibility and abundance of recipes, Logstash is a great tool for prototyping, especially for more complex parsing. If you have big servers, you might as well install Logstash on each. You won't need buffering if you're tailing files, because the file itself can act as a buffer (i.e. Logstash remembers where it left off):

Logstash - Elasticsearch

If you have small servers, installing Logstash on each one is a no-go, so you'll need a lightweight log shipper on them, which can push data to Elasticsearch through one (or more) central Logstash servers:

Light shipper - Logstash - Elasticsearch

As your logging project moves forward, you may or may not need to change your log shipper because of performance or cost. When deciding whether Logstash performs well enough, it's important to have a good estimate of your throughput needs, which in turn predicts how much you'd spend on Logstash hardware.

Filebeat
As part of the Beats "family", Filebeat is a lightweight log shipper that came to life precisely to address Logstash's main weakness: it was designed to be the lightweight agent that tails files and pushes to Logstash.

With version 5.x, Elasticsearch has some parsing capabilities of its own (like Logstash's filters), called Ingest. This means you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both the parsing and the storing. You shouldn't need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off:

Filebeat - Ingest - Elasticsearch
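A minimal filebeat.yml sketch for this setup, in 5.x-era syntax (the paths are hypothetical, and the pipeline option assumes you've already defined an Ingest pipeline with that name in Elasticsearch):

```
# tail files; the registry file remembers offsets across restarts
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/apache2/*.log

# ship directly to Elasticsearch; the Ingest pipeline does the parsing
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "apache_logs"
```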

If you need buffering (e.g. because you don't want to fill up the file system on logging servers), you can use Redis/Kafka, because Filebeat can talk to them:

Filebeat - Kafka - Elasticsearch
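The output section would then point at Kafka instead (broker and topic are hypothetical; note that Filebeat enables only one output at a time):

```
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "logs"
```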

Strengths
Filebeat is just a tiny binary with no dependencies. It takes very few resources and, though it's young, I find it quite reliable - mainly because it's simple and there are few things that can go wrong. That said, you still have lots of knobs: for example, how aggressively it should search for new files to tail, and when to close file handles for files that haven't changed in a while.

Weaknesses
Filebeat's scope is very limited, so any heavier processing has to happen somewhere else in the pipeline. For example, if you use Logstash down the pipeline, you run into roughly the same performance issues. Because of this, Filebeat's scope is growing: initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and with 5.x it also gains filtering capabilities.

Typical use-cases
Filebeat is great for solving a specific problem: you log to files, and you want to either:

  • ship them directly to Elasticsearch. This works if you just want to "grep" them, or if you log in JSON (Filebeat can parse JSON; see the sketch after this list). Or, if you want to use Elasticsearch's Ingest for parsing and enriching (assuming the performance and functionality of Ingest fit your needs)
  • put them in Kafka/Redis, so another shipper (e.g. Logstash, or a custom Kafka consumer) can do the enriching and shipping. This assumes that the chosen shipper fits your functionality and performance needs
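For the JSON case in the first bullet, a hedged sketch of the relevant prospector settings (5.x-era option names; paths are hypothetical):

```
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/*.json
    # decode each line as JSON and promote its fields to the top level
    json.keys_under_root: true
    # add an error key if a line turns out not to be valid JSON
    json.add_error_key: true
```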

Logagent
This is our log shipper, born out of the need to make it easy for someone who has never used a log shipper before to send logs to Logsene (our logging SaaS, which exposes the Elasticsearch API). And because Logsene exposes the Elasticsearch API, Logagent can just as easily be used to push data to Elasticsearch.

Strengths
The main one is ease of use: if Logstash is easy to use (granted, you still need a bit of learning if you've never used it, which is natural), Logagent really gets you started in a minute. It tails everything in /var/log out of the box and parses various logging formats out of the box (Elasticsearch, Solr, MongoDB, Apache HTTPD...). It can mask sensitive data like PII, dates of birth, credit card numbers, etc. It will also do GeoIP enriching based on IPs (e.g., for access logs) and update the GeoIP database automatically. It's light and fast, too, so you'll be able to put it on most logging boxes (unless you have very small ones, like appliances). The new 2.x version added support for pluggable inputs and outputs in the form of third-party Node.js modules. Very importantly, Logagent has local buffering, so, unlike Logstash, it won't lose your logs when the destination is unavailable.
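As a rough sketch only (the flags below are from memory and may differ between Logagent versions, so treat them as assumptions and check the current docs), pointing Logagent at an Elasticsearch endpoint can be a one-liner:

```
# tail everything matching the glob, parse known formats, ship to Elasticsearch
# -e: Elasticsearch (or Logsene) endpoint, -i: target index - both hypothetical here
logagent -e http://localhost:9200 -i my-logs --glob '/var/log/**/*.log'
```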

Weaknesses
Logagent is still young, although it is developing and maturing quickly. It has some interesting functionality (e.g., it accepts Heroku and CloudFoundry logs), but it is not yet as flexible as Logstash.

Typical use-cases
Logagent is a good choice for a shipper that can do everything (tail, parse, buffer - yes, it can buffer on disk - and ship) and that you can install on each logging server, especially if you want to get started quickly. Logagent is also embedded in Sematext Docker Agent to parse and ship Docker container logs. Sematext Docker Agent works with Docker Swarm, Docker Datacenter, Docker Cloud, as well as Amazon EC2, Google Container Engine, Kubernetes, Mesos, RancherOS, and CoreOS, so for Docker log shipping, this is the tool to use.

rsyslog
The default syslog daemon on most Linux distros, rsyslog can do much more than just pick logs from the syslog socket and write them to /var/log/messages. It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch. You can find a howto for processing Apache and system logs on our blog.
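To give a feel for it, here's a hedged sketch in the new-style configuration that tails a file and ships JSON to Elasticsearch (paths, tag and index name are hypothetical; older rsyslog versions also require a stateFile on the imfile input):

```
module(load="imfile")            # tail files
module(load="omelasticsearch")   # ship to Elasticsearch

input(type="imfile" File="/var/log/myapp.log" Tag="myapp:")

# render each message as a small JSON document
template(name="es-json" type="list") {
  constant(value="{\"timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"message\":\"")
  property(name="msg" format="json")
  constant(value="\"}")
}

action(type="omelasticsearch"
       server="localhost"
       serverport="9200"
       template="es-json"
       searchIndex="system-logs"
       bulkmode="on")             # use the bulk API instead of one request per log
```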

Strengths
rsyslog is the fastest shipper we've tested so far. If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth, but it really shines when you need to parse with multiple rules. Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim). This means that with 20-30 rules, like you'd have when parsing Cisco logs, it can outperform regex-based parsers like grok by a factor of 100 (it can be more or less, depending on the grok implementation and the liblognorm version).
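For illustration, mmnormalize rules live in a rulebase file that describes the message layout once, rather than a regex that is re-evaluated per rule. A hypothetical rule for sshd-style lines such as "Accepted password for jdoe from 10.0.0.1" would look like:

```
rule=: Accepted password for %user:word% from %src-ip:ipv4%
```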

It's also one of the lightest parsers you can find; how light depends mainly on the configured memory buffers.

Weaknesses
rsyslog requires more work to get the configuration right (you can find some sample configuration snippets on our blog), and this is made more difficult by two things:

  • documentation is hard to navigate, especially for somebody new to the terminology
  • versions up to 5.x had a different configuration format (expanded from the syslogd config format, which is still supported). Newer versions can still work with the old format, but most newer features (like the Elasticsearch output) only work with the new configuration format, while some older plugins (for example, the Postgres output) only support the old one

Though rsyslog tends to be reliable once you get to a stable configuration (and it's rich enough that there are usually multiple ways of getting the same result), you're likely to find some interesting bugs along the way. Not all features are tested as part of the testbench.

Typical use-cases
rsyslog fits well in scenarios where you need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container). If you need to do processing in another shipper (e.g. Logstash), you can forward JSON over TCP, for example, or connect the two via a Kafka/Redis buffer.
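A sketch of that forwarding (host, port and template name are hypothetical; the template could be the JSON one from the earlier example):

```
action(type="omfwd"
       target="logstash.example.com"
       port="5514"
       protocol="tcp"
       template="es-json")   # send pre-formatted JSON over TCP
```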

rsyslog also works well when you need ultimate performance, especially if you have multiple parsing rules; then it makes sense to invest the time to get the configuration working.

syslog-ng
You can think of syslog-ng as an alternative to rsyslog (though historically it was actually the other way around). It's also a modular syslog daemon that can do much more than just syslog. It recently received disk buffers and an Elasticsearch HTTP output. Equipped with a grammar-based parser (PatternDB), it has everything you probably need to be a good log shipper to Elasticsearch.
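A hedged sketch of a syslog-ng configuration along those lines, using the elasticsearch2 destination in HTTP client mode (file path, index and URL are hypothetical, and option names vary a bit across versions):

```
source s_myapp {
  file("/var/log/myapp.log");
};

destination d_es {
  elasticsearch2(
    client_mode("http")
    cluster_url("http://localhost:9200")
    index("myapp-logs")
    type("logs")
  );
};

log {
  source(s_myapp);
  destination(d_es);
};
```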

Strengths
Like rsyslog, it's a light log shipper and it also performs well. It used to be a lot slower than rsyslog, and I haven't benchmarked the two recently, but 570K logs/s two years ago isn't bad at all. Unlike rsyslog, it features a clear, consistent configuration format and nice documentation.

Weaknesses
The main reason distros switched to rsyslog was syslog-ng's Premium Edition, which used to be much more feature-rich than the Open Source Edition, which was somewhat restricted back then. We're concentrating on the Open Source Edition here, since all the log shippers in this post are open source. Things have changed in the meantime: for example, disk buffers, which used to be a PE feature, landed in OSE. Still, some features, like the reliable delivery protocol (with application-level acknowledgements), have not made it to OSE yet.

Typical use-cases
Similarly to rsyslog, you'd probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing. As with rsyslog, there's a Kafka output that allows you to use Kafka as a central queue and potentially do more processing in Logstash or a custom consumer:

syslog-ng - Kafka - Elasticsearch

The difference is that syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance: for example, only outputs are buffered, so processing is done before buffering, meaning that a processing spike would push back-pressure upstream on the logging pipeline.

Fluentd
Fluentd was built on the idea of logging in JSON wherever possible (a practice we totally agree with), so that log shippers down the line don't have to guess which substring is which field of which type. As a result, there are fluent libraries for virtually every language, meaning you can easily plug your custom applications into your logging pipeline.
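For example, with the Python fluent-logger library, an application can emit structured events directly to a local Fluentd (the tag, port and payload below are hypothetical):

```python
# pip install fluent-logger; assumes Fluentd listens on the forward port (24224)
from fluent import sender

# events will be tagged app.<label> and sent to the local Fluentd instance
logger = sender.FluentSender('app', host='localhost', port=24224)

# the payload is already structured, so nothing downstream has to guess at fields
logger.emit('login', {'user': 'jdoe', 'source_ip': '10.0.0.1'})
```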

Strengths
Like most Logstash plugins, Fluentd plugins are written in Ruby and are very easy to write. So there are lots of them: pretty much every source and destination has a plugin (with varying degrees of maturity, of course). This, coupled with the fluent libraries, means you can easily hook almost anything to anything using Fluentd.
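On the Fluentd side, a minimal sketch that accepts those events and routes them to Elasticsearch via the fluent-plugin-elasticsearch plugin (host, port and match pattern are assumptions):

```
<source>
  @type forward         # listen for events from fluent-logger libraries
  port 24224
</source>

<match app.**>
  @type elasticsearch   # requires the fluent-plugin-elasticsearch gem
  host localhost
  port 9200
  logstash_format true  # write to daily logstash-YYYY.MM.DD indices
</match>
```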

Weaknesses
Because in most cases you'll get structured data through Fluentd, it's not made to have the flexibility of the other shippers on this list (Filebeat excluded). You can still parse unstructured data via regular expressions and filter events using tags, for example, but you don't get features such as local variables or full-blown conditionals. Also, while performance is fine for most use-cases, it's not at the top of this list: buffers exist only for outputs (as in syslog-ng), and the single-threaded core plus the Ruby GIL for plugins mean that ultimate performance on big boxes is limited, though resource consumption is acceptable for most use-cases. For small/embedded devices, you might want to look at Fluent Bit, which is to Fluentd what Filebeat is to Logstash.

Typical use-cases
Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the sheer number of plugins. Also, if most of your sources are custom applications, you may find it easier to work with the fluent libraries than to couple a logging library with a log shipper - especially if your applications are written in multiple languages, meaning you'd otherwise use multiple logging libraries, which may behave differently.

The conclusion?
First of all, the conclusion is that you're awesome for reading all the way to this point. If you did, you already get the nuance of an "it depends on your use-case" kind of answer. All these shippers have their pros and cons, and ultimately it's down to your specifications (and, in practice, also to your personal preferences) to choose the one that works best for you. If you need help deciding, integrating, or really any help with logging, don't be afraid to reach out - we offer Logging Consulting. Similarly, if you're looking for a place to ship your logs and want to avoid the costs and headaches of running the full ELK/Elastic Stack on your own servers, check out Logsene - it exposes the Elasticsearch API, so you can use it with all the shippers we covered here.


About the author

Radu Gheorghe is a search consultant, software engineer and trainer at Sematext Group, working mainly with Elasticsearch, Solr and logging-related projects. He is the co-author of Elasticsearch in Action.
