What Is Syslog?

What is Syslog, why Syslog, and how does it work?

This post has been written by Dr. Miao Wang, a Post-Doctoral Researcher at the Performance Engineering Lab at University College Dublin.

This post is the first in a multi-part series of posts on the many options for collecting and forwarding log data from different platforms and the pros and cons of each. In this first post we will focus on Syslog, and will provide background on the protocol.

What is Syslog?

Syslog has been around for a number of decades and provides a protocol for transporting event messages between computer systems and software applications. The protocol utilizes a layered architecture, which allows the use of any number of transport protocols for transmission of syslog messages. It also provides a message format that allows vendor-specific extensions to be provided in a structured way. Syslog is now standardized by the IETF in RFC 5424 (since 2009), but has been around since the 1980s and for many years served as the de facto standard for logging without any authoritative published specification.

Syslog has gained significant popularity and wide support on major operating system platforms and software frameworks and is officially supported on almost all versions of Linux, Unix, and MacOS platforms. On Microsoft Windows, Syslog can also be supported through a variety of open source and commercial third-party libraries.
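As a quick illustration of that broad support, here is a minimal sketch of emitting a syslog message from Python's standard library. The in-process UDP listener below is only a stand-in for a real syslog daemon (which would normally listen on UDP 514 or the /dev/log Unix socket), so the example runs anywhere without extra setup:

```python
import logging
import logging.handlers
import socket

# Stand-in "collector": a plain UDP socket on an ephemeral port, playing
# the role a local syslog daemon normally would.
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(("127.0.0.1", 0))
port = collector.getsockname()[1]

# The stdlib SysLogHandler speaks the syslog wire format over UDP.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", port),
    facility=logging.handlers.SysLogHandler.LOG_USER,
)
logger = logging.getLogger("syslog-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

logger.warning("su root failed on /dev/pts/7")

# The datagram arrives with a <PRI> prefix: facility*8 + severity,
# here LOG_USER (1) * 8 + warning (4) = 12.
datagram, _ = collector.recvfrom(1024)
print(datagram)
```

The same handler pointed at a remote host and port turns this into remote log forwarding with no other code changes.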

Best practices often promote storing log messages on a centralized server that can provide a correlated view of all the log data generated by different system components. Otherwise, analyzing each log file separately and then manually linking each related log message is extremely time-consuming. As a result, forwarding local log messages to a remote log analytics server/service via Syslog has been commonly adopted as a standard industry logging solution.
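On most Linux systems this kind of forwarding is a one-line configuration change. As a sketch, assuming rsyslog and a placeholder hostname (`logs.example.com`), a fragment like the following forwards a copy of every message to a central collector:

```
# /etc/rsyslog.conf (fragment) -- forward a copy of every message.
# A single '@' forwards over UDP; '@@' forwards over TCP.
*.* @@logs.example.com:514
```

Local logging continues unchanged; the forwarding rule simply adds the remote collector as an additional destination.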

How Does It Work?

The Syslog standard defines three different layers, namely the Syslog content, the Syslog application and the Syslog transport. The content refers to the information contained in a Syslog event message. The application layer is essentially what generates, interprets, routes, and stores the message while the Syslog transport layer transmits the message via the network.


Diagram 1 from the RFC 5424 Syslog Spec

According to the Syslog specification, there is no acknowledgement of message delivery, and although some transports may provide status information, the protocol is described as a pure simplex protocol. Sample deployment scenarios in the spec show arrangements where messages are created by an 'originator' and forwarded on to a 'collector' (generally a logging server or service used for centralized storage of log data). Note that 'relays' can also sit between the originator and the collector and can do some processing on the data before it is sent on (e.g. filtering out events, combining sources of event data).

Applications can be configured to send messages to multiple destinations, and individual syslog components may be running in the same host machine.
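The originator/collector arrangement can be sketched with nothing more than UDP sockets. In this illustration (ports and the helper function are mine, not from the spec), one originator fans the same message out to two stand-in collectors on the same host; because the protocol is simplex, the originator just sends and never waits for an acknowledgement:

```python
import socket
from datetime import datetime, timezone

def rfc5424_message(pri, app, msg):
    """Assemble a minimal RFC 5424 message: <PRI>, version 1, timestamp,
    hostname, app-name, then nil ("-") procid/msgid/structured-data."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    host = socket.gethostname()
    return f"<{pri}>1 {ts} {host} {app} - - - {msg}".encode()

# Two stand-in collectors on the same host, each on an ephemeral port.
collectors = []
for _ in range(2):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    collectors.append(s)

# The originator: fire-and-forget, no acknowledgement comes back.
# PRI 34 = facility 4 (security/auth) * 8 + severity 2 (critical).
originator = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = rfc5424_message(pri=34, app="su", msg="su root failed on /dev/pts/7")
for s in collectors:
    originator.sendto(payload, s.getsockname())

received = [s.recvfrom(1024)[0] for s in collectors]
```

A relay would look much the same: a socket that receives datagrams like these, filters or enriches them, and sends them on to the collector.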

The Syslog Format

Sharing log data between different applications requires a standard definition and format on the log message, such that both parties can interpret and understand each other's information. To provide this, RFC 5424 defines the Syslog log message format and rules for each data element within each message.

A syslog message has the following format: A header, followed by structured-data (SD), followed by a message.

The header of the Syslog message contains "priority", "version", "timestamp", "hostname", "application", "process id", and "message id". It is followed by structured-data, which contains data blocks in the "key=value" format enclosed in square brackets "[]", e.g. [exampleSDID@32473 utilization="high" os="linux"][examplePriority@32473 class="medium"]. In the example image below, the SD is simply represented as "-", which is a null value (the NILVALUE specified by RFC 5424). After the SD value, BOM represents the UTF-8 byte order mark, and "su root failed on /dev/pts/7" is the detailed log message, which should be UTF-8 encoded. (For more details of the data elements of the Syslog protocol, please refer to: http://tools.ietf.org/html/rfc5424)
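To make the header layout concrete, here is a small parsing sketch. The sample line is adapted from RFC 5424's own "su" example (with the BOM omitted for readability), and the regex simply mirrors the field order described above; note how the priority value decomposes into facility and severity:

```python
import re

# Header layout per RFC 5424: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME
# PROCID MSGID, then structured-data (or "-") and the free-form message.
HEADER = re.compile(
    r"<(?P<pri>\d{1,3})>(?P<version>\d+) (?P<timestamp>\S+) (?P<hostname>\S+) "
    r"(?P<app>\S+) (?P<procid>\S+) (?P<msgid>\S+) (?P<rest>.*)"
)

def parse(line):
    fields = HEADER.match(line).groupdict()
    pri = int(fields.pop("pri"))
    # PRI packs two values together: PRI = facility * 8 + severity.
    fields["facility"], fields["severity"] = divmod(pri, 8)
    return fields

sample = ("<34>1 2003-10-11T22:14:15.003Z mymachine.example.com "
          "su - ID47 - 'su root' failed for lonvick on /dev/pts/8")
fields = parse(sample)
# PRI 34 -> facility 4 (security/auth), severity 2 (critical)
```

A real parser would also handle structured-data elements and the BOM, but this shows how each header attribute can be recovered mechanically once the format is fixed.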


A Sample Syslog Message with Format Broken Out

Why Syslog?

The complexity of modern applications and systems is ever increasing, and to understand the behavior of complex systems, administrators, developers, and ops teams often need to collect and monitor all relevant information produced by their applications. Such information often needs to be analyzed and correlated to determine how their systems are behaving. Consequently, administrators can apply data analytic techniques either to diagnose root causes once problems occur or to gain insight into current system behavior based on statistical analysis.

Logs are frequently used as a primary and reliable data source for this task, for lots of reasons, some of which I've listed here:

  • Logs can preserve transient information that allows administrators to roll the system back to a correct state after a failure. E.g. when a banking system fails, transactions lost from main memory can be recovered from the logs.
  • Logs can contain a rich diversity of information produced by individual applications, allowing administrators, developers, and ops teams to understand system behavior from many angles, such as current system statistics, trend predictions, and troubleshooting.
  • Logs are written by the underlying application to hard disk or to external services, so reading these log files has no direct performance impact on the monitored system. Therefore, in a production environment administrators can safely monitor running applications via their logs without worrying about impacting performance.

However, a key aspect of log analysis is understanding the format of the arriving log data, especially in a heterogeneous environment where different applications may use different log formats and network protocols to send their log data. Unless this is well defined, it is quite difficult to interpret log messages sent by an unknown application. To solve this issue, Syslog defines a logging standard that different systems and applications can follow in order to exchange log information easily. Based on this logging protocol, applications can effectively interpret each log attribute and understand the meaning of the log message.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
