What Is Syslog?

What is Syslog, why Syslog, and how does it work?

This post has been written by Dr. Miao Wang, a Post-Doctoral Researcher at the Performance Engineering Lab at University College Dublin.

This post is the first in a multi-part series on the many options for collecting and forwarding log data from different platforms, and the pros and cons of each. In this first post we focus on Syslog and provide some background on the protocol.

What is Syslog?

Syslog has been around for a number of decades and provides a protocol for transporting event messages between computer systems and software applications. The protocol uses a layered architecture, which allows any number of transport protocols to be used for transmission of Syslog messages. It also provides a message format that allows vendor-specific extensions to be supplied in a structured way. Syslog is now standardized by the IETF in RFC 5424 (since 2009), but it dates back to the 1980s and for many years served as the de facto standard for logging without any authoritative published specification.

Syslog has gained significant popularity and wide support across major operating systems and software frameworks, and is officially supported on almost all versions of Linux, Unix, and macOS. On Microsoft Windows, Syslog can also be supported through a variety of open source and commercial third-party libraries.

Best practices often promote storing log messages on a centralized server that can provide a correlated view of all the log data generated by different system components. Otherwise, analyzing each log file separately and then manually linking related log messages is extremely time-consuming. As a result, forwarding local log messages to a remote log analytics server/service via Syslog has been widely adopted as a standard industry logging solution.

How does it work?

The Syslog standard defines three different layers, namely the Syslog content, the Syslog application, and the Syslog transport. The content is the information contained in a Syslog event message. The application layer is essentially what generates, interprets, routes, and stores the message, while the Syslog transport layer transmits the message over the network.


Diagram 1 from the RFC 5424 Syslog Spec

According to the Syslog specification, there is no acknowledgement of message delivery, and although some transports may provide status information, the protocol is described as a pure simplex protocol. Sample deployment scenarios in the spec show arrangements where messages are created by an ‘originator’ and forwarded on to a ‘collector’ (generally a logging server or service used for centralized storage of log data). Note that ‘relays’ can also sit between the originator and the collector and can do some processing on the data before it is sent on (e.g. filtering out events or combining sources of event data).

Applications can be configured to send messages to multiple destinations, and individual Syslog components may run on the same host machine.
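
As an illustration, here is a minimal sketch using Python's standard logging.handlers.SysLogHandler in which one application acts as an originator and sends the same events both to the local Syslog daemon and to a remote collector. The collector hostname is a hypothetical placeholder, and the /dev/log socket path assumes a typical Linux setup.

    # Minimal sketch: one logger, two Syslog destinations (local daemon + remote collector).
    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)

    # Local Syslog daemon via the Unix domain socket (typical location on Linux).
    local_handler = SysLogHandler(address="/dev/log")

    # Remote collector over UDP port 514 (hypothetical hostname).
    remote_handler = SysLogHandler(address=("logs.example.com", 514))

    for handler in (local_handler, remote_handler):
        handler.setFormatter(logging.Formatter("myapp: %(message)s"))
        logger.addHandler(handler)

    logger.warning("su root failed on /dev/pts/7")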

The Syslog Format

Sharing log data between different applications requires a standard definition and format for the log message, such that both parties can interpret and understand each other's information. To provide this, RFC 5424 defines the Syslog message format and rules for each data element within each message.

A Syslog message has the following format: a header, followed by structured-data (SD), followed by the message itself.

The header of the Syslog message contains "priority", "version", "timestamp", "hostname", "application", "process id", and "message id". It is followed by structured-data, which contains data blocks in the key="value" format enclosed in square brackets "[]", e.g. [exampleSDID@32473 utilization="high" os="linux"] [examplePriority@32473 class="medium"]. In the example below, the SD is simply represented as "-", which is a null value (the nilvalue specified by RFC 5424). After the SD value, BOM represents the UTF-8 byte order mark, and "su root failed on /dev/pts/7" is the detailed log message, which should be encoded in UTF-8. (For more details on the data elements of the Syslog protocol, please refer to: http://tools.ietf.org/html/rfc5424)
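
To make the format concrete, here is a rough sketch in Python that assembles an RFC 5424-style message by hand. The hostname, app name, and structured-data values are illustrative assumptions rather than anything mandated by the spec; the SD-ID follows the RFC's name@private-enterprise-number convention.

    # Assemble an RFC 5424 message: HEADER, then STRUCTURED-DATA, then MSG.
    # All values below are illustrative placeholders.
    from datetime import datetime, timezone

    facility = 4                               # security/authorization messages
    severity = 2                               # critical
    pri = facility * 8 + severity              # PRI = facility * 8 + severity -> 34

    # RFC 5424 timestamps are ISO 8601; "Z" denotes UTC.
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"

    # HEADER: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID
    header = f"<{pri}>1 {timestamp} mymachine.example.com su - ID47"

    # One structured-data element; a plain "-" would mean no structured data.
    structured_data = '[exampleSDID@32473 utilization="high" os="linux"]'

    # The BOM signals that the free-form MSG part is UTF-8 encoded.
    msg = "\ufeff" + "su root failed on /dev/pts/7"

    print(f"{header} {structured_data} {msg}")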


A Sample Syslog Message with Format Broken Out

Why Syslog?

The complexity of modern applications and systems is ever increasing, and to understand the behavior of complex systems, administrators, developers, Ops teams, and others often need to collect and monitor all relevant information produced by their applications. Such information often needs to be analyzed and correlated to determine how their systems are behaving. Consequently, administrators can apply data analytics techniques either to diagnose root causes once problems occur or to gain insight into current system behavior based on statistical analysis.

Frequently, logs have served as a primary and reliable data source for this mission, for many reasons, some of which I've listed here:

  • Logs can capture transient information that allows administrators to restore the system to a proper state after a failure. E.g. when a banking system fails, transactions lost from main memory are recorded in the logs and can be recovered.
  • Logs can contain a rich diversity of information produced by individual applications, allowing administrators/developers/ops teams to understand system behavior from many angles, such as current system statistics, trend predictions, and troubleshooting.
  • Logs are written by the underlying application to hard disks or external services, so reading these log files has no direct performance impact on the monitored system. Therefore, in a production environment administrators can safely monitor running applications via their logs without worrying about impacting performance.

However, a key aspect of log analysis is understanding the format of the arriving log data, especially in a heterogeneous environment where different applications may be developed using different log formats and network protocols to send their log data. Unless this is well defined, it is quite difficult to interpret log messages sent by an unknown application. To solve this issue, Syslog defines a logging standard that different systems and applications can follow in order to easily exchange log information. Based on the logging protocol, applications can effectively interpret each log attribute and understand the meaning of the log message.
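
As a rough illustration of that last point, the sketch below splits an RFC 5424-style line into its named header fields with a simplified regular expression. It assumes at most one structured-data element (or the "-" nilvalue) and is not a complete parser, but it shows how a receiver can interpret each attribute without knowing anything about the sending application.

    # Simplified RFC 5424 field extraction (not a full parser).
    import re

    LINE = ('<34>1 2014-08-29T08:55:34.000Z mymachine.example.com su - ID47 '
            '[exampleSDID@32473 utilization="high" os="linux"] su root failed on /dev/pts/7')

    pattern = re.compile(
        r'<(?P<pri>\d{1,3})>(?P<version>\d) (?P<timestamp>\S+) (?P<hostname>\S+) '
        r'(?P<app>\S+) (?P<procid>\S+) (?P<msgid>\S+) (?P<sd>-|\[.*?\]) (?P<msg>.*)'
    )

    match = pattern.match(LINE)
    if match:
        fields = match.groupdict()
        # PRI encodes both values: facility = PRI // 8, severity = PRI % 8.
        fields["facility"], fields["severity"] = divmod(int(fields.pop("pri")), 8)
        print(fields)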
