A New Chapter in Log Management By @TrevParsons | @DevOpsSummit [#DevOps]

Organizations use logs for debugging during development, for monitoring and troubleshooting production systems

Unlimited Logging: A New Chapter in Log Management

It's no secret that log data is quickly becoming one of the most valuable sources of information within organizations. There are open source, on-premise, and cloud-based solutions to help you glean value from your logs in many different ways.

Largely, organizations use logs for debugging during development, for monitoring and troubleshooting production systems, for security audit trails and forensics, and (more and more) for different business use cases that transcend product management and marketing teams.

I love seeing logs used in untraditional ways, for example:

  • How Elon Musk used logs to call out a New York Times journalist after an unfavorable review of the performance of the (at the time) new Tesla Model S. Musk went back to the logs to outline ‘exactly' what happened during the test drive versus what was claimed, highlighting the value of keeping log-level evidence of your systems just in case you ever need it.
  • Monitoring your users in real time. With JavaScript logging you can log directly from the client's browser as users navigate your app, giving you insight into your customers' behavior (see the sketch after this list). In the past this kind of insight came from observing activity on the store floor, where you could see how customers congregated and determine which items were popular. Today, when your customers are always online, you can gather the same insights by logging that activity and viewing it in a ‘live streaming' mode. Because logs record this information, you can build a much more analytical understanding of customer trends and even individual customer behavior, enabling you to better position your offering and drive more value for your business. For a product manager, or a founder of a SaaS company like myself, it can be addictive to sit and watch your users in real time as they use new features and interact with your technology.
  • Logs used as simple data structures to build powerful distributed systems. Jay Kreps' article is a must-read for every developer interested in understanding the power of the humble log as a simple data structure for solving complex problems; a minimal sketch of the idea also appears below.
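
To make the client-side logging idea concrete, here is a minimal sketch in TypeScript of logging events from the browser as users navigate an app. The endpoint path `/logs/client`, the payload shape, and the helper names are assumptions for illustration, not any specific vendor's API.

```typescript
// Hypothetical client-side event logger: names, endpoint, and payload
// shape are illustrative, not a particular vendor API.
type ClientEvent = {
  event: string;                       // e.g. "feature_opened", "checkout_started"
  page: string;                        // current route or page path
  sessionId: string;                   // correlates events from one visit
  timestamp: string;                   // ISO 8601, set at the moment of logging
  details?: Record<string, unknown>;   // free-form context for the event
};

function getSessionId(): string {
  let id = sessionStorage.getItem("log-session-id");
  if (!id) {
    id = Math.random().toString(36).slice(2);
    sessionStorage.setItem("log-session-id", id);
  }
  return id;
}

function logClientEvent(event: string, details?: Record<string, unknown>): void {
  const payload: ClientEvent = {
    event,
    page: window.location.pathname,
    sessionId: getSessionId(),
    timestamp: new Date().toISOString(),
    details,
  };
  const body = JSON.stringify(payload);
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/logs/client", body);
  } else {
    fetch("/logs/client", { method: "POST", body, keepalive: true });
  }
}

// Usage: fire an event as the user interacts with a new feature.
logClientEvent("feature_opened", { feature: "live-dashboard" });
```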

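And to illustrate the "log as a data structure" point, here is a minimal, in-memory sketch of an append-only log in TypeScript, in the spirit of Kreps' article: every write gets an offset, and independent consumers replay the log from any offset to rebuild their own view. The class and type names are purely illustrative.

```typescript
// Minimal in-memory append-only log: producers append records, each record
// gets a monotonically increasing offset, and consumers replay from any
// offset to derive their own state.
type LogRecord<T> = { offset: number; value: T };

class AppendOnlyLog<T> {
  private entries: LogRecord<T>[] = [];

  // Append returns the offset, establishing a total order over all writes.
  append(value: T): number {
    const offset = this.entries.length;
    this.entries.push({ offset, value });
    return offset;
  }

  // Consumers read from a given offset onward; existing entries never change.
  readFrom(offset: number): LogRecord<T>[] {
    return this.entries.slice(offset);
  }
}

// Example: a consumer derives account balances from the ordered log.
const log = new AppendOnlyLog<{ user: string; delta: number }>();
log.append({ user: "alice", delta: 10 });
log.append({ user: "bob", delta: 5 });
log.append({ user: "alice", delta: -3 });

const balances = new Map<string, number>();
for (const { value } of log.readFrom(0)) {
  balances.set(value.user, (balances.get(value.user) ?? 0) + value.delta);
}
console.log(balances); // Map { 'alice' => 7, 'bob' => 5 }
```
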
Logs continue to be one of the fastest growing data sources at organizations today. For example, the largest database hosted on AWS contains machine-generated statistics on AWS itself.

Log Management's Ugly Secret
Organizations manage logs much like we managed email in the 1990s.

Organizations are constantly worried about data volumes, exceeding data limits, and incurring unpredictable (and costly) fees. You end up always looking over your shoulder, concerned about your next log management bill, which is almost always based on the gigabytes or terabytes of data you produce.

The constant murmur we hear from organizations is something along the lines of: ‘Look, don't get me wrong, we love our logs and would find it very difficult to operate our business without them... BUT it's bloody expensive!'

This cost largely comes in two flavors:

  • Costs associated with traditional vendors' per-GB, pay-for-everything pricing models can become prohibitive as log volumes increase.
  • Organizations frustrated with this model who turn to open source or roll-your-own solutions often end up in an even more expensive situation. They are left footing the bill for the infrastructure required to run their internal logging cluster, as well as the developers' salaries required to build and continually maintain the solution.

Enter Unlimited Logging: Logentries is to logs as Gmail was to email


At Logentries we're moving away from charging organizations per GB for everything they log; instead, we want you to send us ALL your log data and not worry about the cost.

Think about how you felt when Gmail came along and you never had to worry about running out of inbox space; it opened a new chapter in how email as a service was delivered, most certainly for the better. At Logentries we are doing the same for our users with our new Unlimited Logging: send us all your data and don't worry about it.

You do not necessarily get 2X the value from your logs when your log volumes double.

Value is more aligned with the type of analysis you can perform and the valuable trends you can extract from your data.

How Unlimited Logging Works
At Logentries we have a fundamentally different perspective:

Log management and analysis should be simple to use and real time:

  • You should not need to be a data scientist to work with and understand your logs.
  • You should not have to learn a complex search query language to navigate and get value from your logs.
  • Analysis should be performed in real time; you shouldn't have to wait 10 minutes to get an alert on an important event that occurred in your system (see the sketch after this list).
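
As a rough illustration of the difference between real-time evaluation and scheduled queries, the sketch below checks simple alert rules against each log line as it arrives. This is a generic toy in TypeScript, not the Logentries implementation; the rule shape and names are assumptions.

```typescript
// Illustrative sketch only: evaluate alert rules as each log line arrives,
// instead of running a scheduled search query over an index.
type AlertRule = {
  name: string;
  matches: (line: string) => boolean;
  notify: (line: string) => void;
};

const rules: AlertRule[] = [
  {
    name: "server-error",
    // Match HTTP 5xx status codes or explicit ERROR-level lines.
    matches: (line) => / 5\d\d /.test(line) || line.includes("ERROR"),
    notify: (line) => console.log(`[ALERT] server-error: ${line}`),
  },
];

// Called once per incoming log line, so the alert fires the moment the
// event is received rather than minutes later.
function onLogLine(line: string): void {
  for (const rule of rules) {
    if (rule.matches(line)) rule.notify(line);
  }
}

onLogLine("2015-03-01T12:00:00Z GET /checkout 500 122ms");
```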

At Logentries we have built a technology from more than a decade of research in distributed systems, with a unique pre-processing engine that analyzes your data up front, in real time, with built-in intelligence, so that you do not need to construct complex search queries. We do the hard work so you don't have to, and we aim to make your log data analysis quick, painless, and still super powerful.
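
The engine itself isn't described in detail here, but the general idea of analyzing data "up front" can be sketched as follows: parse each incoming line into structured fields at ingestion time, so later analysis is a field lookup rather than a hand-written search query. The field format and function names below are purely illustrative assumptions.

```typescript
// Toy illustration of pre-processing at ingestion: turn each raw line into
// structured fields once, up front, so later analysis is a simple lookup.
type ParsedEvent = { raw: string; fields: Map<string, string> };

function preprocess(line: string): ParsedEvent {
  const fields = new Map<string, string>();
  // Extract simple key=value pairs, e.g. "status=500 duration=122ms user=42".
  for (const match of line.matchAll(/(\w+)=([^\s]+)/g)) {
    fields.set(match[1], match[2]);
  }
  return { raw: line, fields };
}

// Later "analysis" is just a field check, no query language required.
const event = preprocess("2015-03-01T12:00:01Z status=500 duration=122ms user=42");
if (event.fields.get("status") === "500") {
  console.log("server error:", event.raw);
}
```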

Send as much data as you like:

Our unique pre-processing engine can be used to dynamically route your logs for real time analysis, or alternatively, into cloud storage for on-demand analytics. Generally, an organization will have log data that needs to be analyzed immediately, in real time. But organizations also tend to have a lot of data that MAY need to be analyzed at some point in the future - this is where on-demand analytics comes into play.
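
Below is a hypothetical sketch of that routing decision, assuming a simple per-log-set mapping between a real-time pipeline and cheaper object storage. The log set names and the `analyzeNow`/`archive` stubs are invented for illustration and are not the Logentries API.

```typescript
// Hypothetical routing step: per log set, decide whether events flow into
// real-time analysis or into cheap object storage for on-demand analysis.
type Destination = "realtime" | "cold-storage";

const routing = new Map<string, Destination>([
  ["production-app-errors", "realtime"],    // analyze and alert immediately
  ["debug-trace", "cold-storage"],          // keep it cheap, query on demand
  ["audit-trail", "cold-storage"],          // rarely read, must be retained
]);

function route(logSet: string, line: string): void {
  const dest = routing.get(logSet) ?? "cold-storage"; // default to the cheap path
  if (dest === "realtime") {
    analyzeNow(line);       // stream into live tailing, alerting, dashboards
  } else {
    archive(logSet, line);  // append to object storage for later ingestion
  }
}

// Stubs standing in for the real-time pipeline and the archival writer.
function analyzeNow(line: string): void { console.log("live:", line); }
function archive(logSet: string, line: string): void { /* write to storage */ }

route("production-app-errors", "2015-03-01T12:00:02Z status=500 user=42");
```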

Traditionally, logging providers have tried to apply a one-size-fits-all approach.

All your data gets indexed up front, so that they can charge you more per GB for all the log data indexed. At Logentries we let YOU decide what data you want to analyze right now, and what data you want to analyze at some point in the future, on demand.

We allow you to send as much data as you like to cloud storage, and we only charge you for what you actually ingest into the Logentries service for analysis. This provides a very flexible way for organizations to significantly reduce and cap their logging costs without having to worry about log ‘inflation' as their systems and business grow, as they invariably do.

Want to check out our unlimited logging? You can get more details here on how it works and how you can start to better manage and cut your logging costs.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years' experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
