
What Is the Docker Stats API? By @TrevParsons | @DevOpsSummit [#DevOps]

Containerization and microservices are changing how development and operations teams design, build and monitor systems

Containerization and microservices are changing how development and operations teams design, build and monitor systems. Containerized environments regularly result in systems with large numbers of dynamic, ephemeral instances that autoscale to meet demands on system load. In fact, it's not uncommon to see thousands of container instances where once there were hundreds of (cloud) server instances, and before that tens of physical servers.

From a monitoring perspective, this means it's even more difficult to understand what is happening across your environment without centralized monitoring and logging. After all, you cannot manage what you do not monitor.


Enter the Docker stats API. Introduced in the Docker 1.5 release, it is an API endpoint and CLI command that stream live resource usage information (such as CPU, memory, network I/O and block I/O) for your containers. The API endpoint can be used to build tools that feed live resource information for your containers into your existing monitoring solutions, or to build live dashboards directly against the API.
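As a rough sketch of how you might talk to this endpoint yourself (not official client code), the snippet below streams stats frames over the Docker daemon's Unix socket using only the Python standard library. The socket path assumes a default Linux install, and the container name in the usage comment is hypothetical:

```python
import http.client
import json
import socket

DOCKER_SOCKET = "/var/run/docker.sock"  # default daemon socket on Linux

def stats_path(container_id):
    """Build the stats endpoint path for a container ID or name."""
    return "/containers/%s/stats" % container_id

class DockerUnixConnection(http.client.HTTPConnection):
    """An HTTPConnection that talks to the Docker daemon over its Unix socket."""
    def __init__(self, socket_path=DOCKER_SOCKET):
        super().__init__("localhost")  # host is a placeholder; we connect via the socket
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def stream_stats(container_id, socket_path=DOCKER_SOCKET):
    """Yield decoded stats frames for a container, one JSON object per line."""
    conn = DockerUnixConnection(socket_path)
    conn.request("GET", stats_path(container_id))
    resp = conn.getresponse()
    for line in resp:  # the endpoint streams one JSON document per line
        line = line.strip()
        if line:
            yield json.loads(line)

# Example (requires a running Docker daemon and an existing container):
# for frame in stream_stats("my-container"):
#     print(frame["read"], frame["memory_stats"]["usage"])
```

Because the endpoint streams continuously, each iteration of the generator blocks until the daemon emits the next frame (roughly once per second).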

In fact, we have built a new Docker Logentries container that plugs into this endpoint, allowing you to stream this data into your Logentries account along with any Docker logs from your containers. If you want to build your own tool, you can check out the source code on this GitHub repo - it might give you some ideas. The container is also available on the Logentries Docker Hub repo. Our friends at nearForm - leaders in the Node space, the people behind Europe's biggest (and most fun!) Node conference and avid Docker contributors (e.g., the creators of jsChan) - helped us develop this container (thanks Peter and Matteo!).

The Docker Logentries container works by mounting the Docker socket to consume both a stream of log data from your Docker containers and a stream of stats from the Docker stats API. The stats API forwards the following type of info per container:

{
   "read" : "2015-01-08T22:57:31.547920715Z",
   "network" : {
      "rx_dropped" : 0,
      "rx_bytes" : 648,
      "rx_errors" : 0,
      "tx_packets" : 8,
      "tx_dropped" : 0,
      "rx_packets" : 8,
      "tx_errors" : 0,
      "tx_bytes" : 648
   },
   "memory_stats" : {
      "stats" : {
         "total_pgmajfault" : 0,
         "cache" : 0,
         "mapped_file" : 0,
         "total_inactive_file" : 0,
         "pgpgout" : 414,
         "rss" : 6537216,
         "total_mapped_file" : 0,
         "writeback" : 0,
         "unevictable" : 0,
         "pgpgin" : 477,
         "total_unevictable" : 0,
         "pgmajfault" : 0,
         "total_rss" : 6537216,
         "total_rss_huge" : 6291456,
         "total_writeback" : 0,
         "total_inactive_anon" : 0,
         "rss_huge" : 6291456,
         "hierarchical_memory_limit" : 67108864,
         "total_pgfault" : 964,
         "total_active_file" : 0,
         "active_anon" : 6537216,
         "total_active_anon" : 6537216,
         "total_pgpgout" : 414,
         "total_cache" : 0,
         "inactive_anon" : 0,
         "active_file" : 0,
         "pgfault" : 964,
         "inactive_file" : 0,
         "total_pgpgin" : 477
      },
      "max_usage" : 6651904,
      "usage" : 6537216,
      "failcnt" : 0,
      "limit" : 67108864
   "blkio_stats" : {},
   "cpu_stats" : {
      "cpu_usage" : {
         "percpu_usage" : [
            16970827,
            1839451,
            7107380,
            10571290
         ],
         "usage_in_usermode" : 10000000,
         "total_usage" : 36488948,
         "usage_in_kernelmode" : 20000000
      },
      "system_cpu_usage" : 20091722000000000,
      "throttling_data" : {}
   }
}
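The counters in these frames are cumulative, so turning them into the familiar percentages takes a small calculation. As an illustrative sketch (the function names are our own, not part of the API), here is how you might derive memory and CPU percentages from frames like the one above, using the same approach the `docker stats` CLI takes - the container's CPU delta between two consecutive frames, divided by the system's CPU delta, scaled by core count:

```python
def memory_percent(frame):
    """Memory usage as a percentage of the container's memory limit."""
    mem = frame["memory_stats"]
    return 100.0 * mem["usage"] / mem["limit"]

def cpu_percent(prev, cur):
    """CPU usage between two consecutive stats frames: the container's
    CPU time delta over the whole system's CPU time delta, scaled by
    the number of cores (so a fully busy 4-core host can read 400%)."""
    cpu_delta = (cur["cpu_stats"]["cpu_usage"]["total_usage"]
                 - prev["cpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (cur["cpu_stats"]["system_cpu_usage"]
                    - prev["cpu_stats"]["system_cpu_usage"])
    if system_delta <= 0:
        return 0.0  # no elapsed system time between frames
    cores = len(cur["cpu_stats"]["cpu_usage"]["percpu_usage"])
    return 100.0 * cpu_delta / system_delta * cores
```

For the sample frame above, `memory_percent` would report roughly 9.7% (6537216 of a 67108864-byte limit).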

You can see that this provides a nice level of visibility across your key server resources. Ultimately, you can now build a specialised monitoring/logging container (as we have) to collect both your container logs and metrics and make them available in a centralized location. Collecting BOTH log data and stats in a single dashboard gives you a multi-dimensional view of your system, which is particularly useful for troubleshooting and monitoring production Docker environments - something that has been pretty difficult to achieve up to this point.

To visualise this info, you can also take advantage of the Logentries Docker Community Pack, which provides out-of-the-box dashboards, alerts and tags for your stats API data - for example, dashboards and alerts on per-container CPU, memory and network trends.


For details on setting up the new Docker Logentries container, see our setup guide. And as always, let us know what you think.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
