Book Excerpt: Systems Performance: Enterprise and the Cloud | Part 1

CPUs drive all software and are often the first target for systems performance analysis

"This excerpt is from the book, "Systems Performance: Enterprise and the Cloud", authored by Brendan Gregg, published by Prentice Hall Professional, Oct. 2013, ISBN 9780133390094, Copyright © 2014 Pearson Education, Inc. For more info, please visit the publisher site:

CPUs drive all software and are often the first target for systems performance analysis. Modern systems typically have many CPUs, which are shared among all running software by the kernel scheduler. When there is more demand for CPU resources than there are resources available, process threads (or tasks) will queue, waiting their turn. Waiting can add significant latency during the runtime of applications, degrading performance.

The usage of the CPUs can be examined in detail to look for performance improvements, including eliminating unnecessary work. At a high level, CPU usage by process, thread, or task can be examined. At a lower level, the code path within applications and the kernel can be profiled and studied. At the lowest level, CPU instruction execution and cycle behavior can be studied.
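At the high level, for example, a process's own CPU consumption can be read straight from the kernel's accounting. The following minimal C sketch (an illustration, not from the book) uses the POSIX getrusage(2) call to report user and system CPU time for the current process; the spin loop is only a stand-in workload:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Stand-in workload: burn some user-mode CPU time. */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            x += i;

        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) != 0) {
            perror("getrusage");
            return 1;
        }

        /* ru_utime: CPU time in user mode; ru_stime: in kernel mode. */
        printf("user   %ld.%06ld s\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        printf("system %ld.%06ld s\n",
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
        return 0;
    }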

This chapter consists of five parts:

  • Background introduces CPU-related terminology, basic models of CPUs, and key CPU performance concepts.
  • Architecture introduces processor and kernel scheduler architecture.
  • Methodology describes performance analysis methodologies, both observational and experimental.
  • Analysis describes CPU performance analysis tools on Linux- and Solaris-based systems, including profiling, tracing, and visualizations.
  • Tuning includes examples of tunable parameters.

The first three sections provide the basis for CPU analysis, and the last two show its practical application to Linux- and Solaris-based systems.

The effects of memory I/O on CPU performance are covered, including CPU cycles stalled on memory and the performance of CPU caches. Chapter 7, Memory, continues the discussion of memory I/O, including MMU, NUMA/UMA, system interconnects, and memory busses.

Terminology
For reference, CPU-related terminology used in this chapter includes the following:

  • Processor: the physical chip that plugs into a socket on the system or processor board and contains one or more CPUs implemented as cores or hardware threads.
  • Core: an independent CPU instance on a multicore processor. The use of cores is a way to scale processors, called chip-level multiprocessing (CMP).
  • Hardware thread: a CPU architecture that supports executing multiple threads in parallel on a single core (including Intel's Hyper-Threading Technology), where each thread is an independent CPU instance. One name for this scaling approach is multithreading.
  • CPU instruction: a single CPU operation, from its instruction set. There are instructions for arithmetic operations, memory I/O, and control logic.
  • Logical CPU: also called a virtual processor, an operating system CPU instance (a schedulable CPU entity). This may be implemented by the processor as a hardware thread (in which case it may also be called a virtual core), a core, or a single-core processor. (Counting the logical CPUs a system presents is sketched in code after this list.)
  • Scheduler: the kernel subsystem that assigns threads to run on CPUs.
  • Run queue: a queue of runnable threads that are waiting to be serviced by CPUs. For Solaris, it is often called a dispatcher queue.
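
To make the logical CPU definition concrete, here is a minimal C sketch that counts the CPU instances the operating system presents. The _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN names are common sysconf(3) extensions (available on Linux/glibc and Solaris) rather than strict POSIX:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Logical CPUs configured vs. currently online. These sysconf
         * names are common extensions, not strict POSIX. */
        long conf   = sysconf(_SC_NPROCESSORS_CONF);
        long online = sysconf(_SC_NPROCESSORS_ONLN);

        printf("logical CPUs configured: %ld\n", conf);
        printf("logical CPUs online:     %ld\n", online);
        return 0;
    }

On the processor shown later in Figure 1, both values would typically report 8: one logical CPU per hardware thread.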

Other terms are introduced throughout this chapter. The Glossary includes basic terminology for reference, including CPU, CPU cycle, and stack. Also see the terminology sections in Chapters 2 and 3.

Models
The following simple models illustrate some basic principles of CPUs and CPU performance. Section 6.4, Architecture, digs much deeper and includes implementation-specific details.

CPU Architecture
Figure 1 shows an example CPU architecture, for a single processor with four cores and eight hardware threads in total. The physical architecture is pictured, along with how it is seen by the operating system.

Figure 1: CPU architecture

Each hardware thread is addressable as a logical CPU, so this processor appears as eight CPUs. The operating system may have some additional knowledge of topology, such as which CPUs are on the same core, to improve its scheduling decisions.

CPU Memory Caches
Processors provide various hardware caches for improving memory I/O performance. Figure 2 shows the relationship of cache sizes, which become smaller and faster (a trade-off) the closer they are to the CPU.

The caches that are present, and whether they are on the processor (integrated) or external to the processor, depend on the processor type. Earlier processors provided fewer levels of integrated cache.

Figure 2: CPU cache sizes
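
Where supported, the cache sizes pictured in Figure 2 can be queried programmatically. A minimal sketch, assuming Linux with glibc (these _SC_LEVEL* names are glibc extensions to sysconf(3) and may report 0 or -1 elsewhere):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* glibc-specific sysconf(3) extensions; unsupported cache
         * levels may report 0 or -1. */
        printf("L1 data: %ld bytes\n", sysconf(_SC_LEVEL1_DCACHE_SIZE));
        printf("L2:      %ld bytes\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
        printf("L3:      %ld bytes\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
        return 0;
    }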

CPU Run Queues
Figure 3 shows a CPU run queue, which is managed by the kernel scheduler.

Figure 3: CPU run queue

The thread states shown in the figure, ready to run and on-CPU, are covered in Figure 3.7 in Chapter 3, Operating Systems.

The number of software threads that are queued and ready to run is an important performance metric indicating CPU saturation. In this figure (at this instant) there are four, with an additional thread running on-CPU. The time spent waiting on a CPU run queue is sometimes called run-queue latency or dispatcher-queue latency. In this book, the term scheduler latency is used instead, as it is appropriate for all dispatcher types, including those that do not use queues (see the discussion of CFS in Section 6.4.2, Software).
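
On Linux, scheduler latency can be observed per thread without special tooling: /proc/<pid>/schedstat exposes time on-CPU, time waiting on a run queue, and timeslice counts (when scheduler statistics are enabled in the kernel). A minimal C sketch reading the caller's own numbers:

    #include <stdio.h>

    int main(void)
    {
        /* Linux-specific: three fields, in order: nanoseconds spent
         * on-CPU, nanoseconds spent waiting on a run queue, and the
         * number of timeslices run. */
        unsigned long long oncpu, waited, slices;
        FILE *f = fopen("/proc/self/schedstat", "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%llu %llu %llu", &oncpu, &waited, &slices) != 3) {
            fprintf(stderr, "unexpected format\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("on-CPU:            %llu ns\n", oncpu);
        printf("run-queue latency: %llu ns\n", waited);
        printf("timeslices:        %llu\n", slices);
        return 0;
    }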

For multiprocessor systems, the kernel typically provides a run queue for each CPU and aims to keep threads on the same run queue. This means that threads are more likely to keep running on the same CPUs, where the CPU caches have cached their data. (These caches are described as having cache warmth, and the approach to favor CPUs is called CPU affinity.) On NUMA systems, memory locality may also be improved, which also improves performance (this is described in Chapter 7, Memory).

It also avoids the cost of thread synchronization (mutex locks) for queue operations, which would hurt scalability if the run queue was global and shared among all CPUs.
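
When the default behavior isn't enough, CPU affinity can also be requested explicitly. A minimal sketch using the Linux-specific sched_setaffinity(2) call to pin the calling thread to CPU 0 (Solaris offers processor_bind(2) for the same purpose); the kernel already favors cache warmth on its own, so explicit pinning is for cases where placement must be controlled:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pin the calling thread to CPU 0 (Linux-specific API). */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = self */
            perror("sched_setaffinity");
            return 1;
        }
        printf("now running on CPU %d\n", sched_getcpu());
        return 0;
    }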

Concepts
The following are a selection of important concepts regarding CPU performance, beginning with a summary of processor internals: the CPU clock rate and how instructions are executed. This is background for later performance analysis, particularly for understanding the cycles-per-instruction (CPI) metric.

Clock Rate
The clock is a digital signal that drives all processor logic. Each CPU instruction may take one or more cycles of the clock (called CPU cycles) to execute. CPUs execute at a particular clock rate; for example, a 5 GHz CPU performs 5 billion clock cycles per second.

Some processors are able to vary their clock rate, increasing it to improve performance or decreasing it to reduce power consumption. The rate may be varied on request by the operating system, or dynamically by the processor itself. The kernel idle thread, for example, can request the CPU to throttle down to save power.

Clock rate is often marketed as the primary feature of the processor, but this can be a little misleading. Even if the CPU in your system appears to be fully utilized (a bottleneck), a faster clock rate may not speed up performance; it depends on what those fast CPU cycles are actually doing. If they are mostly stall cycles while waiting on memory access, executing them more quickly doesn't actually increase the CPU instruction rate or workload throughput.

Instruction
CPUs execute instructions chosen from their instruction set. An instruction includes the following steps, each processed by a component of the CPU called a functional unit:

  1. Instruction fetch
  2. Instruction decode
  3. Execute
  4. Memory access
  5. Register write-back

The last two steps are optional, depending on the instruction. Many instructions operate on registers only and do not require the memory access step.

Each of these steps takes at least a single clock cycle to be executed. Memory access is often the slowest, as it may take dozens of clock cycles to read or write to main memory, during which instruction execution has stalled (and these cycles while stalled are called stall cycles). This is why CPU caching is important, as described in Section 6.4: it can dramatically reduce the number of cycles needed for memory access.
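
The cost of stall cycles can be made visible with a simple experiment. The following C sketch (an illustration, not from the book) sums the same array twice: once sequentially, where the hardware prefetcher hides most memory latency, and once with a large stride, where most loads miss the caches and the CPU stalls on main memory. Timing uses POSIX clock_gettime(3), and the sizes assume the array is much larger than the last-level cache:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64L * 1024 * 1024)   /* 64M ints = 256 MB, larger than cache */

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        int *a = malloc(N * sizeof(int));
        if (a == NULL)
            return 1;
        for (long i = 0; i < N; i++)
            a[i] = 1;

        /* Sequential pass: prefetch-friendly, few stall cycles. */
        long sum = 0;
        double t0 = now();
        for (long i = 0; i < N; i++)
            sum += a[i];
        double t1 = now();

        /* Strided pass: touches the same N elements, but 4 KB apart,
         * defeating the caches; most cycles are stalls on memory. */
        long stride = 1024;
        double t2 = now();
        for (long s = 0; s < stride; s++)
            for (long i = s; i < N; i += stride)
                sum += a[i];
        double t3 = now();

        printf("sequential: %.3f s  strided: %.3f s  (sum=%ld)\n",
               t1 - t0, t3 - t2, sum);
        free(a);
        return 0;
    }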

Instruction Pipeline
The instruction pipeline is a CPU architecture that can execute multiple instructions in parallel, by executing different components of different instructions at the same time. It is similar to a factory assembly line, where stages of production can be executed in parallel, increasing throughput.

Consider the instruction steps previously listed. If each were to take a single clock cycle, it would take five cycles to complete the instruction. At each step of this instruction, only one functional unit is active and four are idle. By use of pipelining, multiple functional units can be active at the same time, processing different instructions in the pipeline. Ideally, the processor can then complete one instruction with every clock cycle.

Instruction Width
But we can go faster still. Multiple functional units of the same type can be included, so that even more instructions can make forward progress with each clock cycle. This CPU architecture is called superscalar and is typically used with pipelining to achieve a high instruction throughput.

The instruction width describes the target number of instructions to process in parallel. Modern processors are 3-wide or 4-wide, meaning they can complete up to three or four instructions per cycle. How this works depends on the processor, as there may be different numbers of functional units for each stage.
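
Pipelining and superscalar width are visible from software when instructions are independent. In this illustrative C sketch, the first loop chains every floating-point add onto the previous result, so the adds serialize on the adder's latency; the second spreads the work across four independent accumulators that the pipeline can keep in flight concurrently. Compiled without -ffast-math, the compiler preserves floating-point ordering, so on a superscalar CPU the second version typically runs severalfold faster:

    #include <stdio.h>
    #include <time.h>

    #define N 200000000L

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        /* Dependent chain: each add must wait for the previous result. */
        double t0 = now();
        double s = 0.0;
        for (long i = 0; i < N; i++)
            s += 1.0;
        double t1 = now();

        /* Four independent chains: a pipelined, superscalar CPU can
         * keep several of these adds in flight at once. */
        double t2 = now();
        double a0 = 0, a1 = 0, a2 = 0, a3 = 0;
        for (long i = 0; i < N; i += 4) {
            a0 += 1.0; a1 += 1.0; a2 += 1.0; a3 += 1.0;
        }
        double t3 = now();

        printf("1 accumulator:  %.3f s (s=%.0f)\n", t1 - t0, s);
        printf("4 accumulators: %.3f s (s=%.0f)\n",
               t3 - t2, a0 + a1 + a2 + a3);
        return 0;
    }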

CPI, IPC
Cycles per instruction (CPI) is an important high-level metric for describing where a CPU is spending its clock cycles and for understanding the nature of CPU utilization. This metric may also be expressed as instructions per cycle (IPC), the inverse of CPI.

A high CPI indicates that CPUs are often stalled, typically for memory access. A low CPI indicates that CPUs are often not stalled and have a high instruction throughput. These metrics suggest where performance tuning efforts may be best spent.

Memory-intensive workloads, for example, may be improved by installing faster memory (DRAM), improving memory locality (software configuration), or reducing the amount of memory I/O. Installing CPUs with a higher clock rate may not improve performance to the degree expected, as the CPUs may need to wait the same amount of time for memory I/O to complete. Put differently, a faster CPU may mean more stall cycles but the same rate of completed instructions.

The actual values for high or low CPI are dependent on the processor and processor features and can be determined experimentally by running known workloads. As an example, you may find that high-CPI workloads run with a CPI at ten or higher, and low-CPI workloads run with a CPI at less than one (which is possible due to instruction pipelining and width, described earlier).
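
On Linux, the cycle and instruction counts needed for CPI/IPC can be read from software via perf_event_open(2), the same interface used by perf(1). A minimal C sketch, assuming hardware counters are available and /proc/sys/kernel/perf_event_paranoid permits access; the workload measured here is a register-bound loop, so expect a low CPI:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    /* Open one hardware counter for this process on any CPU; glibc
     * provides no wrapper, so the raw syscall is used. */
    static int perf_open(unsigned int type, unsigned long long config)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = config;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;
        return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    }

    int main(void)
    {
        int cyc = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CPU_CYCLES);
        int ins = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_INSTRUCTIONS);
        if (cyc < 0 || ins < 0) {
            perror("perf_event_open");
            return 1;
        }

        ioctl(cyc, PERF_EVENT_IOC_RESET, 0);
        ioctl(ins, PERF_EVENT_IOC_RESET, 0);
        ioctl(cyc, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(ins, PERF_EVENT_IOC_ENABLE, 0);

        /* Workload under measurement: register-bound, few stalls. */
        volatile long x = 0;
        for (long i = 0; i < 100000000L; i++)
            x += i;

        ioctl(cyc, PERF_EVENT_IOC_DISABLE, 0);
        ioctl(ins, PERF_EVENT_IOC_DISABLE, 0);

        long long cycles = 0, instrs = 0;
        read(cyc, &cycles, sizeof(cycles));
        read(ins, &instrs, sizeof(instrs));
        close(cyc);
        close(ins);

        printf("cycles=%lld instructions=%lld CPI=%.2f IPC=%.2f\n",
               cycles, instrs, (double)cycles / instrs,
               (double)instrs / cycles);
        return 0;
    }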

It should be noted that CPI shows the efficiency of instruction processing, but not of the instructions themselves. Consider a software change that added an inefficient software loop, which operates mostly on CPU registers (no stall cycles): such a change may result in a lower overall CPI, but higher CPU usage and utilization.

Utilization
CPU utilization is measured by the time a CPU instance is busy performing work during an interval, expressed as a percentage. It can be measured as the time a CPU is not running the kernel idle thread but is instead running user-level application threads or other kernel threads, or processing interrupts.

High CPU utilization may not necessarily be a problem, but rather a sign that the system is doing work. Some people also consider this an ROI indicator: a highly utilized system is considered to have good ROI, whereas an idle system is considered wasted. Unlike with other resource types (disks), performance does not degrade steeply under high utilization, as the kernel supports priorities, preemption, and time sharing. These together allow the kernel to understand what has higher priority, and to ensure that it runs first.

The measure of CPU utilization spans all clock cycles for eligible activities, including memory stall cycles. It may seem a little counterintuitive, but a CPU may be highly utilized because it is often stalled waiting for memory I/O, not just executing instructions, as described in the previous section.

CPU utilization is often split into separate kernel- and user-time metrics.
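
On Linux, these metrics come from the aggregate cpu line of /proc/stat, which counts clock ticks per state. A minimal C sketch (treating iowait as idle, as many tools do) that samples it twice, one second apart, and reports user, system, and total busy percentages:

    #include <stdio.h>
    #include <unistd.h>

    /* Read the first seven fields of the aggregate "cpu" line of
     * /proc/stat: user, nice, system, idle, iowait, irq, softirq
     * (in clock ticks). */
    static int read_cpu(unsigned long long v[7])
    {
        FILE *f = fopen("/proc/stat", "r");
        if (f == NULL)
            return -1;
        int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
                       &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6]);
        fclose(f);
        return n == 7 ? 0 : -1;
    }

    int main(void)
    {
        unsigned long long a[7], b[7], total = 0;
        if (read_cpu(a) != 0)
            return 1;
        sleep(1);
        if (read_cpu(b) != 0)
            return 1;

        for (int i = 0; i < 7; i++)
            total += b[i] - a[i];
        unsigned long long idle = (b[3] - a[3]) + (b[4] - a[4]);

        printf("user: %5.1f%%\n", 100.0 * (b[0] - a[0] + b[1] - a[1]) / total);
        printf("sys:  %5.1f%%\n",
               100.0 * (b[2] - a[2] + b[5] - a[5] + b[6] - a[6]) / total);
        printf("busy: %5.1f%%\n", 100.0 * (total - idle) / total);
        return 0;
    }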

More Stories By Brendan Gregg

Brendan Gregg, Lead Performance Engineer at Joyent, analyzes performance and scalability throughout the software stack. As Performance Lead and Kernel Engineer at Sun Microsystems (and later Oracle), his work included developing the ZFS L2ARC, a pioneering file system technology for improving performance using flash memory. He has invented and developed many performance tools, including some that ship with Mac OS X and Oracle® Solaris™ 11. His recent work has included performance visualizations for Linux and illumos kernel analysis. He is coauthor of DTrace (Prentice Hall, 2011) and Solaris™ Performance and Tools (Prentice Hall, 2007).
