Real User #Monitoring | @DevOpsSummit #APM #DevOps #ContinuousDelivery

Enterprises want to understand how analyzing performance can positively impact business metrics

With online viewership and sales growing rapidly, enterprises want to understand how analyzing performance can positively impact business metrics. Deeper insight into the user experience is needed to understand why conversions are dropping and bounce rates are increasing - or, preferably, to understand what has been helping these metrics improve.

The digital performance management industry has evolved as application performance management companies have broadened their scope beyond synthetic testing - which simulates users loading specific pages at regular intervals - to include web and mobile testing and real user monitoring (RUM). As synthetic monitoring gained popularity, performance engineers realized that the variations introduced by real end users were not being captured. This led to the introduction of RUM: the process of capturing, analyzing, and reporting data from a real end user's interaction with a website. RUM has been around for more than a decade, but the technology is still in its infancy.

Five factors contributing to the shift towards RUM to complement synthetic testing

Ability to measure third-party resources
Websites are complex, with many different resources affecting performance. While there is no way to reliably detect the number of third-party scripts, the number of third-party components is growing: the average web page now requests over 30% of its resources from third-party domains, as shown in Figure 1. These components serve multiple purposes, including tracking users, ad insertion, and A/B testing. Understanding the impact these components have on the end user experience is critical.
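
For a rough sense of this split on your own pages, the Resource Timing API (discussed later) can be queried in the browser. A minimal sketch in TypeScript, assuming a simple hostname comparison (which will miscount first-party resources served from a separate CDN domain):

    // Split the page's resources into first-party and third-party by hostname.
    const firstPartyHost = location.hostname;
    const resources = performance.getEntriesByType(
      "resource"
    ) as PerformanceResourceTiming[];

    // Naive check: anything not on the page's own hostname is "third-party".
    const thirdParty = resources.filter(
      (r) => new URL(r.name).hostname !== firstPartyHost
    );

    console.log(
      `${thirdParty.length} of ${resources.length} resources are third-party`
    );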

Figure 1 - Growth in third-party vs. first-party resources per page, 2011-2015

Mobile matters
With more users accessing applications primarily on mobile devices, understanding mobile performance is increasingly important. Metrics must be captured from desktop and mobile devices alike. Just because an application performs well on a desktop does not mean it will perform well on a mobile device. If you have or want to have mobile customers, ensure you are able to capture metrics from them. Mobile presents unique challenges, such as congestion and latency, that can have significant impacts on page performance.

With a growing mobile user base, RUM data is frequently correlated with bandwidth measured in the last mile to determine whether a performance impact is the result of unpredictable last-mile conditions. This need is especially visible in many major Asian economies, where a large proportion of consumers' primary means of internet access is a mobile phone. Major eCommerce players in Asia report that over 65% of transactions are made from mobile devices. With such a big customer base, monitoring performance on the mobile web and understanding the carrier's influence on performance is critical to doing business. Some businesses have therefore built the ability to profile the level of user experience to expect from each carrier.

Validate performance for specific users or geographies
Synthetic measurements may not be available from all geographies. To understand why a service level agreement in a specific region is not being met, the only way to capture information may be through real users in that geographic location. Real user measurements also enable customers to validate whether issues reported by synthetic testing are widespread across the user base, localized to specific geographies, or artifacts of the synthetic test tools themselves.

Continuous Delivery
As more organizations move to a continuous delivery model, synthetic tests may need to be frequently re-scripted. As the time to deliver and release content decreases, organizations are looking at ways to quickly gather performance data. Some have decided the fastest way to gather performance metrics on a just-released page or feature is through data from real users.

Native applications
As organizations evolve from mobile websites to native apps, the need to gather metrics from these applications becomes increasingly important.

What features should you look for in a RUM solution?
Knowing that you need a RUM solution is the first step. The second step is identifying what features are required to meet your business needs. With a variety of solutions available in the market, identifying the must-have and nice-to-have features is important to finding the best fit. Here are a few features you should consider.

Real-time and actionable data
Most RUM tools display insights in a dashboard in near real time. This information can be coupled with near-real-time tracking information from business analytics tools like Google Analytics. Performance data from RUM solutions should be cross-checked against metrics such as site visits, conversions, user location, and device/browser insights. Many website operators continuously monitor changes in these business metrics, since such changes can indicate performance problems; cross-checking also helps them weed out false positives and isolated performance issues.

User experience timings
Trends in performance optimization testing have moved away from metrics like time to first byte (TTFB) and page load toward measurements that more accurately reflect the user experience, such as start render and speed index. A user does not necessarily care when the content at the bottom of the page has loaded; what matters is when critical resources have loaded and the page appears usable. Ensure the metrics you are gathering accurately reflect what you are attempting to measure and optimize.
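
In the field, start render is commonly approximated via paint events. A minimal sketch, assuming a browser that implements the Paint Timing API (support varies):

    // "first-contentful-paint" approximates when the user first sees content,
    // which tracks the experience more closely than total page load time.
    for (const entry of performance.getEntriesByType("paint")) {
      // entry.name is "first-paint" or "first-contentful-paint"
      console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
    }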

Granular information
While page-level metrics are a good start, they don't reveal precisely which resources are causing content to load slowly, nor the relevance of each metric. Combining resource timing on specific elements with where each resource sits (above or below "the fold") can help organizations filter out the noise and collect actionable information. Intersection Observer can help you identify which resources load above or below the fold and prioritize remediation, as the sketch below illustrates.
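
A minimal sketch, assuming IntersectionObserver support: it flags the images that intersect the initial viewport so their resource timings can be prioritized (the "above the fold" labeling is illustrative):

    // Record which images are visible in the initial viewport.
    const aboveFold = new Set<string>();

    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          aboveFold.add((entry.target as HTMLImageElement).currentSrc);
        }
        // One initial report per image is enough for this purpose.
        observer.unobserve(entry.target);
      }
    });

    document.querySelectorAll("img").forEach((img) => observer.observe(img));

The URLs collected in aboveFold can then be joined against resource timing entries to focus optimization on above-the-fold content first.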

Impact of ads
With ads populating large numbers of pages, understanding their impact is important. RUM tools can identify both the performance impact of an ad - when the ad was fetched and how long it took to download - and user engagement, such as how many users watched a video ad in its entirety.

Correlation to business metrics
While there have been many articles describing the impact of performance on business at eCommerce companies - for example, the impact on conversions - the same isn't true for media companies, which are more interested in scroll depth, virality of content, and session length. Soasta recently announced an Activity Impact Score as a way to correlate web performance to session length. Measurements like the Activity Impact Score help non-eCommerce companies measure and monitor how performance positively or negatively impacts user engagement. Further, with bonuses tied to metrics such as page views, organizations are increasingly scrutinizing RUM metrics and insisting on verifying the integrity of these tools.

End device support & ease of measurement
With the plethora of device types and browsers on the market, you need to ensure the RUM solution implemented will capture traffic from the majority of your users. In some Asian countries, over 35% of browsers and devices are unknown, which presents an interesting challenge: should you just forget about these users, or find a way to reliably measure performance on these unknown devices?

Another important factor to consider is how easy it is to enable RUM measurements. Does it require manual instrumentation of every web page, or is it done automatically by injecting a script? A sketch of the typical loader pattern follows.
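
Many RUM products take the script-injection approach: a small snippet, added once, pulls in the measurement code asynchronously. A minimal sketch - the vendor URL is a hypothetical placeholder:

    // Inject the RUM measurement script without blocking page rendering.
    const rumScript = document.createElement("script");
    rumScript.src = "https://rum.example.com/loader.js"; // hypothetical URL
    rumScript.async = true;
    document.head.appendChild(rumScript);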

End to end perspective
Performance issues can arise anywhere along the delivery path, from the server to the end user's device. Zeroing in on the problem quickly requires correlating metrics from the end user, the last mile, the delivery network, and the server.

Dynamic thresholds and alerts
The connectivity of an end user's device can change throughout the day. At work, they may be browsing the internet on a high-speed connection; on the commute home, they may be on a mobile device with high latency and congestion; and at night, they may be at home on a DSL or fiber connection. Expecting the same level of performance at all times is unrealistic; thresholds that vary with conditions, as in the sketch below, are more indicative of the real user experience.
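
A minimal sketch of condition-aware alerting, assuming the Network Information API (navigator.connection), which is not universally supported; the budgets are illustrative, not recommendations:

    // Illustrative page-load budgets (ms) per effective connection type.
    const budgets: Record<string, number> = {
      "4g": 3000,
      "3g": 8000,
      "2g": 15000,
      "slow-2g": 20000,
    };

    // navigator.connection is non-standard, so it isn't in the DOM typings.
    const connection = (navigator as any).connection;
    const effectiveType: string = connection?.effectiveType ?? "4g";
    const budget = budgets[effectiveType] ?? budgets["4g"];

    console.log(`Alert threshold for ${effectiveType}: ${budget} ms`);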

What solutions exist today
In addition to commercial solutions like Soasta, New Relic, and Google Analytics' Site Speed, there are three specifications from the W3C that enable you to build your own solution: navigation timing, resource timing, and user timing. Browser support for these specifications varies, with navigation timing having the greatest adoption, since it has been available the longest.

Navigation timing captures the timing of various events as a page loads, from the HTTP request until all content has been received, parsed, and executed by the browser. This provides high-level information on the overall page load time and can be used to get details on items such as DNS lookups and latency.

Figure 2 shows the various timings available from the navigation timing API:

Figure 2 - Navigation timing events

Among the many metrics that can be computed from navigation timing events, the following are most often used (a sketch computing them follows the list):

  • TimeToFirstByte = responseStart - requestStart
  • TimeToInteractive = domInteractive - requestStart
  • TimeToPageLoad = loadEventEnd - requestStart
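
A minimal sketch computing these three metrics with the Navigation Timing (Level 1) API; it waits for the load event so that loadEventEnd has been populated:

    window.addEventListener("load", () => {
      // loadEventEnd is only set after the load event finishes dispatching,
      // so defer the read with a zero-delay timeout.
      setTimeout(() => {
        const t = performance.timing;
        console.log({
          timeToFirstByte: t.responseStart - t.requestStart,
          timeToInteractive: t.domInteractive - t.requestStart,
          timeToPageLoad: t.loadEventEnd - t.requestStart,
        });
      }, 0);
    });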

While page-level information is helpful, you may want to know how the various resources on a page perform. This is where the resource timing specification comes in. Resource timing enables you to collect complete timing information for any resource within a page, with some restrictions for security purposes. The resource timings available for the request and response are shown in Figure 3.

Figure 3 - Resource timing events
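
A minimal sketch that lists a page's slowest resources via the Resource Timing API; note that cross-origin resources expose only coarse timings unless the server opts in with a Timing-Allow-Origin header:

    const entries = performance.getEntriesByType(
      "resource"
    ) as PerformanceResourceTiming[];

    // Report the five resources with the longest total load duration.
    const slowest = [...entries]
      .sort((a, b) => b.duration - a.duration)
      .slice(0, 5);

    for (const r of slowest) {
      console.log(`${r.name}: ${r.duration.toFixed(0)} ms (${r.initiatorType})`);
    }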

Once the resource and navigation timing specifications were available, the next step was to provide the ability to gather custom metrics to understand where an application is spending the most time. The user timing specification allows marks to be inserted in code, enabling the measurement of time deltas between marks. This makes it possible to determine things like when a hero image is displayed, when fonts are loaded, and when scripts finish blocking.
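
A minimal sketch using the User Timing API to measure a custom interval between two marks (the mark names are illustrative):

    performance.mark("app-init-start");
    // ... application bootstrapping work happens here ...
    performance.mark("app-init-end");
    performance.measure("app-init", "app-init-start", "app-init-end");

    const [measure] = performance.getEntriesByName("app-init");
    console.log(`App initialization took ${measure.duration.toFixed(0)} ms`);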

Evolving quality measurements
As quality measurements evolve, they will become better at providing actionable insights that recommend specific improvements to mitigate performance bottlenecks - not only at the browser end point, but from an end-to-end perspective.

Increasingly, RUM measurements will leverage machine learning to more deeply understand traffic patterns and to adapt dynamically as those patterns change.

RUM measurements will also evolve to capture when a given resource starts and completes execution in the browser.

Also, device-agnostic solutions will no doubt emerge. Metrics need to be captured across the entire spectrum of user endpoints; failing to gather statistics from the large percentage of users whose browsers don't support the measurement technology leaves gaping blind spots in your visibility into the end user experience.

*    *    *

RUM gives organizations the ability to isolate and identify the cause of performance degradation in a web application, whether it is related to the browser, third-party content, the network provider, the CDN, or the infrastructure. RUM is one piece of the puzzle; used in conjunction with other tools and analytics, it can quickly point the way to web application optimizations.

More Stories By Krishnan Manjeri

Krishnan is a seasoned product manager and is currently a Director of Product Management at InstartLogic, responsible for Data Platform, Analytics and Performance. He has nearly two decades of experience leading and delivering solutions - in capacities ranging from Engineering to Marketing and Product Management - for a variety of Fortune 500 companies in the areas of Analytics, Telecommunication Networks, Application Delivery and Security. He has extensive experience leading cross-functional teams and delivering multiple millions of dollars in revenue in both the Enterprise and Service Provider markets. He has an MS in Computer Science from Case Western Reserve University and an MBA from Santa Clara University, and holds a couple of patents in the area of Networking and Security.
