Six Requirements for Synthetic User Monitoring | @DevOpsSummit #APM #DevOps

Violate at your own risk

You always want to know that your website is operating at its best, but how do you know that's actually the case? It's not so easy to see behind the curtain when it comes to your web infrastructure. We've long used proxy metrics like CPU load or server availability to ensure that a server is "up," but these measurements don't provide enough data. In fact, as websites become more complex and change more frequently, these measurements become less useful.

A website visit may involve a wide range of components, many of which are off-site or not easily monitored. External ad servers, web service APIs, content delivery networks, and even specialized back-end systems - each of these represents a potential bottleneck that could impact an important transaction without raising an appropriate red flag.

So what do you have in your toolbox to help address this? Load testing, real-user monitoring, and site instrumentation all help you prepare for and monitor your website visitors' experiences. But one more tool that's essential for a performance engineer is synthetic user monitoring. It's a critical part of a web monitoring strategy; however, for many people, it's uncharted territory. So, in this post we want to show you what's required for proper synthetic user monitoring.

What Is Synthetic User Monitoring? How Does It Help?
Simply put, synthetic user monitoring lets you create virtual users that operate externally to your system and mimic real user behavior by running through user paths on your website or application, while you measure the performance of the system. You do this while the application is in use, on a live production system. Why? Because that's how you can see what your users are seeing, without requiring real users to execute those tasks.
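To make that concrete, here's a minimal sketch of the idea in Python, using the requests library to probe a page the way a synthetic user would. The URL and threshold are placeholders, not real values, and a full monitoring product would do far more than this:

```python
import time
import requests

# Hypothetical endpoint and response-time budget -- substitute your own.
URL = "https://www.example.com/"
MAX_SECONDS = 2.0

def synthetic_check(url: str) -> None:
    """Fetch a page as a synthetic user would and time the round trip."""
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start
    response.raise_for_status()  # treat HTTP errors as check failures
    status = "OK" if elapsed <= MAX_SECONDS else "SLOW"
    print(f"{status}: {url} returned {response.status_code} in {elapsed:.2f}s")

if __name__ == "__main__":
    synthetic_check(URL)
```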

Take this example: you have a check-out cart on your site - a high-value transaction, and therefore one that deserves a flawless experience. Not everyone gets to the cart. Most people are browsing the rest of the site. But when people do get there, you want to make sure they have an amazing experience.

If you measure that experience only when a user is actually checking out, you have no way of knowing in advance what it will be like. You put the high-value transaction in jeopardy because you have no data about how well it will perform until a real person is going through it.

This is exactly what synthetic users are for. You build a simulated transaction that mimics a user's most common tasks: add to cart, checkout, log in, etc. As load increases on the production site and more and more visitors get ready to buy, you can continually check to see what the experience is like along those key tasks without putting any individual visitors at risk. That way, you know about problems your users might encounter before they do.
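As a sketch of what such a scripted transaction might look like at the HTTP level - the paths and payloads below are hypothetical, and a real tool would drive a full browser rather than raw requests:

```python
import time
import requests

BASE = "https://shop.example.com"  # placeholder storefront

def timed_step(session, name, method, path, **kwargs):
    """Run one step of the transaction and report how long it took."""
    start = time.monotonic()
    resp = session.request(method, BASE + path, timeout=10, **kwargs)
    print(f"{name}: {resp.status_code} in {time.monotonic() - start:.2f}s")
    resp.raise_for_status()

def checkout_transaction():
    with requests.Session() as s:  # the session keeps cookies across steps
        timed_step(s, "log in", "POST", "/login",
                   data={"user": "synthetic", "password": "secret"})
        timed_step(s, "add to cart", "POST", "/cart", data={"sku": "1234"})
        timed_step(s, "checkout", "POST", "/checkout")

if __name__ == "__main__":
    checkout_transaction()
```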

Six Mandatory Requirements for Proper Synthetic User Monitoring
What should you be looking for in a synthetic user monitoring tool? Here are six attributes that definitely belong on your list of key requirements.

1. Support For Complex Application Scenarios and Advanced Protocols
Synthetic users are great for application transactions that really matter, and these user paths are rarely simple. Your synthetic user monitoring solution should include support for interacting with and navigating through a wide range of web technologies: Flash, HTML5, Google SPDY, Push, WebSocket, and other current web and mobile technologies.
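For instance, a WebSocket endpoint can be probed directly. This sketch uses the third-party Python websockets package against a placeholder URL, and assumes the server echoes what it receives:

```python
import asyncio
import time

import websockets  # third-party: pip install websockets

WS_URL = "wss://example.com/live"  # hypothetical WebSocket endpoint

async def probe_websocket():
    """Open a WebSocket, send a message, and time the echo."""
    start = time.monotonic()
    async with websockets.connect(WS_URL) as ws:
        await ws.send("ping")
        reply = await ws.recv()  # assumes the server echoes back
    elapsed = time.monotonic() - start
    print(f"WebSocket round trip: {reply!r} in {elapsed:.2f}s")

if __name__ == "__main__":
    asyncio.run(probe_websocket())
```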

With this support under your belt, you aren't limited in how you put synthetic users to work. Take a look at your web analytics and find out what your most common paths through the site are. Then recreate those in your synthetic user monitoring tool, exactly as your users experience them.

Beyond that, think about how you can leverage synthetic testing for new features, before you let real users in. Deploy a feature on a special build that's running on the same server. Don't direct live users to it yet, but plan some paths for synthetic users. Then, at times of peak load, run the synthetic users through the script to see what the experience is like.

There are plenty of other ways of leveraging synthetic user monitoring to be more proactive. By thinking about the future first, you'll use synthetic user monitoring to its maximum benefit. Check out more tips here.

2. No-Code Scripting of Test Scenarios
Once it's set up, synthetic user monitoring is a fantastic tool. What holds many people back is writing a script that lays out the entire decision and process tree a user could follow. So you want a tool that makes this as easy and frictionless as possible.

As stated above, you want to create scripts that are modeled after real user behavior. A no-coding solution for script development makes this process significantly easier because you work within a graphical interface, putting blocks of functionality together without the pitfalls and complexities of manually written scripts.

You can also incorporate other attributes of user behavior into your scripts - for example, connection speeds and browser behaviors. You can execute scenarios from various geographies for further realism in your testing.

A no-coding solution for test scripts means you can quickly churn out a robust, representative library of tests that will accurately simulate your users.
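One way to picture what a no-code tool stores under the hood is a declarative step list like the hypothetical Python structure below. The schema is purely illustrative, not any vendor's actual format; each entry corresponds to one graphical "block":

```python
# Hypothetical declarative scenario -- every field name here is
# illustrative. A monitoring engine would interpret this data.
checkout_scenario = {
    "name": "checkout-happy-path",
    "geography": "us-east",   # where the synthetic user runs from
    "connection": "4g",       # emulated network profile
    "browser": "chrome",
    "steps": [
        {"action": "open",   "url": "https://shop.example.com/"},
        {"action": "click",  "selector": "#product-1234 .add-to-cart"},
        {"action": "click",  "selector": "#cart .checkout"},
        {"action": "fill",   "selector": "#email",
         "value": "synthetic@example.com"},
        {"action": "click",  "selector": "#place-order"},
        {"action": "assert", "selector": ".confirmation", "timeout_s": 5},
    ],
}
```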

3. Shared Scripts Between Synthetic User Monitoring and Load Testing
You gain a lot of efficiency by reusing your load testing cases in your synthetic user monitoring tool. If you think about it, there isn't much difference between what you want to test in a load test and what you want to test in synthetic user monitoring. In both cases, you are looking to leverage realistic test scenarios to see how the system behaves before a real user experiences a problem.

So repackage your load tests as synthetic user monitoring tests, and look for a tool that allows you to share them between these different testing environments. You'll want to be able to easily port your load tests into synthetic user monitoring tests, and you may even find that a new synthetic user monitoring scenario would make a really good structured load test. You can often use the same data too, which is a great way to test in production and test in the cloud without putting data at risk.
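To picture it, here's a rough sketch in Python of how one scripted scenario (a placeholder HTTP fetch here) could serve both purposes: run once on a schedule for monitoring, or fanned out across many virtual users for a load test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

def run_scenario():
    """Stand-in for a shared scripted scenario (placeholder URL)."""
    start = time.monotonic()
    requests.get("https://shop.example.com/", timeout=10)
    print(f"scenario finished in {time.monotonic() - start:.2f}s")

def monitor_once():
    """Synthetic monitoring: one virtual user, run on a schedule."""
    run_scenario()

def load_test(virtual_users: int = 50):
    """Load test: the very same scenario, executed concurrently."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(virtual_users):
            pool.submit(run_scenario)

if __name__ == "__main__":
    monitor_once()
    load_test(10)
```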

Your mom always told you to recycle. Here's just another way to do that!

4. Realistic Network Emulation
A good synthetic user test will simulate a real user as accurately as possible, and one key characteristic of that experience is the network. Not everyone connects to the Internet with the same high-quality connection. You'll want a synthetic user monitoring tool that emulates various network speeds (3G, 4G, Wi-Fi) as well as network impairments like packet loss and latency.
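Outside a dedicated tool, you can approximate this kind of throttling yourself. The sketch below drives Chrome's DevTools network conditions from Selenium; note that packet loss isn't covered by this API and typically needs OS-level emulation such as Linux netem:

```python
from selenium import webdriver  # pip install selenium; requires Chrome

driver = webdriver.Chrome()

# Approximate a congested 3G connection: 300 ms of added latency and
# roughly 800 kbit/s of throughput in each direction.
driver.set_network_conditions(
    offline=False,
    latency=300,                      # additional round-trip latency, ms
    download_throughput=100 * 1024,   # bytes per second
    upload_throughput=100 * 1024,
)

driver.get("https://www.example.com/")  # placeholder page
print("Loaded under throttled network:", driver.title)

# Flip the connection off entirely to see how the app handles errors.
driver.set_network_conditions(offline=True, latency=0,
                              download_throughput=0, upload_throughput=0)
driver.quit()
```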

When everything works smoothly, users are likely to have a good experience. But things don't always go smoothly - that's when errors occur and users complain. How does your application perform in the face of these errors? That's a key question you'll want to ask and one of the ways you can leverage a modern synthetic user monitoring tool.

Introduce errors into your test scenarios to see how your app behaves under stress. If there is a network error along the way, do client apps suddenly start drawing down lots of data as part of a re-sync protocol? What happens when this takes place at scale? The data you collect through your synthetic user monitoring tool has a tremendous amount of value and can help improve the system - and the user experience - in many ways.

5. Emulation of Mobile Devices
If you haven't gotten the memo yet, web users are mobile. You should no longer be thinking about these as two separate environments or even two separate user bases. Today, the rule is "mobile first." So you need to be monitoring both your mobile users and your desktop users as a common set of visitors.

Your synthetic user monitoring tool should have the ability to emulate a wide range of mobile devices so you can determine how those users may be experiencing your website, and particularly whether there are any differences between what someone sees on a phone and on a computer.
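As a rough illustration of the mechanics, Chrome's built-in device profiles can be enabled from Selenium. A dedicated monitoring tool would offer far more devices, but the sketch below shows the principle (available device names depend on your Chrome version):

```python
from selenium import webdriver  # pip install selenium; requires Chrome

options = webdriver.ChromeOptions()
# Emulate a phone via one of Chrome's built-in device profiles.
options.add_experimental_option("mobileEmulation", {"deviceName": "Pixel 2"})

driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com/")  # placeholder page
print("Viewport width as seen by the page:",
      driver.execute_script("return window.innerWidth"))
driver.quit()
```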

Be sure to consider mobile load testing and monitoring right from the start, when setting up your initial synthetic monitoring and tests. Leverage your analytics data to find out how many users are on mobile and what they are doing. Don't treat this as secondary - today's web users are on their devices, maybe even more so than their computers.

Are you still trying to figure out if a dedicated mobile testing environment is important? Check out our infographic, Mobile-First Performance - it may persuade you.

6. Real-Time Dashboards and Notifications
The synthetic monitoring system creates simulated users within a fully controllable browser, so the testing system has complete access to all the data inside the browser (unlike real user monitoring, which happens inside a sandboxed JavaScript instance). The detail that can be garnered from this is staggering: full waterfall charts, resource-by-resource performance, and screenshots/videos of the page load in action to determine paint times.

Make sure your synthetic user monitoring solution takes advantage of all the information available and makes it accessible through a rich set of dashboards and real-time notifications. You should have access to real-time and historical data, along with the ability to set and monitor key performance indicators (KPIs). You'll also want to configure alerts so your monitoring team can take action when SLAs are violated.

This is a critical requirement, as it turns your synthetic user monitoring tool from a learning system to a doing system. Regular synthetic tests can monitor performance and immediately alert staff to fix a problem before a user experiences it.
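As a sketch of the alerting half, here's a minimal Python check that compares one measured step against an SLA threshold and posts to a hypothetical webhook when it's violated; the URL and threshold are placeholders:

```python
import requests

WEBHOOK = "https://hooks.example.com/alerts"  # hypothetical alert webhook
SLA_SECONDS = 2.0

def check_kpi(step_name: str, elapsed: float) -> None:
    """Compare one measured step against its SLA and raise an alert."""
    if elapsed <= SLA_SECONDS:
        return
    message = (f"SLA violation: '{step_name}' took {elapsed:.2f}s "
               f"(threshold {SLA_SECONDS:.1f}s)")
    # Notify the on-call channel; any chat or paging webhook works here.
    requests.post(WEBHOOK, json={"text": message}, timeout=10)

# Example: feed in a measurement from a synthetic run.
check_kpi("checkout", 3.4)
```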

Stay Sensible with NeoSense
You can meet these requirements - and then some - with NeoSense. With this monitoring system on your side, you'll be able to work with complex business applications and simulate the most complicated of user paths. It's fast, it's powerful, and it integrates with the newest technology. Get more information about NeoSense here!

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
