Six Requirements for Synthetic User Management | @DevOpsSummit #APM #DevOps

Violate at your own risk

You always want to know that your website is operating at its best, but how do you know that's actually the case? It's not so easy to see behind the curtain when it comes to your web infrastructure. We've long used proxy metrics like CPU load or server availability to ensure that a server is "up," but these measurements don't provide enough data. In fact, as websites become more complex and change more frequently, these measurements become less useful.

A website visit may involve a wide range of components, many of which are off-site or not easily monitored. External ad servers, web service APIs, content delivery networks, and even specialized back-end systems - each of these represents a potential bottleneck that could impact an important transaction without raising a red flag.

So what do you have in your toolbox to help address this? Load testing, real-user monitoring, and site instrumentation all help you prepare for and monitor your website visitors' experiences. But one more tool is essential for a performance engineer: synthetic user monitoring. It's a critical part of a web monitoring strategy, but for many people it's uncharted territory. So in this post we want to show you what proper synthetic user monitoring requires.

What Is Synthetic User Monitoring? How Does It Help?
Simply put, synthetic user monitoring lets you create virtual users that operate externally to your system and mimic real user behavior by running through user paths on your website or application, while you measure the performance of the system. You do this while the application is in use, on a live production system. Why? Because that's how you can see what your users are seeing, without requiring real users to execute those tasks.

Take this example: you have a check-out cart on your site - a high-value transaction, and therefore one that deserves a flawless experience. Not everyone gets to the cart. Most people are browsing the rest of the site. But when people do get there, you want to make sure they have an amazing experience.

If you measure that experience only when a user is actually checking out, you have no way of knowing in advance what that experience will be like. You put the high-value transaction in jeopardy because you have no data about how well it will perform until a real person is going through it.

This is exactly what synthetic users are for. You build a simulated transaction that mimics a user's most common tasks: add to cart, checkout, log in, etc. As load increases on the production site and more and more visitors get ready to buy, you can continually check to see what the experience is like along those key tasks without putting any individual visitors at risk. That way, you know about problems your users might encounter before they do.
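In code terms, a synthetic transaction is just a scripted user path whose steps are timed and checked. Here's a minimal sketch in Python - the step callables are stubs, and a real monitor would drive a headless browser or HTTP client through each action:

```python
import time

def run_synthetic_transaction(steps):
    """Execute a scripted user path and time each step.

    `steps` maps step name -> callable performing the action (in a real
    monitor, driving a headless browser or HTTP client). Returns per-step
    timings in milliseconds plus an overall pass/fail flag.
    """
    results = {"steps": {}, "passed": True}
    for name, action in steps.items():
        start = time.perf_counter()
        try:
            action()
            ok = True
        except Exception:
            ok = False
            results["passed"] = False
        elapsed_ms = (time.perf_counter() - start) * 1000
        results["steps"][name] = {"ok": ok, "ms": round(elapsed_ms, 1)}
    return results

# Hypothetical checkout path; the lambdas stand in for real actions.
checkout = {
    "load_home": lambda: None,
    "add_to_cart": lambda: None,
    "checkout": lambda: None,
}
report = run_synthetic_transaction(checkout)
```

Run a script like this on a schedule against production and you get a continuous read on the checkout experience without waiting for a real buyer to hit a problem.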

Six Mandatory Requirements for Proper Synthetic User Monitoring
What should you look for in a synthetic user monitoring tool? Here are six attributes that belong on your list of key requirements.

1. Support For Complex Application Scenarios and Advanced Protocols
Synthetic users are great for the application transactions that really matter, and those user paths are rarely simple. Your synthetic user monitoring solution should support interacting with and navigating through a wide range of web technologies: Flash, HTML5, Google SPDY, Push, WebSocket, and other current web and mobile technologies.

With this support under your belt, you aren't limited in how you put synthetic users to work. Take a look at your web analytics and find out what your most common paths through the site are. Then recreate those in your synthetic user monitoring tool, exactly as your users experience them.

Beyond that, think about how you can leverage synthetic testing for new features before you let real users in. Deploy a feature on a special build running on the same server. Don't direct live users to it yet, but plan some paths for synthetic users. Then, at times of peak load, run the synthetic users through the script to see how the experience holds up.

There are plenty of other ways of leveraging synthetic user monitoring to be more proactive. By thinking about the future first, you'll use synthetic user monitoring to its maximum benefit. Check out more tips here.

2. No-Code Scripting of Test Scenarios
Once it's set up, synthetic user monitoring is a fantastic tool. What holds many people back is writing a script that lays out the entire decision and process tree a user might follow. So you want a tool that makes this as easy and frictionless as possible.

As stated above, you want to create scripts that are modeled after real user behavior. A no-coding solution for script development makes this process significantly easier because you work within a graphical interface, putting blocks of functionality together without the pitfalls and complexities of manually written scripts.

You can also incorporate other attributes of user behavior into your scripts - for example, connection speeds and browser behaviors. You can execute scenarios from various geographies for further realism in your testing.

A no-coding solution for test scripts means you can quickly churn out a robust, representative library of tests that will accurately simulate your users.
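Under the hood, a no-code tool typically stores each scenario in a declarative form and interprets the blocks at run time. Here's a minimal sketch of that idea - the scenario format is hypothetical, and the stub handlers stand in for a real browser driver:

```python
# Hypothetical declarative scenario, similar in spirit to what a
# no-code tool might export: each block names an action and its parameters.
scenario = [
    {"action": "open", "url": "https://example.com"},
    {"action": "click", "selector": "#add-to-cart"},
    {"action": "assert_text", "selector": ".cart-count", "expect": "1"},
]

def execute(scenario, handlers):
    """Interpret scenario blocks by dispatching each one to a handler."""
    log = []
    for block in scenario:
        params = {k: v for k, v in block.items() if k != "action"}
        handlers[block["action"]](**params)
        log.append(block["action"])
    return log

# Stub handlers; a real runner would bind these to a browser driver.
handlers = {
    "open": lambda url: None,
    "click": lambda selector: None,
    "assert_text": lambda selector, expect: None,
}
trace = execute(scenario, handlers)  # → ["open", "click", "assert_text"]
```

Because the scenario is data rather than code, the graphical editor and the runner can share it, and non-programmers can assemble new paths from the same blocks.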

3. Shared Scripts Between Synthetic User Monitoring and Load Testing
You gain a lot of efficiency by reusing your load testing cases in your synthetic user monitoring tool. If you think about it, there isn't much difference between what you want to test in a load test and what you want to test in synthetic user monitoring. In both cases, you are leveraging realistic test scenarios to see how the system behaves before a real user experiences a problem.

So repackage your load tests as synthetic user monitoring tests, and look for a tool that allows you to share them between these different testing environments. You'll want to be able to easily port your load tests into synthetic user monitoring tests, and you may even find that a new synthetic user monitoring scenario would make a really good structured load test. You can often use the same data too, which is a great way to test in production and test in the cloud without putting data at risk.

Your mom always told you to recycle. Here's just another way to do that!
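The reuse described above can be sketched directly: the same scripted scenario runs once for monitoring, or concurrently as a load test. The scenario steps here are stubs, not any particular tool's API:

```python
import threading

def run_once(scenario):
    """One synthetic-monitoring pass: execute every scripted step."""
    for step in scenario:
        step()
    return True

def run_load(scenario, virtual_users):
    """Reuse the same scenario as a load test by running it concurrently."""
    results = []
    lock = threading.Lock()

    def worker():
        ok = run_once(scenario)
        with lock:
            results.append(ok)

    threads = [threading.Thread(target=worker) for _ in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

scenario = [lambda: None, lambda: None]   # stubbed steps
monitoring_ok = run_once(scenario)        # monitoring: one synthetic user
load_results = run_load(scenario, 25)     # load test: 25 virtual users, same script
```

The key design point is that the scenario itself never changes - only the harness around it decides whether it runs as one watchful user or twenty-five impatient ones.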

4. Realistic Network Emulation
A good synthetic user test will simulate a real user as accurately as possible, and one key characteristic of that experience is the network. Not everyone connects to the Internet with the same high-quality connection. You'll want a synthetic user monitoring tool that emulates various network speeds (3G, 4G, Wi-Fi) as well as network errors like packet loss and latency.

When everything works smoothly, users are likely to have a good experience. But things don't always go smoothly - that's when errors occur and users complain. How does your application perform in the face of these errors? That's a key question you'll want to ask and one of the ways you can leverage a modern synthetic user monitoring tool.

Introduce errors into your test scenarios to see how your app behaves under stress. If there is a network error along the way, do client apps suddenly start drawing down lots of data as part of a re-sync protocol? What happens when this takes place at scale? The data you collect through your synthetic user monitoring tool has a tremendous amount of value and can help improve the system - and the user experience - in many ways.
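One simple way to emulate network conditions is to wrap each request with injected latency and random packet loss. Here's a sketch under those assumptions - the connection profiles are hypothetical, whereas real tools ship calibrated ones:

```python
import random
import time

# Hypothetical connection profiles; real tools ship calibrated ones.
PROFILES = {
    "3g":   {"latency_ms": 300, "loss_rate": 0.05},
    "wifi": {"latency_ms": 20,  "loss_rate": 0.001},
}

def emulate_network(request_fn, latency_ms=200, loss_rate=0.05, rng=None):
    """Wrap a request with simulated latency and random packet loss.

    Returns the response, or None when the simulated network drops the
    request - forcing callers to exercise their error-handling path.
    """
    rng = rng or random.Random()
    if rng.random() < loss_rate:
        return None  # simulated drop
    time.sleep(latency_ms / 1000)
    return request_fn()

resp = emulate_network(lambda: "200 OK", rng=random.Random(1), **PROFILES["wifi"])
```

Seeding the random generator makes a given error pattern reproducible, so you can replay the exact failure sequence that exposed a problem.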

5. Emulation of Mobile Devices
If you haven't gotten the memo yet, web users are mobile. You should no longer think of mobile and desktop as two separate environments or even two separate user bases. Today, the rule is "mobile first." So you need to monitor both your mobile users and your desktop users as a common set of visitors.

Your synthetic user monitoring tool should be able to emulate a wide range of mobile devices so you can determine how those users may be experiencing your website, and in particular whether there are any differences between what someone sees on their phone and on their computer.

Be sure to consider mobile load testing and monitoring right from the start, when setting up your initial synthetic monitoring and tests. Leverage your analytics data to find out how many users are on mobile and what they are doing. Don't treat this as secondary - today's web users are on their devices, maybe even more so than their computers.
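Device emulation usually boils down to re-running the same check under different device profiles (user agent, viewport, and so on). A sketch with a hypothetical profile catalog:

```python
# Hypothetical device profiles; real tools ship a catalog of these.
DEVICES = {
    "desktop": {"user_agent": "Mozilla/5.0 (Windows NT 10.0)", "viewport": (1920, 1080)},
    "iphone":  {"user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0)", "viewport": (390, 844)},
    "android": {"user_agent": "Mozilla/5.0 (Linux; Android 12)", "viewport": (412, 915)},
}

def run_for_devices(check_fn, devices):
    """Run the same synthetic check once per emulated device so
    per-device differences stand out side by side."""
    return {name: check_fn(profile) for name, profile in devices.items()}

# check_fn would normally configure a browser with the profile first.
results = run_for_devices(
    lambda profile: {"ok": True, "viewport": profile["viewport"]}, DEVICES
)
```

Comparing the per-device results side by side is what surfaces the "works on desktop, broken on Android" class of problems early.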

Are you still trying to figure out if a dedicated mobile testing environment is important? Check out our infographic, Mobile-First Performance - it may persuade you.

6. Real-Time Dashboards and Notifications
The synthetic monitoring system creates simulated users within a fully controllable browser, so the testing system has complete access to all the data inside the browser (unlike real user monitoring, which happens inside a sandboxed JavaScript instance). The detail that can be garnered from this is staggering, including full waterfall charts, resource-by-resource performance, and screenshots or videos of the page load in action to determine paint times.

Make sure your synthetic user monitoring solution takes advantage of all the information available and makes it accessible through a rich set of dashboards and real-time notifications. You should have access to real-time and historical data, along with the ability to set and monitor key performance indicators (KPIs). You'll also want to configure alerts so your monitoring team can take action when SLAs are violated.

This is a critical requirement, as it turns your synthetic user monitoring tool from a learning system to a doing system. Regular synthetic tests can monitor performance and immediately alert staff to fix a problem before a user experiences it.

Stay Sensible with NeoSense
You can meet these requirements - and then some - with NeoSense. With this monitoring system on your side, you'll be able to work with complex business applications and simulate the most complicated of user paths. It's fast, it's powerful, and it integrates with the newest technology. Get more information about NeoSense here!

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
