
Testing Tips For Today's Tech: HTML5, WebSockets, RTMP, Adaptive Bitrate Streaming

One of the most dangerous things that can happen to a development team is complacency. The modern world moves so fast, and new technologies are coming out all the time. If you stop innovating and stop adapting, you'll be sunk. It's critical for web developers to be continuously responding to the changes around them.

The web community is constantly churning out new technologies that make it easier to develop the applications that savvy users are demanding. It may be obvious, but new technologies often require new ways of thinking about testing - sometimes new tools, other times entirely new testing methods. You just can't rely on the same old same old when things change as much as the web does.

This means that you, as a performance tester, must constantly be preparing for the latest the web has to offer. We thought we'd provide some tips on how to conduct performance testing on newer technologies, so you can stay one step ahead of the curve.

HTML5 + CSS3
HTML5 and CSS3 together were one of the biggest transformations to happen to web development in the past decade. With the introduction of these W3C specifications, a common standard for rich media and web interactivity was finally put in place, one that applies across all browsers on desktops, laptops, set-top boxes, and mobile devices.

Since most of what happens in an HTML5 application takes place on the client, much of the performance testing job comes in the form of end-user experience testing. You want to know that your client devices can handle all the processing involved in a high-fidelity HTML5 app. For example, a processor-heavy client-side HTML5 app makes a good benchmark for comparing the client-side experience across different devices.
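If you want a quick way to take those measurements yourself, here's a minimal sketch that drives a browser with Selenium and reads the Navigation Timing API to compare load and DOM processing times across devices. It assumes Selenium and a local ChromeDriver are installed, and the URL is just a placeholder for your own HTML5 app - a starting point, not a full benchmark harness.

```python
# Minimal sketch: read client-side load timings for an HTML5 page via the
# Navigation Timing API. Assumes Selenium + ChromeDriver are installed;
# the URL is a placeholder for your own app.
from selenium import webdriver

APP_URL = "https://example.com/html5-app"  # hypothetical URL

driver = webdriver.Chrome()
try:
    driver.get(APP_URL)
    load_ms = driver.execute_script(
        "return performance.timing.loadEventEnd - performance.timing.navigationStart")
    dom_ms = driver.execute_script(
        "return performance.timing.domComplete - performance.timing.domLoading")
    print(f"Full page load: {load_ms} ms, DOM processing: {dom_ms} ms")
finally:
    driver.quit()
```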

There are, however, some implications of HTML5 that do matter when viewed through the lens of server-side scalability and load testing. These are:

  • The complexity of the app. HTML5 apps are often very dynamic, so they don't necessarily all load at once. Make sure your test scenarios fetch all the necessary sub-components, are running correlated scripts, and are exercising AJAX calls appropriately. These will matter under load.
  • HTML5 also supports server-sent events, so you'll want test infrastructure that can receive the various types of push notifications and server-sent messages in a normalized way (a minimal client for these is sketched right after this list).
  • Finally, you'll want a good way to record traffic back and forth from the server and then replay those communications as a scalable series of simulated users, with the ability to insert randomized dummy data into each app instance. The richness of an HTML5 app can make these traffic patterns more complicated than a legacy app would typically be. You'll want the flexibility in your testing infrastructure and scenario design to fully exercise HTML5 as a realistic simulation of load.
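To make the server-sent events bullet concrete, here's a minimal sketch of consuming an SSE stream over plain HTTP with the requests library. The endpoint is hypothetical, and a real load test would run many of these clients in parallel while recording message arrival times.

```python
# Minimal SSE client sketch. The endpoint is a placeholder; a real test
# would run many of these clients concurrently and record arrival times.
import requests

SSE_URL = "https://example.com/events"  # hypothetical endpoint

with requests.get(SSE_URL, stream=True,
                  headers={"Accept": "text/event-stream"}) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print("server-sent event:", line[len("data:"):].strip())
```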

Ultimately, HTML5 has new communication features that you need to accommodate and a higher degree of potential complexity in the apps that are built on the platform. But you'll still want to see all the performance metrics and insights that you do with classic HTML application performance tests.

WebSockets
WebSockets grew out of the HTML5 effort, but the technology nonetheless deserves its own callout. WebSocket technology creates a persistent connection between the client and the server, so a server can push info to a client without waiting to piggyback on an incoming HTTP request.

As the web grew more and more interactive, companies started using the browser as a platform for low-latency, real-time apps such as games, real-time communications, and notifications. But because HTTP required every exchange to start with a client request, developers and toolkits were forced to create workarounds that simulated server-pushed events while still fitting into the request-response framework. Those workarounds functioned, but they were inefficient.

With HTML5 and WebSockets, that process became highly efficient. So WebSockets became a great way to push information from the server to the client. However, because they are so speedy and don't require a new connection to be established any time information needs to be communicated in either direction, web developers often use WebSockets as a way of maintaining a fast channel between client and server, even for traditional request-response purposes.
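As a sketch of that request-response-over-WebSocket pattern, the snippet below opens one persistent connection with the Python websockets package and matches each reply to its request by an id field. The endpoint and message shape are assumptions made for illustration; your app's framing will differ.

```python
# Sketch: reuse a single persistent WebSocket for several request-response
# exchanges, correlating replies to requests by an "id" field. The endpoint
# and message format are hypothetical; requires the `websockets` package.
import asyncio
import json
import websockets

WS_URL = "wss://example.com/ws"  # hypothetical endpoint

async def main():
    async with websockets.connect(WS_URL) as ws:
        for i in range(5):
            await ws.send(json.dumps({"id": i, "action": "ping"}))
            reply = json.loads(await ws.recv())
            assert reply.get("id") == i, "reply did not match request"
            print("reply", i, reply)

asyncio.run(main())
```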

When it comes to performance testing, the key to testing WebSockets is simulating the bi-directional nature of your app. Test scenarios aren't always one-way; they aren't always request-response, and they aren't always server-pushed. Applications that employ WebSockets often contain a mix of communication patterns. To build your load test scenarios, you'll want to record and play back WebSocket communications with your app to create realistic testing scenarios. You'll also need to handle messages pushed over WebSockets just as you would handle messages pushed using a traditional request-response, piggyback architecture. Load test variables should include the time it takes to establish a WebSocket connection, as well as the time it takes to send a request over that connection. Finally, don't forget to include tests for both text and binary data.
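Here's one way those load test variables might be captured, again with the Python websockets package: each simulated user records connection setup time plus one text and one binary round trip, and fifty users run concurrently. It assumes, purely for illustration, that the server sends one reply per message it receives; adapt the message handling to your app's real protocol.

```python
# Sketch: per-virtual-user WebSocket timings - connection setup, a text
# round trip, and a binary round trip - for many concurrent users.
# Assumes a (hypothetical) server that replies once per message received.
import asyncio
import os
import time
import websockets

WS_URL = "wss://example.com/ws"  # hypothetical endpoint

async def virtual_user(user_id):
    start = time.perf_counter()
    async with websockets.connect(WS_URL) as ws:
        connect_s = time.perf_counter() - start

        t = time.perf_counter()
        await ws.send(f"hello from user {user_id}")   # text frame
        await ws.recv()
        text_rtt = time.perf_counter() - t

        t = time.perf_counter()
        await ws.send(os.urandom(1024))               # binary frame
        await ws.recv()
        binary_rtt = time.perf_counter() - t
    return connect_s, text_rtt, binary_rtt

async def main():
    results = await asyncio.gather(*(virtual_user(i) for i in range(50)))
    avg_connect, avg_text, avg_binary = (sum(col) / len(results) for col in zip(*results))
    print(f"avg connect {avg_connect:.3f}s, text rtt {avg_text:.3f}s, "
          f"binary rtt {avg_binary:.3f}s")

asyncio.run(main())
```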

One more thing to keep in mind: many tools designed to test WebSockets only handle push notifications, but the way modern apps are built, a lot of request-response traffic runs over WebSockets as well. Make sure your load testing tool can deal with both communication patterns.

If your application doesn't take advantage of WebSockets, and if you care about performance, consider checking the technology out. WebSockets are a great way to boost your interactive applications.

Real-Time Messaging Protocol (RTMP)
RTMP was originally developed by Macromedia (now part of Adobe) for streaming audio, video, and data over the Internet between a server and a Flash player. It was later released as an open specification and is commonly used for Flash and Flex/AIR applications. The protocol supports the AMF, SWF, FLV, and F4V file formats.

Today, most people agree that the video support included within HTML5 will reduce the need for these file formats. However, because there is so much video out there, and much of it is not HTML5-compliant yet, these traditional formats still carry a lot of weight.

Like WebSockets, RTMP creates persistent connections between a server and a client application written in Flash or Flex/AIR. The technology is used to reduce the overhead of establishing and tearing down connections for low-latency or highly interactive apps. Also like WebSockets, performance testing for RTMP typically focuses on optimizing the data that's pushed from the server to the client.

What's unique when testing RTMP apps is that you need testing tools that have the RTMP standard built-in. If your app employs this protocol, you'll want to develop test scenarios that really exercise it. However, RTMP isn't part of the native HTML5 package the way WebSockets is. Your load testing solution should incorporate RTMP directly, and provide a lightweight way of creating virtual users, recording RTMP traffic, processing it appropriately, and playing it back in a realistic way. Your test scenarios will also need to be able to understand and process events that come from the server in order to simulate a diverse and active user population.
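Dedicated RTMP support in your load testing tool is the right long-term answer, but for a rough smoke test you can time how long a handful of simulated viewers take to connect to and probe a stream by shelling out to ffprobe, as sketched below. This assumes ffprobe (part of ffmpeg) is installed with RTMP support, the stream URL is a placeholder, and it is in no way a substitute for protocol-level recording and playback.

```python
# Rough smoke test: time how long each simulated viewer takes to connect
# to and probe an RTMP stream. Assumes ffprobe (from ffmpeg) is installed
# with RTMP support; the stream URL is a placeholder. Not a substitute
# for protocol-level RTMP recording and playback.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

STREAM_URL = "rtmp://example.com/live/stream"  # hypothetical stream

def probe_stream(viewer_id):
    start = time.perf_counter()
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", STREAM_URL],
        capture_output=True, text=True, timeout=30)
    return viewer_id, result.returncode, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    for viewer, rc, secs in pool.map(probe_stream, range(10)):
        print(f"viewer {viewer}: exit code {rc}, probe time {secs:.2f}s")
```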

Adaptive Bitrate Streaming
All the technologies listed above focus primarily on interactive apps and bi-directional communications between servers and clients. Adaptive bitrate streaming is altogether different. Built directly on HTTP, adaptive streaming detects a user's bandwidth and CPU capacity in real time and adjusts the quality of the video stream accordingly. That means you get a different data stream if you're watching a movie on a 4" phone than you would on a 27" monitor at your desk.
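To see what "adjusting the quality" looks like in practice, here's a sketch that fetches a hypothetical HLS master playlist and picks the highest-bandwidth rendition that fits a simulated connection, which is roughly the decision an adaptive player keeps re-making as conditions change. The playlist parsing is deliberately naive; a real test should use a proper HLS parser.

```python
# Sketch: choose an HLS rendition the way an adaptive player would - fetch
# the master playlist, read each variant's BANDWIDTH, and pick the best one
# that fits the measured connection. URL is a placeholder; parsing is naive.
import requests

MASTER_URL = "https://example.com/video/master.m3u8"  # hypothetical playlist
measured_bps = 3_000_000  # pretend the client measured roughly 3 Mbps

lines = requests.get(MASTER_URL).text.splitlines()
variants = []  # (bandwidth, uri) pairs
for i, line in enumerate(lines):
    if line.startswith("#EXT-X-STREAM-INF") and i + 1 < len(lines):
        attrs = dict(part.split("=", 1)
                     for part in line.split(":", 1)[1].split(",") if "=" in part)
        variants.append((int(attrs.get("BANDWIDTH", "0")), lines[i + 1]))

suitable = [v for v in variants if v[0] <= measured_bps]
bandwidth, uri = max(suitable) if suitable else min(variants)
print(f"selected rendition {uri} at {bandwidth} bps")
```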

Adaptive bitrate streaming has become important because it reduces buffering and load times for video. It's all about being able to serve video content the moment the user asks for it. That's why you no longer see those annoying "waiting for video to buffer" messages.

With adaptive bitrate streaming, the interaction between the client and server is very complex, which makes it difficult to simulate in load tests, and the many different kinds of streaming add a whole other layer of complexity. So first, you need a load testing tool that can handle a wide variety of streaming technologies, such as MPEG-DASH, Adobe Dynamic Streaming for Flash, Apple HTTP Live Streaming (HLS), and Microsoft Smooth Streaming.

Second, your testing software should be able to handle a large library of videos. Usually streaming apps involve users accessing a broad set of content. To accurately simulate their behavior, you want to avoid having the same videos get served over and over again. You need to be able to fully exercise the behavior of the population as a whole.

Finally, video streaming is very demanding on a server, but it's not the only thing going on. Apps continue to function in other ways while video is streaming: ads get served, related content is browsed, and transactions get executed, all while the video plays. You'll need to build this into your test scenarios as part of a realistic media load test.
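Putting those last two points together, here's a sketch of a single virtual media viewer: it picks a random video from a catalog, pulls a few of its segments, and hits other hypothetical endpoints (ads, related content) between segments. All of the URLs are placeholders, and a fuller test would run the side traffic truly concurrently with the stream rather than interleaving it.

```python
# Sketch of one virtual media viewer: pick a random video from a catalog,
# download a few of its segments, and exercise other endpoints (ads,
# related content) between segments. All URLs are placeholders.
import random
import requests

CATALOG = [f"https://example.com/videos/{i}/playlist.m3u8" for i in range(200)]
SIDE_REQUESTS = [
    "https://example.com/api/ads",      # hypothetical ad endpoint
    "https://example.com/api/related",  # hypothetical related-content endpoint
]

def virtual_viewer(session, segments=5):
    playlist_url = random.choice(CATALOG)   # avoid hitting the same video repeatedly
    base = playlist_url.rsplit("/", 1)[0]
    playlist = session.get(playlist_url).text
    segment_uris = [l for l in playlist.splitlines() if l and not l.startswith("#")]
    for uri in segment_uris[:segments]:
        session.get(f"{base}/{uri}")                # fetch a media segment
        session.get(random.choice(SIDE_REQUESTS))   # side traffic while "watching"

with requests.Session() as s:
    virtual_viewer(s)
```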

Conclusion
We all know how important it is to stay on top of the latest trends in technology. At Neotys this is something we take very seriously. That's why our team is always looking to see what technologies web developers are using and building those capabilities directly into our products. Have questions about testing a particular technology? We'd love to hear from you.

Photo Credit: Susanne Nilsson

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
