
Getting Real About Memory Leaks
By Matt Heusser

Modern programming languages tend to separate the programmer from memory management. Java programmers, for example, don't have to deal with pointers; they just declare variables and let the built-in garbage collector do its thing.

These garbage collectors are smart, but not perfect; they typically work by tracking references to objects. When all the references to an object go away, that object can be collected. Yet if two objects point to each other, they will always have a nonzero reference count, and never go away. That means code written in JavaScript to run in a browser, or Objective-C to run on a hand-held phone, or ASP to run on a server could very well have a memory leak.
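To see why a cycle defeats reference counting, here is a minimal sketch in C; the node struct and the retain()/release() helpers are invented for this illustration, not taken from any real runtime. Two objects that point at each other keep each other's counts above zero even after the rest of the program has let go of both.

#include <stdio.h>
#include <stdlib.h>

/* A toy reference-counted object; node, retain(), and release()
   are hypothetical names used only for this sketch. */
struct node {
    int refcount;
    struct node *peer;            /* reference to another object */
};

static struct node *retain(struct node *n) { n->refcount++; return n; }

static void release(struct node *n) {
    if (--n->refcount == 0) {
        printf("freed %p\n", (void *)n);
        free(n);
    }
}

int main(void) {
    struct node *a = malloc(sizeof *a);
    struct node *b = malloc(sizeof *b);
    a->refcount = 1; a->peer = NULL;
    b->refcount = 1; b->peer = NULL;

    a->peer = retain(b);          /* a points to b: b's count is now 2 */
    b->peer = retain(a);          /* b points to a: a's count is now 2 */

    release(a);                   /* the program is done with both...  */
    release(b);                   /* ...but each count only drops to 1 */

    /* Neither count ever reaches zero, so "freed" never prints:
       both objects are leaked. */
    return 0;
}

Collectors that trace reachability rather than count references can reclaim cycles like this one, but any reference a program forgets to drop, of whatever kind, keeps an object alive longer than intended.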

When a memory leak happens on a modern operating system, the operating system isn't going to throw an error. Instead, the program will just use more and more memory, eventually spilling over into virtual memory on the disk. When that happens, performance falls apart. People in operations reboot the server and that "seems to fix it"; users close and restart their browser, or reboot their phone.

Consider the Following Example
My first job in software was to create an electronic audit of a telecommunications bill. Some other software, maintained by my coworker Jeff, would read accounts, phones, and calls from a database and then create a text file that could be interpreted by a printer, called a PostScript file. If we were smart, we'd have written the tool in two parts: a first to create a bunch of Excel-like text files, then a second piece to combine those into bills.

One day, about four months in, the President of our 30-person software company came downstairs to my desk and said, "Matt, I want you to take over the telecom bills," which meant both maintenance and new feature development.

The program was written in C. It read a list of accounts and looped through every account. Not every active account, every account, because an inactive account could still have an unpaid balance. We calculated the balance by adding up all the debits and credits through all time. If that amount added up to zero, we could skip the account. That was no big deal when we were helping a new, competitive regional carrier get started, with forty or fifty accounts and billing for five or six months.

The competitive carrier, however, was good at, well, competing. By the time I took over they had eighteen hundred accounts, and would be in the tens of thousands (and three states) before I left. Performance would become a real problem, but not for the reasons I expected.

The first two problems that hit us were a pointer out of range, then a memory leak.

Memory Out of Bounds
The software needed to store a great deal of data in memory for each account (the buildings, phones, calls, costs, and taxes), then spit it out to disk in PostScript format. There are two basic ways to do this in C: either allocate memory with pointers, which is hard to track, or simulate dynamic storage with fixed-size arrays. We didn't have objects, test-driven development wasn't really a thing, and I was inheriting code designed to support a very small customer base. What we did have was arrays. So to support two thousand calls, we could do something like this:

#include <time.h>

struct call {
    char   inOrOutbound;    /* inbound or outbound flag */
    time_t timeStart;
    time_t timeEnd;
    int    billedMinutes;
    double cost;
};

char        buildings[200][200];      /* 200 buildings, 200-character names      */
char        phones[200][200][20];     /* 200 phones per building, number as text */
struct call calls[200][200][1000];    /* up to 1,000 calls per phone              */

What's represented here is up to 200 buildings per account, each of which can have 200 phones, each of which can have up to 1,000 calls. The variable calls[5][10][25] would contain the twenty-fifth call for the tenth phone at the fifth building. Actually, since C arrays are zero-based, it would be the twenty-sixth call on the eleventh phone at the sixth building.

The software was basically a loop: for each account, for each building, for each phone, for each call. The calls needed to be sorted, then the results printed. Everything worked fine until one run produced a memory out of bounds error, probably because the code was trying to access building 201, phone 201, or call 1001. The compiler had range checking built in by default. To get around this, I bumped the sizes of the data structures up as large as possible, which worked until we got an account that was too big even for that.
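Here is a scaled-down, hypothetical sketch of that failure mode; the array is far smaller than the real ones, but the shape is the same: the loop bound comes from the account data while the array size is fixed at compile time, so a big enough account walks right off the end.

#include <stdio.h>

#define MAX_BUILDINGS 200

static double buildingTotal[MAX_BUILDINGS];   /* sized when the program is built */

int main(void) {
    /* In the real program this count came from the database for the
       current account; nothing stops it from exceeding the maximum. */
    int numBuildings = 201;

    for (int b = 0; b < numBuildings; b++)
        buildingTotal[b] = 0.0;   /* b == 200 writes past the end of the array */

    printf("processed %d buildings\n", numBuildings);
    return 0;
}

With range checking turned on, the access at index 200 is where the error fires; with it turned off, the write silently lands in whatever memory happens to sit after the array, which is presumably what the extra "empty" data structures described next were there to absorb.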

Then I turned off range checking and declared a large number of empty data structures, matching the originals in type, right below the originals. That is an incredibly foolish thing to do, but it worked, because the data structures were built into the compiled program (the ".exe" file) and were re-used for every account. Our little program declared all the memory it needed once, at the beginning, and used that.

Until we found the system slowing to a crawl on big accounts, like it was choking.

Enter The Memory Leaks
The data structures above are an over-simplification. We also needed to store the addresses of the buildings, the payments since the last bill, and so on. Like the examples above, some of these were declared as variables compiled into the program, "on the stack." Others were grabbed at run time, using the C library function malloc() and its friend free(). Instead of declaring an array, we'd declare a character pointer, then malloc() some space at that pointer. When the function was done, we could free() that memory.

If we remembered to do that.

As you've probably guessed, we had too many calls to malloc() and not enough calls to free(). That meant memory use slowly crept up as the software ran. That's fine for fifty or even five hundred accounts; the memory would be reclaimed when the program finished running. But we started to get some very big accounts. I noticed that one big account would run, and performance would go down. The kicker, for me, was an out of memory error for our user. I went to Rich, the sysadmin, and asked for more memory for the user. He suggested I tune the program instead. After carefully poring over the code, I found some malloc() calls that weren't being freed. After adding the calls to free(), the program did eventually finish. That account still took too long, but that's enough for today.
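The bug and the fix both look roughly like this hedged sketch; print_address() and its details are invented, but the pattern is the one above: every malloc() needs a free() on every path out of the function, including the early returns.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-account work: build a mailing address as a
   heap-allocated string, use it, then release it. */
static int print_address(const char *street, const char *city) {
    size_t len = strlen(street) + strlen(city) + 3;   /* ", " plus the NUL */
    char *address = malloc(len);
    if (address == NULL)
        return -1;

    snprintf(address, len, "%s, %s", street, city);

    if (strstr(address, "PO Box") != NULL) {
        free(address);            /* the call we kept forgetting; without it, */
        return -1;                /* every early exit leaked this buffer      */
    }

    /* ... write the address into the PostScript output ... */

    free(address);                /* the matching free() on the normal path */
    return 0;
}

int main(void) {
    /* One pass per account; a few leaked buffers per account are invisible
       at fifty accounts and crippling at tens of thousands. */
    for (int account = 0; account < 1800; account++)
        print_address("100 Main St", "Grand Rapids");
    return 0;
}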

You might see how I struggled, but you might not see what that has to do with modern software delivery. Allow me to share just a bit more.

The Lesson for Today
These problems are silent killers, and fixing them is no longer as easy as searching through a program making sure every malloc() has a matching free().

If you write embedded medical software, or avionics, or anything else in C, you are probably very familiar with these problems. If you don't, you may not be familiar with them, but the problems are still there.

If your application degrades over time, the problem might just be memory leaks. The next logical question is how to find them.
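For native code, one place to start is a leak checker. The toy program below leaks on purpose; run under a tool such as valgrind (for example, valgrind --leak-check=full ./a.out) the lost block is reported along with the call stack that allocated it. This is a sketch to show the idea, not an endorsement of any particular tool chain.

#include <stdlib.h>

int main(void) {
    char *forgotten = malloc(100);   /* allocated...                        */
    forgotten = NULL;                /* ...and the only pointer to it lost, */
    return 0;                        /* so the block can never be freed     */
}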


