Case Study: Accelerate - Academic Research | @CloudExpo @DDN_limitless #Cloud #Storage

UCL transforms research collaboration and data preservation with scalable cloud object storage appliance from DDN

University College London (UCL), consistently ranked among the top five universities in the world, is London's leading multidisciplinary university, with more than 10,000 staff, over 26,000 students, and more than 100 departments, institutes and research centers. With 25 Nobel Prize winners and three Fields Medalists among UCL's alumni and staff, the university has attained a world-class reputation for the quality of its teaching and research across the academic spectrum.

As London's premier research institution, UCL has 5,000 researchers committed to applying their collective strengths, insights and creativity to overcome problems of global significance. The university's innovative, cross-disciplinary research agenda is designed to deliver immediate, medium and long-term benefits to humanity. UCL Grand Challenges, which encompass Global Health, Sustainable Cities, Intercultural Interaction and Human Wellbeing, are a central feature of the university's research strategy.

According to Dr. J. Max Wilkinson, Head of Research Data Services for the UCL Information Services Division, sharing and preserving project-based research results is essential to the scientific method. "I was brought in to provide researchers with a safe and resilient solution for storing, sharing, reusing and preserving project-based data," he explains. "Our goal is to remove the burden of managing project data from individual researchers while making it more available over longer periods of time."

The Challenge
The opportunity to improve the sharing of and access to project-based research presented several unique technical and cultural challenges. On the technical side, the team had to accommodate many different types of data, growing in both volume and velocity. In some cases, a small amount of data is so valuable to a research team that six discrete copies were retained on separate USB drives or removable hard drives kept in different locations. In other instances, UCL researchers produce copious amounts of very well-defined data that pass between the compute algorithms on which their research rests.

In addition to solving technical problems, the research data services team was faced with the opportunity to support researchers in a new 'data-intensive' world by making it safe and easy to follow best practices in data management and to use best-in-class storage solutions. "We discovered the valuable data underpinning most research projects were stuck on a hard drive or disk, never to be seen again," adds Wilkinson. "If we could provide a framework over which people could share and preserve data confidently, we could minimize this behavior and improve research by making the scholarly record more complete."

To accomplish this, UCL needed to provide an enterprise-class foundation for data manipulation that met the needs of its diverse user community. While some researchers thought 100GB was a large amount of data, others clamored for more than 100TB to support a particular project. There was also an expectation that up to 3,000 individuals from UCL's total base of 5,000 active researchers and collaborators would require services within the next 18 to 24 months.

"We had a simple services proposition that would eliminate the need for research teams to manage racks of servers and data storage devices," says Wilkinson. "Of course, this meant we'd need a highly scalable storage infrastructure that could grow to 100PB without creating a large storage footprint or excessive administrative overhead."
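
The figures quoted above make the scale of the problem concrete. A back-of-envelope projection, using the case study's per-project range (100GB to 100TB) and user base (up to 3,000 active users), shows how aggregate demand quickly reaches petabyte scale. The usage mix and replication factor below are purely illustrative assumptions, not UCL's actual numbers:

```python
# Back-of-envelope capacity projection using the figures quoted in the
# case study (per-project sizes from 100 GB to 100 TB, up to 3,000
# active users). The mix percentages and replication factor are
# illustrative assumptions only.

TB = 1          # work in terabytes
PB = 1000 * TB  # 1 PB = 1000 TB (decimal, as storage vendors quote it)

users = 3000
# Assumed usage mix: most projects are small, a few are very large.
mix = {
    0.1 * TB: 0.80,  # 80% of users need ~100 GB
    10 * TB: 0.15,   # 15% need ~10 TB
    100 * TB: 0.05,  # 5% need ~100 TB
}

raw_demand = sum(size * share * users for size, share in mix.items())

# Add headroom for resilience (assume two copies of everything).
replicas = 2
total = raw_demand * replicas

print(f"Projected raw demand: {raw_demand / PB:.1f} PB")
print(f"With {replicas}x replication: {total / PB:.1f} PB")
```

Even under these modest assumptions the service lands in the tens of petabytes, which is why a ceiling of 100PB was part of the requirement rather than a distant hypothetical.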

Additionally, they had to address long-term data retention needs that extended well beyond the realm of research projects. UCL, along with many other UK research-intensive institutions, is faced with increasingly stringent requirements for the management of project data outputs by UK Research Councils and other funding bodies in the United Kingdom. As grant funding in the UK supports best practice, it was critical to have a proven data management plan that documented how UCL would preserve data, sometimes for decades, while ensuring maximum appropriate access and reuse by third parties.

The Solution
In seeking a scalable, resilient storage foundation, UCL issued an RFP to solicit insight on different approaches for consolidating the university's research data storage infrastructure. Each of the 21 RFP respondents was asked to provide examples of large-scale deployments, which produced far-ranging answers, including how providers addressed sheer data volume, reduced environment complexity, or delivered overarching data management frameworks.

UCL's RFP covered a diverse set of requirements to determine each potential solution provider's respective strengths and limitations. "We asked for more than we thought possible from a single vendor: from synchronous file sharing to a high-performance parallel file system, to highly scalable, resilient storage that would be simple to manage," notes Daniel Hanlon, Storage Architect for Research Data Services at University College London. "We wanted to cover our bases while determining what was practical and doable for researchers."

Recommendations encompassed a broad storage spectrum, including NAS, SAN, HSM, object storage, asset management solutions and small amounts of spinning disks with lots of back-end tape. "Because we had such broad requirements, we omitted any vendor that was bound to a particular hardware platform," explains Wilkinson. "It was important to be both data and storage agnostic so we would have the flexibility to support all data and media types without being locked into any particular hardware platform."

With its ability to support virtually unlimited scalability, object storage appealed to UCL, especially since it also would be much easier to manage than alternatives. Still, object storage was seen as a relatively new technology and UCL lacked hands-on experience with large-scale deployments within the university's ecosystem. In addition to evaluating the different technologies, UCL also assessed each provider's understanding of their environment, as it was critically important to accommodate UCL's researcher requirements in order to drive acceptance. "Some of the RFP respondents didn't understand the difference between the corporate and academic worlds, and the fact that universities by nature generally have to avoid being tied into particular closed technologies," adds Hanlon. "Many of the RFP respondents were eliminated, not because of their technical response, but because they didn't really get what we were trying to do."
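
The scalability appeal comes from object storage's model: instead of a hierarchical directory tree, objects live in a flat namespace, addressed by globally unique IDs and carrying their own metadata. The sketch below illustrates that general model only; it is not the DDN WOS API, and the class and field names are invented for illustration:

```python
import uuid

# Minimal sketch of object-store semantics: a flat namespace of objects
# addressed by globally unique IDs, each with attached metadata. This
# illustrates the general object-storage model, not any vendor's API.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # object ID -> (data, metadata)

    def put(self, data: bytes, metadata: dict) -> str:
        """Store an object and return its globally unique ID."""
        oid = str(uuid.uuid4())
        self._objects[oid] = (data, dict(metadata))
        return oid

    def get(self, oid: str) -> bytes:
        """Retrieve an object's data by ID."""
        return self._objects[oid][0]

    def metadata(self, oid: str) -> dict:
        """Retrieve an object's metadata by ID."""
        return self._objects[oid][1]

store = ObjectStore()
oid = store.put(b"gene expression matrix",
                {"project": "grand-challenge-health", "owner": "researcher-a"})
assert store.get(oid) == b"gene expression matrix"
```

Because there is no directory hierarchy to rebalance or traverse, such a store can grow by simply adding nodes, which is what makes the "virtually unlimited scalability" claim plausible at 100PB scale.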

As a result, the universe of prospective solutions was reduced to a half-dozen recommendations. As the team took a closer look at the finalists, they considered each vendor's academic track record, ability to scale without overburdening administrators and experience with open-source technology. "We wanted to work with a storage solutions provider that took advantage of open-source solutions," Hanlon notes. "This would enable us to partner with them and also with other academic institutions trying to do similar things."

In the final analysis, UCL wanted a partner with equal enthusiasm for freeing researchers from the burden of data storage so they could maximize the impact of their projects. "We were very interested in building a relationship with a strong storage partner to fill our technology gap," says Wilkinson. "After a thorough assessment, DataDirect™ Networks (DDN) met our technical requirements and shared our data storage vision. In evaluating DDN, we agreed that their solution had a simple proposition, high performance and low administration overhead."

The proposed solution, which included the GRIDScaler massively scalable parallel file system and Web Object Scaler (WOS), also provided the desired scalability and management simplicity. Another plus for WOS storage was its tight integration with the integrated Rule-Oriented Data System (iRODS). This open-source solution is ideally suited for research collaboration, making it easier to organize, share and find collections of data stored in local and remote repositories.
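
What iRODS contributes is a logical layer over the physical storage: data objects are grouped into collections and tagged with attribute/value metadata that can be queried regardless of where the bytes physically live. The pure-Python sketch below illustrates that collection-plus-metadata model only; it is not the iRODS API (real deployments would use the icommands or the python-irodsclient package), and the paths and tags are invented examples:

```python
from collections import defaultdict

# Sketch of the logical-catalog model iRODS provides: data objects are
# registered under logical paths and tagged with attribute/value
# metadata, then found by querying the tags rather than walking
# directories. Illustration only; not the actual iRODS API.

catalog = defaultdict(list)  # "attribute=value" -> list of logical paths

def register(logical_path: str, metadata: dict) -> None:
    """Register a data object under its logical path with its tags."""
    for attr, value in metadata.items():
        catalog[f"{attr}={value}"].append(logical_path)

def query(attr: str, value: str) -> list:
    """Find all data objects carrying the given attribute/value tag."""
    return catalog[f"{attr}={value}"]

register("/uclZone/home/alice/trial_042.csv",
         {"project": "sustainable-cities", "year": "2014"})
register("/uclZone/home/bob/scan_007.nii",
         {"project": "global-health", "year": "2014"})

print(query("project", "global-health"))
```

A collaborator who knows only the project tag can locate the data without knowing which researcher's storage, or which physical repository, holds it; that decoupling is what makes the framework suitable for sharing and long-term preservation.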

"It was important that DDN's solution gave us multiple ways to access the same storage, so we could be compatible with existing application codes," says Hanlon. "The tendency with other solutions was to give us bits of technology that had been developed in different spaces and that didn't really fit our problem."

The Benefits
During a successful pilot implementation involving a half-petabyte of storage, UCL gained first-hand insight into the advantages of DDN's turnkey distributed storage and collaboration solution. "The main attraction of DDN WOS is the combination of an efficient object store with edge appliances to ease integration with other storage infrastructure," says Hanlon. Another big plus for UCL is DDN's high-density storage capacity, which enables fitting far more disks into existing storage racks; that density is crucial to growing capacity while maintaining a small footprint in UCL's highly congested, expensive central London location.

As researchers are often reluctant to give up control of their data storage solutions, the team also has been pleased to discover early adopters who see the value of using the new service to protect and preserve current data assets. In fact, the new research data service already is getting high marks for performance reliability, data durability, data backup and disaster recovery capabilities.

UCL predicts that as traction for the new service increases, there will be greater interest in leveraging it to further extend how current research is reused and exploited to drive more impactful outcomes. By taking this innovative approach, the UCL Research Data Services team is embracing the open data movement while enlisting leading-edge technologies to deliver reliable, flexible data access that maximizes appropriate sharing and re-use of research data.

Additionally, with its plans to add a scalable archive to its dynamic storage service offering, UCL is taking researchers' worry about meeting increasingly stringent expectations from funding organizations out of the storage equation. "We'll be able to tell researchers that if they use our services, they'll be compliant with UCL, UK Research Council and other UK and international funding bodies' policies and requirements," Wilkinson says. "They won't have to worry about it because we will."

By providing a framework over which UCL researchers can store and share data confidently, UCL expects to achieve significant bottom-line cost savings. Early projections around the initial phase of the infrastructure build-out are upwards of hundreds of thousands of UK pounds, simply by eliminating the need for thousands of researchers to procure and maintain their own storage hardware. "DDN is empowering us to deliver performance and cost savings through a dramatically simplified approach; in doing so we support UCL researchers, their collaborators and partners to maintain first-class research at London's global university," concludes Wilkinson. "Add in the fact that DDN's resilient, extensible storage solution provided evidence of seamless expansion from a half-petabyte to 100PB, and we found exactly the foundation we were looking for."

More Stories By Pat Romanski

News Desk compiles and publishes breaking news stories, press releases and latest news articles as they happen.

