Why You Need an Artifact Repository for Continuous Delivery
By Ron Gidron

We explain why tools like Maven use an artifact repository, and why anyone designing a continuous delivery process should too.

Both prospects and customers often ask me why we need an artifact repository. Some think that because their favorite CI (Continuous Integration) server, such as Jenkins, already stores the output of each build, there's no need to add an artifact repository to their existing toolchain. Others simply wonder why they need such a repository at all.

In this blog post I'll discuss why it's essential for any continuous delivery and deployment project to version everything, and why artifact repositories such as Artifactory or Nexus are great choices for managing binary and other artifacts.

What repositories do
Let's start with some basics: artifact repositories manage collections of artifacts (binaries or, really, any type of file) and their metadata in a defined directory structure. They are typically used by software build tools such as Maven (in the Java world) as the place to retrieve and store needed artifacts. But there is really no limit to what you can store in an artifact repository. Some examples:

  • Any type of binary
  • Source archives
  • Flash archives
  • Documentation bundles
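
To make that "defined directory structure" concrete, here is roughly how a Maven-style repository lays out a single artifact on disk; the group, artifact, and version names below are hypothetical:

    com/example/app/my-service/1.4.2/my-service-1.4.2.jar
    com/example/app/my-service/1.4.2/my-service-1.4.2.pom
    com/example/app/my-service/1.4.2/my-service-1.4.2.jar.sha1

The path is derived directly from the artifact's coordinates (group, artifact ID, version), which is what makes lookups so predictable.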

Why use a repository?
Artifact repositories are great at managing multilevel dependencies, much better than the old text file with a list that developers update and maintain by hand. This dependency management is critical for reducing errors and ensuring the right pieces make it into each build/deployment/release, especially in large-scale business applications.
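
For example, a dependency report from a build tool might show a chain like the one below (the artifact names are made up); the repository resolves and fetches every level automatically rather than relying on anyone to keep that list current:

    com.example.app:my-service:1.4.2
    +- org.springframework:spring-core:4.3.9
    |  \- commons-logging:commons-logging:1.2
    \- com.google.guava:guava:21.0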

Repositories also support the notion of snapshot and release versions, where snapshots are intermediate versions of an artifact (usually marked with a date and timestamp attached to the version number) and release versions are those marked for "official" release. The metadata that describes each artifact and its dependencies is also great for governance and security.
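
In Maven's convention, for instance, a snapshot version carries a -SNAPSHOT suffix that the repository typically expands into a timestamped file name on upload, while a release version is immutable once published. The names here are illustrative:

    my-service-2.0.0-SNAPSHOT.jar  ->  my-service-2.0.0-20170608.142355-3.jar  (intermediate build)
    my-service-2.0.0.jar                                                       (official release)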

How repositories work
Artifact repositories use a standard addressing mechanism for accessing artifacts, which really simplifies automation. It also makes it easy to parameterize searching for and retrieving versioned artifacts, often through a REST call whose URL maps directly onto the directory structure... OK, now I'm geeking out more than is necessary for this blog entry!
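
As a minimal sketch of that addressing scheme, the following Python snippet downloads an artifact from a Maven-layout repository over HTTP; the repository URL and coordinates are hypothetical and would need to match your own setup:

    import requests

    # Hypothetical repository base URL and artifact coordinates.
    REPO = "https://repo.example.com/releases"
    GROUP, ARTIFACT, VERSION = "com.example.app", "my-service", "1.4.2"

    # Maven-layout repositories map coordinates onto a predictable path:
    #   <group as path>/<artifact>/<version>/<artifact>-<version>.jar
    url = (f"{REPO}/{GROUP.replace('.', '/')}"
           f"/{ARTIFACT}/{VERSION}/{ARTIFACT}-{VERSION}.jar")

    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail loudly if the artifact is missing

    with open(f"{ARTIFACT}-{VERSION}.jar", "wb") as f:
        f.write(response.content)

Because the URL is fully determined by the coordinates, the same three parameters can drive an automated deployment step with no manual lookup.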

Basically, if you're designing a continuous delivery and automated deployment process for an application, a department, or even your entire IT landscape, we highly recommend you take a look at an artifact repository and make sure to version everything.

More Stories By Automic Blog

Automic, a leader in business automation, helps enterprises drive competitive advantage by automating their IT factory - from on-premise to the Cloud, Big Data and the Internet of Things.

With offices across North America, Europe and Asia-Pacific, Automic powers over 2,600 customers including Bosch, PSA, BT, Carphone Warehouse, Deutsche Post, Societe Generale, TUI and Swisscom. The company is privately held by EQT. More information can be found at www.automic.com.
