Amazon Dash Embodies the Stackato Vision: Part 2
By @BernardGolden

I recently wrote about how Amazon's new Dash service embodies the Stackato vision; in that post I discussed how Dash reflects the Business Agility portion of our vision. As the image to the left shows, Dash is a service that offers a small button; when pressed, magic happens and an order is placed for the product (in this example, Tide detergent), which is eventually delivered to the customer.

Dash presents a very different model of retailing, changing the purchase execution from a physical retail outlet or an online website to the very location of product consumption. With this service, Amazon boxes out its retail competitors by offering a more convenient and immediate transaction. I think Dash is a really interesting and innovative offering that foreshadows the enormous changes the Third Platform portends.

In this post, I'd like to discuss the technical underpinnings of the Dash offering and how the Dash architecture aligns almost perfectly with what we've created in Stackato.

I analyzed the Dash service to understand what it would take, from a technical perspective, to begin with someone pressing a small magnetized button and end with a product showing up on the button presser's doorstep. In effect, I reverse engineered the service to understand its architecture and the components that would be required to execute that transaction.

As you can see from Figure 1 below, the Dash architecture comprises four tiers, each of which has multiple components.

From left to right, the four tiers are:

  • Customer tier: this is where the purchase transaction occurs. The notion of a button push isn't quite accurate, as Dash also supports use of a wand that can scan or accept voice input (e.g., "Buy a roll of Saran Wrap"). In the future, the Dash service will be built into physical devices, which will trigger purchase actions when needed products run low. In the figure, these devices are represented by a washer/dryer; as the washer runs low on soap powder, the washer will order more. It doesn't take a genius to recognize the power of a product being automatically ordered without human intervention -- it practically makes the default product choice (e.g., Tide) a lock-in, with very low probability of the consumer purchasing another product. Dash also supports a mobile app that can be used to review orders and approve or remove specific items.
  • Event tier: this tier is where the Dash orders are captured by the service's back end. A number of steps take place to process each order event to ensure the Dash application operates correctly. More on this below.
  • Application tier: this tier executes the Dash logic - managing orders, triggering ecommerce transactions, and enabling partners to interact with the Dash system.
  • Ecommerce tier: This tier represents the Amazon retail offering, where orders are accepted, payments triggered, and products shipped.

Let's look at each tier in turn.

Customer Tier
In this tier, a purchase transaction begins at one of the Dash client devices - a button, wand, or embedded hardware device (e.g., a washer). Each transaction is captured in a quite small packet of information - probably a few hundred bytes; all it needs to contain is the Amazon product ID and a purchase indicator flag.

However, the Dash wand also allows items to be ordered by voice, which complicates things in that a voice file must be captured and submitted (and, of course, ultimately undergo transcription and validation). This aspect of orders will be discussed in the Event Tier section, which follows.

In terms of how order events are communicated, I believe that the foundation of the device/application communication is the AWS Kinesis service, which is designed to be a very high-performance, scalable, real-time event processing service. Kinesis offers a client-side library that can be embedded into a device's firmware; this library formats event submission packets in Kinesis format.

For orders submitted via voice input on the wand device, the event would probably have a flag set to indicate that a recording of the order has been stored in an S3 object.

No matter what type of device submits an order, then, an event is submitted to Kinesis, which resides in the Event Tier; for voice-based orders, the accompanying recording is stored as an object in an S3 bucket. This portion of the Dash service is indicated by the number 1 in Figure 1.
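To make this flow a bit more concrete, here is a minimal sketch (in Python, using boto3) of what a device-side order submission might look like. The stream name, the payload fields, and the way a voice recording is referenced by its S3 key are all my assumptions; a real Dash device would use the embedded Kinesis client library rather than boto3.

```python
import json
import time

import boto3  # for illustration only; a device would use the embedded Kinesis client library

kinesis = boto3.client("kinesis", region_name="us-east-1")

def submit_order_event(product_id, customer_id, voice_s3_key=None):
    """Build and submit a small order event; all field names are hypothetical."""
    event = {
        "product_id": product_id,                 # the Amazon product ID
        "customer_id": customer_id,
        "purchase": True,                         # the purchase indicator flag
        "timestamp": int(time.time()),
        "voice_recording_s3_key": voice_s3_key,   # set only for wand voice orders
    }
    kinesis.put_record(
        StreamName="dash-orders",                 # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),   # a few hundred bytes at most
        PartitionKey=customer_id,                 # spreads customers across shards
    )

# Example: a button press for Tide (the product ID is made up)
submit_order_event(product_id="tide-detergent-150oz", customer_id="customer-1234")
```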

However, the Client Tier offers more functionality than submitting orders for items. The Dash mobile app allows customers to review orders and remove items which are unwanted or perhaps were ordered by accident. If orders are reviewed in this manner, approved items flow back into the system to eventually go through the ecommerce process.

The Client Tier, while quite intriguing, is actually relatively straightforward. The buttons, wands, and appliances are all transaction devices that communicate single item orders which can be transmitted in small packets and submitted to the Dash Kinesis service.

Event Tier
By contrast, the Event Tier functionality is sophisticated and relatively complex; moreover, I believe it takes advantage of a number of innovative AWS services.

As previously stated, order events are submitted to the Dash Kinesis service. Kinesis is designed to consume vast numbers of events in real-time. It does not, however, do much of anything with those events (it stores them for 24 hours, then discards those that remain in the service).

Kinesis streams (as the event recipients are known) allow programs to be attached to the stream; these programs perform operations upon the events to allow them to be processed. AWS recommends that these programs be simple, with little actual processing; instead, the event information should be extracted and then passed along to another AWS service, which can operate on the events without the real-time constraints that apply within Kinesis.

Common techniques associated with additional processing include placing the event information into an SQS queue or storing the information in DynamoDB.

I believe the Event Tier uses the latter technique, as indicated by the number 2 in the Figure. As the program attached to the Kinesis stream receives each order event, it stores it in DynamoDB; for those order events which have voice files associated with them, the DynamoDB record contains a pointer to the S3 bucket that holds the voice file.
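As a rough illustration of that pattern, a stream-side record processor might look something like the sketch below. The records are assumed to be in the shape the Kinesis GetRecords API returns, and the table name, key scheme, and status value are hypothetical; the point is simply "do as little as possible on the stream side and persist to DynamoDB."

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders_table = dynamodb.Table("dash-orders")  # hypothetical table name

def process_records(records):
    """Persist each order event to DynamoDB for later, less time-bound processing."""
    for record in records:
        event = json.loads(record["Data"])  # the payload written by the device
        item = {
            "order_id": "{}#{}".format(event["customer_id"], event["timestamp"]),  # hypothetical key
            "customer_id": event["customer_id"],
            "product_id": event["product_id"],
            "order_timestamp": event["timestamp"],
            "status": "awaiting_review",
        }
        # Voice orders carry a pointer to the S3 object holding the recording
        if event.get("voice_recording_s3_key"):
            item["voice_recording_s3_key"] = event["voice_recording_s3_key"]
        orders_table.put_item(Item=item)
```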

At the November AWS re:Invent conference, Amazon announced a new service called Lambda. Lambda allows code segments to be attached to certain AWS services, with the code segment executed when a state change occurs in the AWS service. One of the AWS services which can be so configured with Lambda code segments is DynamoDB.

In the Dash service, when an order event is extracted from Kinesis, it is inserted into DynamoDB. In turn, that state change of insertion calls a Lambda code segment, which extracts an associated voice file (if submitted via a Dash wand voice command) and then calls into the Dash Application Tier for order filtering and processing.
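Here is a hedged sketch of what such a Lambda function might look like. The DynamoDB Streams event shape (Records, eventName, NewImage) is what Lambda actually delivers, but the bucket name, the Application Tier endpoint, and the transcription step are placeholders of my own; none of this is Amazon's published design.

```python
import json
import urllib.request

import boto3

s3 = boto3.client("s3")

# Hypothetical Application Tier endpoint; the real one is unknown
APPLICATION_TIER_URL = "https://dash-app.example.com/orders"

def handler(event, context):
    """Invoked via DynamoDB Streams whenever the orders table changes."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        # NewImage is in DynamoDB attribute-value format, e.g. {"product_id": {"S": "..."}}
        order = record["dynamodb"]["NewImage"]
        voice_key = order.get("voice_recording_s3_key", {}).get("S")
        if voice_key:
            # Pull the voice recording so it can be transcribed and validated
            s3.get_object(Bucket="dash-voice-orders", Key=voice_key)  # bucket name assumed
        # Hand the order to the Application Tier for filtering and processing
        request = urllib.request.Request(
            APPLICATION_TIER_URL,
            data=json.dumps(order).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(request)
```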

In addition to extracting the orders from DynamoDB and submitting them to the Application Tier, the order events are placed into an AWS EMR system to facilitate Dash analytics.

While this description seems simple, the reality is anything but. In fact, in my opinion, the Event Tier is the heart of the Dash service. Most people underestimate the complexity of dealing with events in real-time, and in the case of Dash that is especially easy to do. First, it must be understood that the offering as it stands today is only the beginning. Dash clients can reside in any number of devices - buttons are only the start, as the client could be placed in all sorts of things, including product packaging, storage containers (e.g., pantries), and so on.

Moreover, the magnitude of the Dash service is easy to underestimate, as is true of all IoT applications. Eventually, Dash could be processing tens of millions of events each day. While the service certainly doesn't handle anywhere near that volume now, Amazon had to architect the system so that it could accommodate that sort of future load.

It certainly helps that Amazon was able to rely on existing highly scalable services like DynamoDB and EMR (and, indeed, Kinesis). Nevertheless, it had to validate functionality and performance at levels that might not be reached for several years; undoubtedly, Amazon ran load and stress tests with enormous submission volumes to ensure acceptable performance in the future.

Dash Application Tier
The Application Tier is where the service's logical operations are performed. One of the key operations is to filter submitted events. It is possible that multiple orders could be accidentally submitted via a Dash client (e.g., by a delighted toddler fascinated by pressing a colorful Dash button); obviously, if someone were to receive 20 (or 200!) orders of Tide, that would reduce the value of the service and cause people to terminate its use. Since part of the motivation for rolling out the Dash service is to allow Amazon to develop new competitive mechanisms against companies like Walmart, anything that might reduce user satisfaction has to be avoided.

This is why the Dash service automatically rejects multiple orders. Moreover, it won't accept even a single order if another order is already in process with the product not yet delivered to the customer. Consequently, the Dash application has to filter events to remove duplicate or premature orders.

The simplest way to accomplish this is to allow the Dash client to submit multiple orders and remove them at the back end. It is likely that the Dash application accomplishes this by inspecting every event submitted and discarding inappropriate ones.

From a technical perspective, I expect that Amazon again relies on the AWS Lambda service described earlier. Dash most likely attaches Lambda event-filtering code to the Dash DynamoDB storage system: every event inserted by the Kinesis event consumer is inspected for validity, and inappropriate Dash orders are removed from DynamoDB. This filtering process is represented by the number 3 in the Figure above.
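The filtering logic itself need not be large. Here is a sketch of a duplicate/premature-order check as I imagine it, reusing the hypothetical dash-orders table, key scheme, and status values from the earlier sketches; the index name is also an assumption.

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("dash-orders")  # hypothetical table name

def filter_order(order):
    """Discard an order if another order for the same customer and product is still open."""
    existing = orders_table.query(
        IndexName="customer-orders-index",  # hypothetical GSI keyed on customer_id
        KeyConditionExpression=Key("customer_id").eq(order["customer_id"]),
        FilterExpression=(
            Attr("product_id").eq(order["product_id"])
            & Attr("status").is_in(["awaiting_review", "submitted"])  # in process, not yet delivered
        ),
    )
    duplicates = [item for item in existing["Items"] if item["order_id"] != order["order_id"]]
    if duplicates:
        # A duplicate or premature order: remove it rather than ship 20 boxes of Tide
        orders_table.delete_item(Key={"order_id": order["order_id"]})
        return False
    return True
```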

Valid orders are left in DynamoDB for a period of time so that, should the customer wish to delete one, it is possible to do so via the Dash Mobile App, represented by the number 4 in the Figure.

Once an appropriate period of time has passed, making it likely that the customer actually did want to order the product, the Dash application's Order Processing takes place. It pulls transactions from the Dash DynamoDB table and submits them (number 7 in the Figure) to the Amazon ecommerce engine, which then executes standard order processing and shipment, eventually resulting in the customer receiving the product.
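A periodic order-processing job along those lines might be as simple as the following sketch; the 30-minute review window, the table and attribute names, and the submit_to_ecommerce hand-off are all assumptions on my part, not anything Amazon has described.

```python
import time

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("dash-orders")  # hypothetical table name
REVIEW_WINDOW_SECONDS = 30 * 60               # assumed 30-minute review window

def process_pending_orders(submit_to_ecommerce):
    """Submit orders whose review window has elapsed and that the customer did not remove."""
    cutoff = int(time.time()) - REVIEW_WINDOW_SECONDS
    pending = orders_table.scan(
        FilterExpression=Attr("status").eq("awaiting_review") & Attr("order_timestamp").lt(cutoff)
    )
    for order in pending["Items"]:
        submit_to_ecommerce(order)  # hand the order to the Amazon ecommerce engine
        orders_table.update_item(
            Key={"order_id": order["order_id"]},
            UpdateExpression="SET #s = :submitted",
            ExpressionAttributeNames={"#s": "status"},  # "status" is a DynamoDB reserved word
            ExpressionAttributeValues={":submitted": "submitted"},
        )
```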

The Application Tier is also where the Dash analytics are leveraged to share information with Dash partners - the product companies whose goods are ordered via Dash. As my earlier piece discussed, the importance of these analytics should not be underestimated. Dash can provide Dash product partners with enormous insight about buyer behavior, which is tremendously valuable for their businesses. Knowing that products are, say, more frequently ordered mid-week than on the weekend provides a lot of knowledge that can be used in product planning and promotion. To this point, Dash analytics have barely been mentioned, but you can bet that we'll hear a lot more about them in the future.

Ecommerce Tier
In the Figure, the Ecommerce Tier is deceptively simple: all that resides there is Amazon. Make no mistake, though - the Amazon ecommerce capability is enormously complex in its own right, yet from the Dash application's perspective it is quite simple.

Amazon's ecommerce system is, perhaps, the most sophisticated online sales system extant. Amazon's ecommerce operation is a finely-tuned amalgam of supply chain coordination, software development and operation, retail purchasing and promotion, not to mention human resource management.

From the Dash perspective, however, all of that can be a black box. The Dash application needs to know nothing about any of those capabilities; once it recognizes that it is time to submit an order, all it needs to do is call an API with a small amount of information - product, customer name, and perhaps order date and time. Amazon ecommerce takes care of the rest.
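That hand-off might, in its entirety, look something like the sketch below. The endpoint and payload are purely illustrative - Amazon's internal ordering API is not public - but it conveys how little the Dash application needs to carry across that boundary. This is also the submit_to_ecommerce callable assumed in the Order Processing sketch above.

```python
import json
import urllib.request

# Purely illustrative endpoint; Amazon's internal ordering API is not public
ECOMMERCE_ORDER_URL = "https://ecommerce.internal.example.com/orders"

def submit_to_ecommerce(order):
    """Hand a validated Dash order to the ecommerce engine with a minimal payload."""
    payload = {
        "product_id": order["product_id"],
        "customer_id": order["customer_id"],   # or customer name, per the text above
        "ordered_at": order["order_timestamp"],
    }
    request = urllib.request.Request(
        ECOMMERCE_ORDER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # a 2xx here means the order is now the ecommerce engine's concern
```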

Conclusion
Amazon Dash represents the future of Third Platform applications. It leverages extremely capable infrastructure and services and welds them together with application-specific functionality. For companies wishing to become Third Platform organizations, the message is clear:

  • Identify your opportunity,
  • Define the application to address it,
  • Design the application to leverage existing services that are available to reduce development effort and accelerate rollout,
  • Create a new offering that delivers distinctive differentiation compared to competitors in the market.

Our Stackato PaaS product is designed to address these situations perfectly - our product motto is "enabling the future of applications." Stackato is the perfect tool to implement the Dash-specific functionality - the systems that reside in the Dash Application tier.

No matter what kind of company you are, you must be on the lookout for opportunities to create new offerings that leverage the Third Platform. Doing nothing is not an option; worse, it's a recipe for obsolescence and failure.

In my next blog post, I'll discuss the Dash application and the lessons it holds for a company seeking to avoid the "obsolescence and failure" path. I'll talk about what steps you should take to create a Dash-like offering, and how you can ensure you stay on the right path and avoid straying into unsuccessful detours. In the meantime, if you haven't taken the time to learn about the Stackato vision, watch the video here.

More Stories By Bernard Golden

Bernard Golden has vast experience working with CIOs to incorporate new IT technologies and meet their business goals. Prior to joining ActiveState, he was Senior Director, Cloud Computing Enterprise Solutions, for Dell Enstratius. Before joining Dell Enstratius, Bernard was CEO of HyperStratus, a Silicon Valley cloud computing consultancy that focuses on application security, system architecture and design, TCO analysis, and project implementation. He is also the Cloud Computing Advisor for CIO Magazine and was named a "Top 50 Cloud Computing Blog" by Sys-Con Media. Bernard's writings on cloud computing have been published by The New York Times and the Harvard Business Review and he is the author of Virtualization for Dummies, Amazon Web Services for Dummies and co-author of Creating the Infrastructure for Cloud Computing. Bernard has an MBA in Business and Finance from the University of California, Berkeley.
