Amazon Dash Embodies the Stackato Vision: Part 2 By @BernardGolden | @DevOpsSummit [#DevOps]

I reverse engineered the service to understand its architecture & components that would be required to execute that transaction

I recently wrote about how Amazon's new Dash service embodies the Stackato vision; in that post I discussed how Dash reflects the Business Agility portion of our vision. As the image to the left shows, Dash is a service that offers a small button; when pressed, magic happens and an order is placed for the product (in this example, Tide detergent), which is eventually delivered to the customer.

Dash presents a very different model of retailing, changing the purchase execution from a physical retail outlet or an online website to the very location of product consumption. With this service, Amazon boxes out its retail competitors by offering a more convenient and immediate transaction. I think Dash is a really interesting and innovative offering that foreshadows the enormous changes the Third Platform portends.

In this post, I'd like to discuss the technical underpinnings of the Dash offering and how the Dash architecture aligns almost perfectly with what we've created in Stackato.

I analyzed the Dash service to understand what it would take, from a technical perspective, to begin with someone pressing a small magnetized button and end with a product showing up on the button presser's doorstep. In effect, I reverse engineered the service to understand its architecture and the components that would be required to execute that transaction.

As you can see from Figure 1 below, the Dash architecture comprises four tiers, each of which has multiple components.

From left to right, the four tiers are:

  • Customer tier: This is where the purchase transaction occurs. The notion of a button push isn't quite accurate, as Dash also supports use of a wand that can scan or accept voice input (e.g., "Buy a roll of Saran Wrap"). In the future, the Dash service will be built into physical devices, which will trigger purchase actions when needed products run low. In the figure, these devices are represented by a washer/dryer; as the washer runs low on soap powder, the washer will order more. It doesn't take a genius to recognize the power of a product being automatically ordered without human intervention -- it practically makes the default product choice (e.g., Tide) a lock-in, with very low probability of the consumer purchasing another product. Dash also supports a mobile app that can be used to review orders and approve or remove specific items.
  • Event tier: This is where the Dash orders are captured by the service's back end. A number of steps take place to process each order event to ensure the Dash application operates correctly. More on this below.
  • Application tier: This tier executes the Dash logic - managing orders, triggering ecommerce transactions, and enabling partners to interact with the Dash system.
  • Ecommerce tier: This tier represents the Amazon retail offering, where orders are accepted, payments triggered, and products shipped.

Let's look at each tier in turn.

Customer Tier
In this tier, a purchase transaction begins at one of the Dash client devices - a button, wand, or embedded hardware device (e.g., a washer). Each transaction is captured in a packet of information that is quite small - probably a few hundred bytes; all it needs to contain is the Amazon product id and the purchase indicator flag.

However, the Dash wand also allows items to be ordered by voice, which complicates things in that a voice file must be captured and submitted (and, of course, ultimately undergo transcription and validation). This aspect of orders will be discussed in the Event Tier section, which follows.

In terms of how order events are communicated, I believe that the foundation of the device/application communication is the AWS Kinesis service, which is designed to be a very high-performance, scalable, real-time event processing service. Kinesis offers a client-side library that can be embedded into a device's firmware; this library formats event submission packets in Kinesis format.

For orders submitted via voice input on the wand device, the event would probably have a flag set to indicate that a recording of the order has been stored in an S3 object.

No matter what type of device submits an order, therefore, an event is submitted to Kinesis, which resides in the Event Tier, with an optional voice recording object accompanying certain order events stored in an S3 bucket. This portion of the Dash service is indicated by the number 1 in Figure 1.
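To make this concrete, here is a minimal sketch of what such an order event might look like. The field names, the stream name in the comment, and the product id are my assumptions for illustration - Amazon's actual schema is not public:

```python
import json
import uuid

def build_order_event(product_id, customer_id, voice_s3_key=None):
    """Build the small packet a Dash device might submit.

    In production this payload would be handed to the embedded Kinesis
    client library, roughly equivalent to kinesis.put_record(
    StreamName="dash-orders", Data=json.dumps(event),
    PartitionKey=customer_id).
    """
    event = {
        "event_id": str(uuid.uuid4()),   # lets the back end deduplicate
        "product_id": product_id,        # the Amazon product id
        "purchase": True,                # the purchase-indicator flag
        "customer_id": customer_id,
        "has_voice": voice_s3_key is not None,
    }
    if voice_s3_key is not None:
        # Pointer to the S3 object holding the voice recording
        event["voice_s3_key"] = voice_s3_key
    return event

packet = json.dumps(build_order_event("B00EXAMPLE", "cust-42")).encode()
print(len(packet))  # comfortably within "a few hundred bytes"
```

Note that the voice recording itself never travels with the event; only a pointer does, which keeps the packet tiny.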

However, the Client Tier offers more functionality than submitting orders for items. The Dash mobile app allows customers to review orders and remove items which are unwanted or perhaps were ordered by accident. If orders are reviewed in this manner, approved items flow back into the system to eventually go through the ecommerce process.

The Client Tier, while quite intriguing, is actually relatively straightforward. The buttons, wands, and appliances are all transaction devices that communicate single item orders which can be transmitted in small packets and submitted to the Dash Kinesis service.

Event Tier
By contrast, the Event Tier functionality is sophisticated and relatively complex; moreover, I believe it takes advantage of a number of innovative AWS services.

As previously stated, order events are submitted to the Dash Kinesis service. Kinesis is designed to consume vast numbers of events in real-time. It does not, however, do much of anything with those events (it stores them for 24 hours, then discards those that remain in the service).

Kinesis streams (as the event recipients are known) allow programs to be attached to the stream; these programs perform operations upon the events to allow them to be processed. AWS recommends that these programs be simple, with little actual processing; instead, the event information should be extracted and then passed along to another AWS service, which can operate upon the events in a less time-bound manner than the real-time constraints within Kinesis.

Common techniques associated with additional processing include placing the event information into an SQS queue or storing the information in DynamoDB.

I believe the latter technique is what the Event Tier does, as indicated by Number 2 in the Figure. As the program attached to the Kinesis stream receives each order event, it stores it in DynamoDB; for those order events which have voice files associated with them, the DynamoDB record contains a pointer to the S3 bucket that holds the voice file.
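A sketch of that stream-processing step, with a plain dict standing in for the DynamoDB table (real code would use the Kinesis Client Library and a boto3 `put_item` call; the table and field names here are assumptions):

```python
import json

def process_kinesis_batch(records, table):
    """Minimal stream processor: extract each order event and persist it.

    `records` mimics a batch of Kinesis record payloads (bytes);
    `table` is a dict standing in for a DynamoDB table keyed by
    event_id. In production the store would be something like
    dynamodb.put_item(TableName="dash-orders", Item=...).
    """
    for raw in records:
        event = json.loads(raw)
        item = {
            "event_id": event["event_id"],
            "product_id": event["product_id"],
            "customer_id": event["customer_id"],
            "status": "pending",
        }
        if event.get("has_voice"):
            # Keep only the pointer to the S3 voice object, not the audio
            item["voice_s3_key"] = event["voice_s3_key"]
        table[item["event_id"]] = item
    return table

table = {}
batch = [json.dumps({"event_id": "e1", "product_id": "P1",
                     "customer_id": "c1", "has_voice": False}).encode()]
process_kinesis_batch(batch, table)
print(table["e1"]["status"])  # pending
```

This follows the AWS recommendation above: the stream-attached code does almost nothing itself - it extracts and hands off.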

At the November AWS re:Invent conference, Amazon announced a new service called Lambda. Lambda allows code segments to be attached to certain AWS services, with the code segment executed when a state change occurs in the AWS service. One of the AWS services which can be so configured with Lambda code segments is DynamoDB.

In the Dash service, when an order event is extracted from Kinesis, it is inserted into DynamoDB. In turn, that state change of insertion calls a Lambda code segment, which extracts an associated voice file (if submitted via a Dash wand voice command) and then calls into the Dash Application Tier for order filtering and processing.
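The shape of such a Lambda handler might look like the sketch below. The `Records`/`NewImage` structure is the real DynamoDB Streams event format; everything else (field names, the transcription hand-off) is my assumption about what Dash would do:

```python
def handler(event, context):
    """Sketch of a Lambda attached to the DynamoDB stream.

    Fires on each inserted order; if the order carries a voice pointer,
    the real code would fetch the recording from S3 (s3.get_object) and
    send it for transcription before handing the order to the
    Application Tier.
    """
    processed = []
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only react to newly inserted orders
        image = record["dynamodb"]["NewImage"]
        order = {k: v["S"] for k, v in image.items()}  # unwrap DynamoDB types
        if "voice_s3_key" in order:
            order["needs_transcription"] = True  # would pull from S3 here
        processed.append(order)
    return processed

# A fabricated stream event for illustration:
sample = {"Records": [{"eventName": "INSERT", "dynamodb": {"NewImage": {
    "event_id": {"S": "e1"}, "product_id": {"S": "P1"}}}}]}
print(handler(sample, None))
```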

In addition to extracting the orders from DynamoDB and submitting them to the Application Tier, the order events are placed into an AWS EMR system to facilitate Dash analytics.


While this description seems simple, the reality is anything but. In fact, in my opinion, the Event Tier is the heart of the Dash service. Most people underestimate the complexity of dealing with events in real time, and Dash makes that complexity easy to overlook. First, it must be understood that the offering, as it stands today, is only the beginning. Dash clients can reside in any number of devices - buttons are only the start, as the client could be placed in any number of things, including product packaging, storage containers (e.g., pantries), and so on.

Moreover, the magnitude of the Dash service is easy to underestimate, as is true of all IoT applications. Eventually, Dash could be processing tens of millions of events each day. While the service certainly doesn't do anywhere near that volume now, Amazon had to architect the system so that it could accommodate that sort of future load.

It certainly helps that Amazon was able to rely on existing highly scalable services like DynamoDB and EMR (and, indeed, Kinesis). Nevertheless, it had to validate functionality and performance at levels that might only be reached in several years; undoubtedly, Amazon ran load and stress tests submitting huge loads to ensure acceptable performance in the future.

Dash Application Tier
The Application Tier is where the service's logical operations are performed. One of the key operations is to filter submitted events. It is possible that multiple orders could be accidentally submitted via a Dash client (e.g., by a delighted toddler fascinated by pressing a colorful Dash button); obviously, if someone were to receive 20 (or 200!) orders of Tide, that would reduce the value of the service and cause people to terminate its use. Since part of the motivation for rolling out the Dash service is to allow Amazon to develop new competitive mechanisms against companies like Walmart, anything that might reduce user satisfaction has to be avoided.

This is why the Dash service automatically rejects multiple orders. Moreover, it won't accept even a single order if another order is already in process with the product not yet delivered to the customer. Consequently, the Dash application has to filter events to remove duplicate or premature orders.

The simplest way to accomplish this is to allow the Dash client to submit multiple orders and remove them at the back end. It is likely that the Dash application accomplishes this by inspecting every event submitted and discarding inappropriate ones.

From a technical perspective, I expect that Amazon uses the new AWS Lambda service, which allows code fragments to be attached to certain AWS services and have that code executed when a state change occurs in the service. Dash most likely attaches Lambda event filtering code to the Dash DynamoDB storage system. Every event that is inserted into it by the Kinesis event service is inspected for validity and inappropriate Dash orders are removed from DynamoDB. This filtering process is represented by the number 3 in the Figure above.
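The filtering rule itself is simple to express. This sketch rejects an order whenever the same customer already has an undelivered order for the same product - the duplicate/premature filtering described above. The in-memory set stands in for what would really be a DynamoDB lookup:

```python
def is_valid_order(order, open_orders):
    """Return True if the order should proceed toward ecommerce.

    `open_orders` holds (customer, product) pairs still in flight;
    a production implementation would query the Dash DynamoDB table
    instead of an in-memory set.
    """
    key = (order["customer_id"], order["product_id"])
    if key in open_orders:
        return False          # duplicate press, or prior order not yet delivered
    open_orders.add(key)      # this order is now in flight
    return True

open_orders = set()
first = {"customer_id": "c1", "product_id": "tide"}
print(is_valid_order(first, open_orders))         # True
print(is_valid_order(dict(first), open_orders))   # False: the toddler pressed again
```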

Valid orders are left in DynamoDB for a period of time so that, should the customer wish to delete one, it is possible to do so via the Dash Mobile App, represented by the number 4 in the Figure.

Once an appropriate period of time has passed - making it likely that the customer actually did want to order the product - the Dash application's Order Processing takes over. It pulls transactions from the Dash DynamoDB and submits them (number 7 in the Figure) to the Amazon ecommerce engine, which then executes standard order processing and shipment, eventually resulting in the customer receiving the product.
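Selecting which held orders are ready to ship is a straightforward sweep over the pending set. The length of the review window is entirely my assumption - Amazon has not published one:

```python
import time

HOLD_SECONDS = 30 * 60  # assumed review window before an order ships

def ready_orders(pending, now=None):
    """Select orders whose review window has elapsed.

    `pending` maps event_id -> order dict with a `submitted_at` epoch
    timestamp. Orders past the hold period would then be submitted to
    the Amazon ecommerce engine.
    """
    now = time.time() if now is None else now
    return [o for o in pending.values()
            if now - o["submitted_at"] >= HOLD_SECONDS]

pending = {
    "e1": {"event_id": "e1", "submitted_at": 0},
    "e2": {"event_id": "e2", "submitted_at": 100_000},
}
print([o["event_id"] for o in ready_orders(pending, now=HOLD_SECONDS)])  # ['e1']
```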

The Application Tier is also where the Dash analytics are leveraged to share information with Dash partners - the product companies whose goods are ordered via Dash. As my earlier piece discussed, the importance of these analytics should not be underestimated. Dash can provide Dash product partners with enormous insight about buyer behavior, which is tremendously valuable for their businesses. Knowing that products are, say, more frequently ordered mid-week rather than the weekend provides a lot of knowledge that can be used in product planning and promotion. To this point, Dash analytics have barely been mentioned, but you can bet that we'll hear a lot more about them in the future.

Ecommerce Tier
In the Figure, the ecommerce Tier looks deceptively simple; all that resides there is Amazon. Make no mistake: the Amazon ecommerce capability is far more complex than anything else in the system, and yet, from the Dash application's perspective, far simpler to deal with.

Amazon's ecommerce system is, perhaps, the most sophisticated online sales system extant. Amazon's ecommerce operation is a finely-tuned amalgam of supply chain coordination, software development and operation, retail purchasing and promotion, not to mention human resource management.

From the Dash perspective, however, that can all be a black box. The Dash application needs to know nothing about any of those capabilities; once it recognizes that it is time to submit an order, all it needs to do is call an API with a small amount of information - product, customer name, and perhaps the order date and time. Amazon ecommerce takes care of the rest.
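That hand-off could be as small as the sketch below. The endpoint and field names are assumptions (Amazon's internal order API is not public); the point is how little Dash must know - just product, customer, and a timestamp:

```python
import json
from datetime import datetime, timezone

def submit_to_ecommerce(order):
    """Build the minimal payload the black-box ecommerce API would need.

    A real implementation would POST this JSON to an internal
    order-submission endpoint; everything downstream - payment,
    fulfillment, shipping - is Amazon ecommerce's problem, not Dash's.
    """
    payload = {
        "product_id": order["product_id"],
        "customer_id": order["customer_id"],
        "ordered_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

print(submit_to_ecommerce({"product_id": "tide", "customer_id": "c1"}))
```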

Conclusion
Amazon Dash represents the future of Third Platform applications. It leverages extremely capable infrastructure and services and welds them together with application-specific functionality. For companies wishing to become Third Platform organizations, the message is clear:

  • Identify your opportunity,
  • Define the application to address it,
  • Design the application to leverage existing services that are available to reduce development effort and accelerate rollout,
  • Create a new offering that delivers distinctive differentiation compared to competitors in the market.

Our Stackato PaaS product is designed to address these situations perfectly - our product motto is "enabling the future of applications." Stackato is the perfect tool to implement the Dash-specific functionality - the systems that reside in the Dash Application tier.

No matter what kind of company you are, you must be on the lookout for opportunities to create new offerings that leverage the Third Platform. Doing nothing is not an option; worse, it's a recipe for obsolescence and failure.

In my next blog post, I'll discuss the Dash application and the lessons it holds for a company seeking to avoid the "obsolescence and failure" path. I'll talk about what steps you should take to create a Dash-like offering, and how you can ensure you stay on the right path and avoid straying into unsuccessful detours. In the meantime, if you haven't taken the time to learn about the Stackato vision, watch the video here.

The post Amazon Dash Embodies the Stackato Vision: Part 2 appeared first on ActiveState.

More Stories By Bernard Golden

Bernard Golden has vast experience working with CIOs to incorporate new IT technologies and meet their business goals. Prior to joining ActiveState, he was Senior Director, Cloud Computing Enterprise Solutions, for Dell Enstratius. Before joining Dell Enstratius, Bernard was CEO of HyperStratus, a Silicon Valley cloud computing consultancy that focuses on application security, system architecture and design, TCO analysis, and project implementation. He is also the Cloud Computing Advisor for CIO Magazine and was named a "Top 50 Cloud Computing Blog" by Sys-Con Media. Bernard's writings on cloud computing have been published by The New York Times and the Harvard Business Review and he is the author of Virtualization for Dummies, Amazon Web Services for Dummies and co-author of Creating the Infrastructure for Cloud Computing. Bernard has an MBA in Business and Finance from the University of California, Berkeley.
