How Is Apple Using Machine Learning?

Today, machine learning is found in almost every Apple product and service

The concept of AI (artificial intelligence) has been the subject of many discussions lately. According to some predictions, AI will learn by itself, outclass the capabilities of the human brain, and even fight for equal rights by the year 2100. Even though these are (still) just speculations, companies like Apple are already developing and implementing machine learning technology, a field that is still in its infancy. How is Apple using machine learning?

Apple's beginnings with deep learning technologies
Let's start with Apple's early use of AI. During the 1990s, the company was already using certain machine learning techniques in its products for handwriting recognition. Those techniques were, of course, much more primitive than today's.

Today, machine learning is found in almost every Apple product and service. The company uses deep learning to extend battery life between charges on its devices, detect fraud on the Apple Store, recognize the locations and faces in your photos, and help choose news stories for you. Machine learning determines whether Apple Watch owners are really exercising or just walking around, and it figures out whether you'd be better off switching to the cell network because of a weak Wi-Fi signal.

Apple's smart assistant
In 2011, Apple became the first tech giant to integrate a smart assistant into its operating system. That assistant is Siri, an adaptation of a standalone app Apple had purchased (along with the app's development team). Siri exploded in popularity, with ecstatic initial reviews. Over the next few years, however, users wanted to see Apple deal with Siri's shortcomings. Thus, Siri got a 'brain transplant' in 2014.

Siri's voice recognition was moved to a neural-net-based system. The system began leveraging machine learning techniques including deep neural networks (DNNs), convolutional neural networks, long short-term memory units, gated recurrent units, and n-grams. Siri now ran on deep learning, even though it looked the same to users.
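
To make one of those terms concrete, here is a toy, scalar sketch of a single long short-term memory (LSTM) step in Swift. The placeholder weights and the whole structure are illustrative only; real speech models use large weight matrices, and this has nothing to do with Apple's internal implementation.

```swift
import Foundation

// A toy, scalar LSTM step with placeholder weights.
struct LSTMCell {
    // Input weights (applied to the new input x).
    var wi = 0.5, wf = 0.6, wo = 0.4, wg = 0.7
    // Recurrent weights (applied to the previous hidden state h).
    var ui = 0.3, uf = 0.8, uo = 0.2, ug = 0.1

    private func sigmoid(_ x: Double) -> Double { 1 / (1 + exp(-x)) }

    // One time step: gate the previous cell state c and mix in the new input.
    func step(x: Double, h: Double, c: Double) -> (h: Double, c: Double) {
        let i = sigmoid(wi * x + ui * h)   // input gate: how much new info to store
        let f = sigmoid(wf * x + uf * h)   // forget gate: how much old state to keep
        let o = sigmoid(wo * x + uo * h)   // output gate: how much state to expose
        let g = tanh(wg * x + ug * h)      // candidate values for the cell state
        let newC = f * c + i * g
        let newH = o * tanh(newC)
        return (newH, newC)
    }
}

// Feed a short sequence through the cell, carrying state across steps.
let cell = LSTMCell()
var state = (h: 0.0, c: 0.0)
for x in [0.2, -0.4, 0.9] {
    state = cell.step(x: x, h: state.h, c: state.c)
    print("h = \(state.h), c = \(state.c)")
}
```

The point of the gates is that the cell can carry information across many time steps, which is what makes LSTMs useful for speech, where context from earlier in an utterance matters.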

Every iPhone user has come across Apple's AI: when you swipe on your device screen to get a shortlist of the apps you're most likely to open next, when your phone identifies a caller who isn't saved in your contact list, when a map location pops up for the accommodation you've reserved, or when you get reminded of an appointment you forgot to put into your calendar. Apple's neural-net-trained system watches as you type, detecting items and key events like appointments, contacts, and flight information. The company doesn't collect this information; it stays on your iPhone and in cloud-based storage backups, where it is filtered so that nothing can be inferred from it. All of this is made possible by Apple's adoption of neural nets and deep learning.
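
Apple's internal system isn't public, but iOS has long exposed a similar on-device capability through Foundation's NSDataDetector, which picks out dates, addresses, and other structured items from text. A minimal sketch, using that public API as a stand-in for the idea:

```swift
import Foundation

// On-device text detection with Foundation's public NSDataDetector API.
// This illustrates the idea, not Apple's internal neural-net system.
let message = "Lunch with Sam on Friday at 12:30 PM, 1 Infinite Loop, Cupertino"

let types: NSTextCheckingResult.CheckingType = [.date, .address]
let detector = try! NSDataDetector(types: types.rawValue)  // flags are valid, cannot throw here

let range = NSRange(message.startIndex..., in: message)
detector.enumerateMatches(in: message, options: [], range: range) { match, _, _ in
    guard let match = match else { return }
    if match.resultType == .date, let date = match.date {
        print("Detected date:", date)          // e.g., a calendar suggestion
    } else if match.resultType == .address, let parts = match.addressComponents {
        print("Detected address:", parts)      // e.g., a Maps suggestion
    }
}
```

Everything here runs locally, which matches the privacy posture the article describes: the text is analyzed on the device rather than shipped to a server.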

During this year's WWDC, Apple showed how a new Siri-powered watch face uses machine learning to customize its content in real time, surfacing news, traffic information, reminders, upcoming meetings, and more when they are supposed to be most relevant.

Making mobile AI faster with a new machine learning API
Apple wants to make the AI on your iPhone as powerful and fast as possible. A week ago, the company unveiled a new machine learning API named Core ML. Core ML's most important benefit is faster AI responsiveness when models execute on the Apple Watch, iPad, and iPhone. What does this cover? Everything from face recognition to text analysis, affecting a wide range of apps.

The essential machine learning tools that Core ML will support include neural networks (deep, convolutional, and recurrent), tree ensembles, and linear models. As for privacy, the data that's used to improve the user experience won't leave users' tablets and phones.
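
In practice, developers drop a trained model file into Xcode and call it through Core ML, often via the Vision framework for image tasks. A minimal sketch, assuming a hypothetical image-classification model named FlowerClassifier (Xcode generates a Swift class like this for any .mlmodel file added to a project):

```swift
import UIKit
import CoreML
import Vision

// A minimal sketch of running an image classifier through Core ML via the
// Vision framework. "FlowerClassifier" is a hypothetical model class.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: FlowerClassifier().model) else {
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classifications ranked by confidence; take the top one.
        guard let top = (request.results as? [VNClassificationObservation])?.first else {
            return
        }
        print("\(top.identifier) (confidence: \(top.confidence))")
    }

    // Inference runs entirely on the device, so the image never leaves it.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Because the trained model ships inside the app bundle, predictions stay on the device, consistent with the privacy point above.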

Making AI work better on mobile devices has become an industry-wide trend, meaning that other companies might be trying it as well. As for Apple, it's clear that deep learning technology has changed its products. What's less clear is whether it's changing the company itself. Apple carefully controls the user experience, with everything precisely coded and pre-designed. With machine learning, however, engineers must take a step back and let the software discover solutions by itself. If Apple manages to adjust to this new reality, will machine learning systems have a hand in product design?

More Stories By Nate Vickery

Nate M. Vickery is a business consultant from Sydney, Australia. He has a degree in marketing and almost a decade of experience in guiding companies through the latest technology trends. Nate is also the editor-in-chief at bizzmarkblog.com.
