By Skytap Blog
April 10, 2014 11:19 AM EDT
Noel Wurst: Hello, this is Noel Wurst with Skytap, and I am speaking today with Peter Coffee. Peter is the VP for Strategic Research at salesforce.com, and he's also going to be taking part in the SDLC Acceleration Summit on May 13th in San Francisco, California. Peter, how are you doing today?
Peter Coffee: It's great to be with you, Noel. I'm fine, thank you.
Noel Wurst: Great, awesome. Well, I was going to say that I had the privilege of reading your bio online, and I recommend listeners go check it out for themselves. It's such a varied background; you've done a little bit of everything. I saw everything from aerospace and the defense sector to alternative fuel research and video game development, and I was curious what you're working on now, with Salesforce or outside of it. What's piquing your interest these days?
Peter Coffee: My job was created seven years ago at salesforce.com, three years after I had a breakfast conversation with Marc Benioff about what it would take to let people build things on the web (we didn't call it the cloud seven years ago) as readily as tools like HyperCard or Visual Basic let them build things on desktops, and to get rid of the idea that you had to build the app with all of the same complexity and difficulties and then have additional, compounded difficulties putting it up online. Instead, get to something that allowed you to build it as readily as if you were just going to run it on your PC, and then make it securely and reliably available to people anywhere. That was the vision for what the next-generation platform needed to be. And if you think about it, the web went from being a medium where you could publish stuff for other people to read, to posting stuff on which other people could comment, to now, where it really is a medium for taking what you know and packaging it in a way that makes it available for other people to use.
There was a Pew Research report that came out recently that said you've got to stop thinking of the World Wide Web as the world's library and start to think of it as the world's supercomputer.
Peter Coffee: That's really what has fascinated me in all of the areas where I've been fortunate to have the opportunity to work: we have tremendous facilities available to us now. Not to look at one or two isolated ideas about what might work and then pick the one that works the least badly, but really to start mapping entire spaces of solutions that are available for problems, and to begin hunting for something close to optimality in how we get stuff done. I think that's what's really exciting in every field today: the opportunity to pursue optimality instead of settling for incremental improvement.
Noel Wurst: It's definitely an exciting time to be in that field, with all the different capabilities that are popping up day by day and month by month. I noticed that the session you're giving at the SDLC Acceleration Summit, the session or panel that you're going to be speaking on, is titled "Faster In What Direction?"
Peter Coffee: Yes.
Noel Wurst: The abstract warns specifically against “accelerating a legacy model or mistaking it for progress.”
Peter Coffee: Yes.
Noel Wurst: And, like you were just saying, with everything changing so much and so quickly, I was wondering: if there is an enterprise or a company that is changing quickly, what are some of the warning signs that should alert them that they're not moving in the right direction? That maybe they are changing quickly, but not for the best?
Peter Coffee: The interesting thing about the place that's loosely called "cloud computing" is that it's very much like a supermarket in which you can walk in, and if you know what you're there to buy, it's there. If you go to the rice-and-beans aisle, you'll find sacks of ingredients at an attractive price, and you'll take them home and have to do a lot of work before you can put the food on the table. But if it occurs to you to look in another direction, well, over there is the deli counter, where there are some very interesting things that you might not even attempt to create yourself, that are ready for you to consume pretty much as soon as you get them home and pop them in the microwave.
That's really where the cloud marketplace is today. You've got the infrastructure cloud of virtual servers that you can spin up by the minute or the hour, and do traditional skills-intensive, error-prone, complex, and innovation-limited IT at an attractive cost.
Then you've got the platform-as-a-service and software-as-a-service markets, where you can find tremendously innovative, best-practice solutions available for you to use immediately and to modify and tailor to become uniquely yours. That's the biggest caveat I have to offer people who are attracted by the obvious economies and accelerations that virtualizing legacy IT can provide, and who think that constitutes victory without realizing that the next logical step is to move beyond wiring up their virtual machines the way they wired up their physical ones. Instead, say, "What if I were thinking about application construction as composition and orchestration of services, with linkage at the API level, instead of composition of hardware and linkage at the binary data-transfer level?"
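A minimal sketch of that idea, using invented stand-in services (nothing here is Salesforce-specific): the application itself is pure orchestration, with each step consuming the previous service's structured output through an API contract, so any service could later be swapped for a real HTTP endpoint without touching the orchestration logic.

```python
# Illustrative sketch: application construction as composition and
# orchestration of services behind API contracts. The two "services"
# below are invented stand-ins for API-backed capabilities.

def geocode_service(address: str) -> dict:
    # Stand-in for a geocoding API: address in, coordinates out.
    known = {"1 Market St, San Francisco": {"lat": 37.7936, "lon": -122.3950}}
    return known.get(address, {"lat": 0.0, "lon": 0.0})

def weather_service(lat: float, lon: float) -> dict:
    # Stand-in for a weather API keyed by coordinates.
    if lat > 37.0:
        return {"temp_f": 61, "conditions": "fog"}
    return {"temp_f": 75, "conditions": "sun"}

def forecast_for(address: str) -> dict:
    # The "application" is orchestration only: each step consumes the
    # previous service's structured output through its API contract,
    # so either service could be replaced by a real HTTP endpoint
    # without changing this function.
    location = geocode_service(address)
    return weather_service(location["lat"], location["lon"])

print(forecast_for("1 Market St, San Francisco"))
# → {'temp_f': 61, 'conditions': 'fog'}
```

The contrast with "linkage at the binary data-transfer level" is that nothing here depends on how either service is implemented or hosted; only the shapes of the inputs and outputs matter.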
Noel Wurst: That's a really great analogy, the "rice-and-beans aisle" you mentioned before. That's exactly what the situation is. There are all these options out there for people to change for the better, and to accelerate their entire business, or the SDLC. But at the same time, it's still really difficult for people to change, even if they see the success that other companies have had by changing and reinventing their processes.
But I was curious: I know why it's hard for people to change, but for those who are still embracing somewhat outdated technologies, what are some of the reasons they shouldn't fear that change? Besides just "it'll get better" or "it's going to be great in the end," which aren't enough to convince someone to make that kind of investment or that kind of change. What are some reasons not to fear that path, or a completely different direction for their business?
Peter Coffee: It's perfectly logical for people who've invested years or decades in mastering a set of skills to seek reasons to believe that those skills are going to continue to be the definition of their value and the source of their livelihood for the rest of what they hope will be a healthy and lucrative career. It's completely logical for them to look for that validation. We know from any number of different research fields that people will find confirming evidence for what they want to believe, and will be remarkably successful, even if unconsciously, at ignoring disconfirming evidence. So I completely get this, and I don't attribute bad motives to anybody.
Noel Wurst: Right.
Peter Coffee: What I do find is that there are some organizations that are starting to look at this as much more of an opportunity to gain greater leverage and to catalyze dramatic improvements. For example, I was just at the headquarters of one of our customers, USAA, which is a massive insurance company that primarily sells its services to military and ex-military. You've seen their advertising.
Their annual technology forum, their in-house event, has 3,000 people attending it. It's the size of one of our larger metropolitan events. They asked me to come and do a seminar on how the IT department could best serve what they call "the citizen developer," a phrase I believe Forrester may have been the first to popularize. Think about the way the spreadsheet allowed people to model and experiment with a process, instead of having to go hat in hand to the business analyst and beg for some COBOL to do that work.
It's the next logical step to go beyond the spreadsheet on a network share with an email thread wrapped around it, to something like a Force.com app that runs in the cloud and has proper security, auditability, governability, and backup built in as a service. The IT department can tremendously increase its value to the business by providing what we might call adult supervision to the citizen developer.
That means providing guidance on compliance, process integrity, data-dictionary discipline, and so on. The IT departments that are doing this are starting to say, "You know, hugging the old workload and keeping that to ourselves is a poor strategy for increasing our value to the organization, compared to being vigorous and creative in providing support to the business units," taking advantage of the lower barriers to entry and the considerably higher productivity they can have by building what will be a much more disciplined application.
Everyone talks about shadow IT today. It's the dark matter of the IT universe: it doesn't shine with its own light, and it's very hard to find. But we all know it's out there, stretching from desktop databases to other things that are not visible, not governed, not recoverable, not auditable, and not really contributing to a store of knowledge that turns the company into a learning organization. The IT departments that I think are doing very well, and there are existing groups of these, are saying, "Our best contribution is to take things that are very difficult to learn and very easy to do wrong, but that still have enduring value, like basic disciplines of process integrity, and find a way to sprinkle those on top of the old coffee of the legacy infrastructure and turn it into something that's much more exciting."
Noel Wurst: That's so great. It reminds me, I also had the chance to interview Theresa Lanowitz, who's also going to be speaking at the SDLC Acceleration Summit, and we were talking specifically about the "extreme automation" that comes from utilizing some of these new technologies, which relates to what you were just saying about the way things were done in the past.
When I asked her to define extreme automation, her definition was "solving classic problems with new technology and tooling." Essentially, to people who are skeptical of automation, or who are very quick to say, "Well, automation doesn't solve everything," she was saying that automation doesn't solve everything, but at the same time, it's not solving anything new that you haven't seen before. It's solving problems that may have existed in your organization for years, problems that have just always lain around because there wasn't extreme automation there to help solve them.
I was curious whether you would define extreme automation the same way, or see the same value in it: that it solves problems that are well known, well documented, and widespread across multiple enterprises.
Peter Coffee: Well, I don't know if you remember, but there was a minor subplot in the book Jurassic Park, which I think they decided was too complicated to bring forward into the movie, about how the eye of the frog doesn't even bother to tell the brain about things that aren't moving. The brain wants to focus on things that are moving, which might be targets for eating, so the eye doesn't even bother to tell it about the stationary things. I think a lot of us are really good at ignoring stationary problems that have been with us for so long that we stop thinking of them as problems and just think of them as part of the environment.
I like the phrase "extreme automation" because it invites us to go and, pardon the expression, rip the scabs off some of those old wounds and say, "Can we afford, intellectually and financially, to reexamine some of our assumptions about things that we're just going to have to tolerate? Because maybe we don't have to tolerate them anymore." That can require some creativity, and it can also require taking some risk, because you're going to be bringing these ideas to senior managers who may have built their careers on the existence of these "problems" and on the construction of elaborate and complicated workarounds for them. You're essentially coming in and saying, "Yeah, but what if we stopped calling that a problem we have to tolerate at all? What if we just destroy it? What if we just cut the Gordian knot, to use another old metaphor: instead of spending tons of effort, time, and money untangling something, we just cut it in half." That's a career risk and an intellectual challenge and a skills and technology challenge.
But there is always something going on at any given time that forces that kind of disruption. Back in the 1960s, everybody was worried about the space race, so we had Project Apollo, which did things that at the time were considered frankly impossible, until people felt they had to find a way to do them. Then they discovered ways to do them. I think our next Project Apollo is the aging of the baby boomers, who are going to need adaptive and assistive technology to make it possible for them to age gracefully at home, instead of going into nursing homes, which we simply won't be able to afford to build at that kind of scale.
So things like a Nest thermostat are really just the tip of that spear, in terms of saying, "You know, we need to rethink automation, from something that lets people program things to do stuff, into devices and algorithms and systems that are aware of an environment, learn what's normal, and call attention to that which is abnormal and therefore perhaps requires someone to do something." And that's really where we need to go with business IT as well: to get beyond the idea of the bigger and bigger dashboard with more and more performance indicators, and get to much smarter systems that notice when things that have been behaving in a coordinated way suddenly don't seem to be coordinating. That suggests an anomaly. It takes data-science disciplines to detect what I might call "pre-failure signatures" and be able to say, "I don't know why this is happening, but I know that when it does happen, a week later something much, much worse usually happens. So you'd want to take a look at this now."
This is a very interesting time for developing new categories of algorithms that don't just capture the byproduct of business activity, but inspect those patterns and look for interesting high-leverage points that allow people to take an inexpensive action at the right time, instead of a much more costly disaster-recovery or damage-control action later on.
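A minimal sketch of that "pre-failure signature" idea, with invented metric data (illustrative only, not a production monitor): two metrics that normally move together stop correlating, and the decoupling gets flagged even though neither metric alone looks alarming.

```python
# Illustrative sketch: detect when two metrics that usually behave in
# a coordinated way stop coordinating. Data, window size, and
# threshold are invented for the example.

def correlation(xs, ys):
    # Pearson correlation of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

def flag_decoupling(a, b, window=4, threshold=0.5):
    # Slide a window over two metric streams and report the starting
    # indices where their usual coordination breaks down.
    return [i for i in range(len(a) - window + 1)
            if correlation(a[i:i + window], b[i:i + window]) < threshold]

# Request volume and CPU usage normally rise and fall together; midway
# through, CPU decouples: an anomaly worth a look even though neither
# series is individually out of range.
requests_per_min = [10, 12, 14, 16, 18, 20, 22, 24]
cpu_percent      = [11, 13, 15, 17, 5, 4, 3, 2]
print(flag_decoupling(requests_per_min, cpu_percent))
# → [1, 2, 3, 4]
```

The point of the sketch is the shift in posture: rather than alerting on any single indicator crossing a threshold, the system learns what "coordinated" looks like and raises its hand when that relationship breaks.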
Noel Wurst: That's all so great. And then lastly, it's all kind of come to this. It's interesting, all of this technology we've been discussing of course involves the cloud. But I watched a video on YouTube recently of a presentation you did at PhillyForce last year that was titled “Connecting Above the Cloud.”
It's funny, I'm writing an article this week about the same thing, and I'm going to use a piece of your presentation in the article. You talked about the cloud metaphor and how the industry didn't choose that term. You said that "connected" and "social" work a lot better than just saying "the cloud." To quote you from that presentation: "The cloud is only interesting because of the connections that it enables."
That's really interesting, because I think we've only said the word cloud two times in this whole interview. Back in the day, “cloud” was in every other sentence. I feel like even though cloud adoption of course is going up, and all kinds of news came out this week about how it’s ramping up quickly—yet, we're saying the word “cloud” less and less. I really like the way that successful enterprises have been able to enable these connections between their customers.
But I also like how much the cloud, to go ahead and say it again, has enabled developers, testers, IT, DevOps, and all this connectivity that goes on behind the curtain, not just among the people who are using these apps. I was wondering if you might expand on how this technology is helping enterprises themselves, not just the consumers who use these apps.
Peter Coffee: Sure. Well, you know, it's almost an accident of history that Intel had a commercial imperative to develop the integrated circuit and the microprocessor before it became practical to have a global, standards-based wired and wireless network. We had a period of several years, a little over a decade really, when we went from the 4004 microprocessor, which was barely enough to run a four-function calculator, to things like 32-bit computers on a chip.
During that time, connectivity was expensive, slow, and intermittent. I compared it the other day to being colonists on Mars, with our little bubbles of air, having to put on a spacesuit to go from one to another. Inside that bubble, which in the IT world was your data center or your local area network, you could be reasonably comfortable. But as soon as you wanted to do something that involved going from one bubble to another, it was a perilous exercise: you had to use your dial-up modem, like putting on your spacesuit, for a brief moment of interconnection with another bubble somewhere else.
During this time, we've essentially been terraforming planet computing. We've got an atmosphere now that you can breathe without putting on a spacesuit, and yet there are still people walking around with air tanks on their backs that they call their "private cloud." That doesn't make any more sense in the world of IT than it would on a terraformed Mars: ignoring the fact that the environment is now different. The environment is now one of ubiquitous connectivity, comparable to what we used to find more than sufficient inside an office building. You can see this in the behavior of people. They are saying, "Now wait a minute, why would I buy a laptop computer, which is essentially a spacesuit? It's its own hard drive, display, battery, keyboard, all this stuff. All I really need is a little magic piece of black glass, which might be the size of a phone or the size of a tablet. All I need is that thing, which is a window into this world of available information, computational capability, connection with other experts, connection with algorithms and supercomputing facilities."
There's an awful lot of stuff that used to make sense to build more and more powerful desktop machines to do. But people are now saying, "Wait a minute. I don't actually want to do that on my desktop at all. I don't want to do video editing on my desktop; I want to upload the clip from my smartphone to YouTube, and let YouTube worry about things like compression algorithms and making it run on different devices." This is the hard challenge: to get beyond thinking of the cloud as a product and start thinking of the cloud as just an enabler for things that are much more interesting products.
Because, if I can drag one more metaphor in: I once said that a gourmet chef does not talk about the miracle of clean water being available from a faucet whenever he needs it. He assumes that's going to be there, because you can't really talk about cooking without it. But once you've got it, you don't think about it very much. You don't think about the miracle of electricity when you're building your home theater. That's invisible. That's assumed, and you're thinking instead about what amazing experience you can create given the assumption of that pervasive, reliable, cost-effective resource.
It's really important to get beyond cloud and start thinking about, “Wait a minute, what do I have to do to step up my game? To be a value creator by using this medium of connection, by using processing power and algorithms of discovery and analysis, to create a kind of customer experience, or the kind of government first-response in a disaster, or the kind of life-long delivery of education that were never really feasible to discuss until we had this remarkable atmosphere in which we breathe data and breathe computational power whenever we need it.”
Noel Wurst: Well, that is all I have for you today. I am really looking forward to attending your presentation at The Summit as well as the other ones that are going on there. Again, everyone, this is Peter Coffee, the VP for Strategic Research at salesforce.com. Peter speaks all over the world, and he’s going to be at the SDLC Acceleration Summit in San Francisco, California on May 13th. Thank you so much for speaking with me today.
Peter Coffee: Thank you very much. I always tell people I never know what I think until I have to answer questions. So, these conversations always tell me things that I didn't realize I was thinking about until I have them. Thank you for the time.
Noel Wurst: Awesome, thank you.