
Tackling the Need for Speed and Agility

Network virtualization eases developer and operations snafus in the mobile and cloud era

As developers are pressured to produce mobile and distributed cloud apps ever faster and with more network unknowns, the older methods of software quality control can lack sufficient predictability.

And as Agile development means faster iterations and a constant stream of updates, newer means of automated testing of the apps in near-production realism prove increasingly valuable.

Fortunately, a tag-team of service and network virtualization for testing has emerged just as the mobile and cloud era requires unprecedented focus on DevOps benefits and rapid quality assurance.

BriefingsDirect had an opportunity to learn first-hand how Shunra Software and HP have joined forces to extend the capabilities of service virtualization for testing at the recent HP Discover 2013 Conference in Barcelona.

Learn here how Shunra Software uses service virtualization to help its developer users improve the distribution, creation, and lifecycle of software applications from Todd DeCapua, Vice President of Channel Operations and Services at Shunra Software, based in Philadelphia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There are a lot of trends affecting software developers. They have mobile on their minds. They have time constraints issues. They have to be faster, better, and cheaper along the apps lifecycle way. What among the trends is most important for developers?

DeCapua: One of the biggest ones -- especially around innovation and thinking about results, specifically business results -- is Agile. Agile development is something that, fortunately, we've had an opportunity to work with quite a bit. Our capabilities are all structured around not only what you talked about with cloud and mobile, but we look at things like the speed, the quality, and ultimately the value to the customers.

We’re really focusing on these business results, which sometimes get lost, but I try to always go back to them. We need to focus on what's important to the business, what's important to the customer, and then maybe what's important to IT. How does all that circle around to value?

Gardner: With mobile we have many more networks, and people are grasping at how to attain quality before actually getting into production. How does service virtualization come to bear on that?

Distributed devices

DeCapua: As you look at almost every organization today, something is distributed. Their customers might be on mobile devices out in the real world, and so are distributed. They might be working remotely from home. They might have a distribution center or a truck that has a mobile device on it.

There are all these different pieces. You’re right. Network is a significant part that unfortunately many organizations have failed to notice and failed to consider, as they do any type of testing.

Network virtualization gives you that capability. Where service virtualization comes into play is looking at things like speed and quality. What if the services are not available? Service virtualization allows you to then make them available to your developers.

In the early stages, where Shunra has been able to make a huge difference in these organizations is by bringing network virtualization in alongside service virtualization. We’re able to recreate their production environments at 100 percent scale -- all prior to production.

When we think about the value to the business, now you’re able to deliver the product working. So, it is about the speed to market, quality of product, and ultimately value to your customer and to your business.

Gardner: And another constituency that we should keep in mind are those all-important operators. They’re also dealing with a lot of moving parts these days -- transformation, modernization, and picking and choosing different ways to host their data centers. How do they fit into this and how does service virtualization cut across that continuum to improve the lives of operators?

DeCapua: You’re right, because as the delivery has sped up through things like Agile, it's your operations team that is sitting there and ultimately has to be the owners of these applications. Service virtualization and network virtualization can benefit them by being able to recreate these in-production scenarios.

Unfortunately, there are still some reactive actions required in production today, so you’re going to have a production incident. But, you can now understand the network in production, capture those conditions, and recreate that in the test environment. You can also do the same for the services.

We now have the ability to quickly and easily recreate a production incident in a prior-to-production environment. The operations team can be part of the team that's fixing it, because again, the ultimate question from CIOs is, “How can you make sure this never happens again?”

We now have a way to quickly and confidently recreate incidents and fix them the first time, without having to change code in production on the fly. That is one of the scariest moments I've seen, whether at a customer site or as an employee watching it happen.

Agile iterations

Gardner: As you mentioned earlier, with Agile we’re seeing many more iterations on applications as they need to be rapidly improved or changed. How does service and network virtualization aid in being able to produce many more iterations of an application, but still maintain that high quality?

DeCapua: One of our customers actually told us that -- prior to leveraging network virtualization with service virtualization -- he was doing 80 percent of his testing in-production, simply because he knew the shortcomings, and he needed to test it, but he had no way of re-creating it. Now, let's think about Agile. Let's think about how we shift and get the proven enterprise tools in the developer’s hands sooner, more often, so that we can drive quality early in the process.

That's where these two components play a critical role. As you look at it more specifically and go just a hair deeper, how, in integrated environments, can you provide continuous development and continuous deployment? And with all the automated testing you’re already doing, how can you incorporate performance into that? Or, as I call it, how do you “build performance in” from the beginning?

As a business person, a developer, a business analyst, or a Scrum Master, how is it that you’re building performance into your user scenarios today? How is it that you’re setting them up for understanding how that feature or function is going to perform? Let's think about it as we’re creating, not once we get two or three sprints into use and we have our hardening sprint, where we’re going to run our performance scenario. Let's do it early, and let's do it often.
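One way to “build performance in” from the first sprint is to assert a performance budget inside the automated tests the team already runs. The sketch below is illustrative only; the function, the 0.5-second budget, and the test style are assumptions for the example, not anything specific to Shunra or HP tooling.

```python
import time

# Hypothetical performance budget checked alongside functional
# correctness in an ordinary automated test. Values are illustrative.
PERF_BUDGET_SECONDS = 0.5

def checkout_total(items):
    # Stand-in for the feature under test.
    return sum(price for _, price in items)

def test_checkout_meets_performance_budget():
    start = time.perf_counter()
    total = checkout_total([("book", 12.0), ("pen", 3.0)])
    elapsed = time.perf_counter() - start
    # Functional assertion and performance assertion run together,
    # every sprint, rather than waiting for a hardening sprint.
    assert total == 15.0
    assert elapsed < PERF_BUDGET_SECONDS, f"took {elapsed:.3f}s"
```

A check like this runs in every iteration, so a performance regression surfaces in the sprint that introduced it rather than in a late hardening phase.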

Gardner: If we’re really lucky, we can control the world and the environment that we live in, but more often than not these days, we’re dealing with third-party application programming interfaces (APIs). We’re dealing with outside web services. We have organizational boundaries that are being crossed, but things are happening across that boundary that we can't control.

So, is there a benefit here, too, when we’re dealing with composite applications, where elements of that mixed service character are not available for your insight, but that you need to be able to anticipate and then react quickly should a change occur?

DeCapua: I can't agree with you more. It’s funny, I am kind of laughing here, Dana, because this morning I was riding the metro in Barcelona and before I got to the stop here, I looked down to my phone, because I was expecting a critical email to come in. Lo and behold, my phone pops up a message and says, “We’re sorry, service is unavailable.”

I could clearly see that I had one out of five bars on the Orange network, and I was on the EDGE network. So, it was about a 2.5G connection. I should still have been able to get data, but my phone simply popped up and said, “Sorry, cannot retrieve email because of a poor data connection.”

I started thinking about it some more, and as I was engaging with other folks at the show today, I asked them: why did the developer of the application find it necessary to alert me three times in a row that it couldn't get my email because of a poor data connection? Why didn't it just wait 30 seconds, then 60, then 90, and then reach out, query again, and pull the data down?
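The retry behavior described here, waiting 30, 60, then 90 seconds instead of repeatedly alerting the user, can be sketched in a few lines. This is a hypothetical illustration; `fetch` stands in for whatever mail call the app makes, and the wait schedule is the one from the anecdote.

```python
import time

# Illustrative backoff retry: instead of surfacing an error to the
# user on each failure, wait progressively longer and try again.
def fetch_with_backoff(fetch, waits=(30, 60, 90)):
    for wait in waits:
        try:
            return fetch()
        except ConnectionError:
            time.sleep(wait)  # back off before the next attempt
    # Final attempt; only now let the failure reach the caller/user.
    return fetch()
```

Only after the whole schedule is exhausted does the failure become something the user has to see, which matches the experience DeCapua says he wanted on the metro.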

Changing conditions

This is just one very simple example from this morning. And you’re right, there are constantly changing conditions in the world. Bandwidth, latency, packet loss, and jitter are the conditions we’re all exposed to every day. If you’re in a BMW driving down the road at 100 miles per hour, that car is now a mobile phone, a mobile device on wheels, constantly in communication. Or if you’re riding the metro or the tube with your mobile device in your hands, the conditions are constantly changing.

Network virtualization and service virtualization give you the ability to recreate those scenarios so that you can build that type of resiliency into your applications and, ultimately, the customers have the experience that you want them to have.
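At its core, network-condition emulation means imposing measured production conditions, such as latency and packet loss, on calls made in a test environment. The toy wrapper below is a sketch of the idea only; the function names and the condition values are invented for illustration and are not Shunra's or HP's implementation.

```python
import random
import time

# Toy network-condition emulation: wrap a service call with
# configurable latency and packet loss, the way a network
# virtualization layer imposes production conditions on a test.
def with_network_conditions(call, latency_s=0.08, loss_rate=0.02,
                            rng=random.random):
    if rng() < loss_rate:
        # Emulate a dropped request the application must survive.
        raise TimeoutError("simulated packet loss")
    time.sleep(latency_s)  # emulate network latency
    return call()
```

Running functional tests through a wrapper like this forces the resiliency questions (retries, timeouts, user messaging) to be answered before production, which is the point of the technique.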

Gardner: Todd, tell us about these so-called application-performance engineering solutions.

DeCapua: So, application performance engineering (APE) is something that was created within the industry over a number of years. It's meant to be a methodology and an approach. Shunra plays a role in that.

A lot of people had thought about it as testing. Then people thought about it as performance testing. At the next level, many of us in the industry have defined it as application performance engineering. It’s a lot more than just that, because you need to dive behind the application and understand the ins and outs. How does everything tie together?

You’d mentioned some of the composite applications and the complexities there -- and I’m including the endpoints or the devices or mobile devices connecting through it. Now, you introduce cloud into the equation, and it gets 10 times worse.

Thinking about APE, it's more of an art and a skill. There is a science behind it. However, having that APE background knowledge and experience gives you the ability to go into these composite apps, go into these cloud deployments, and leverage the right tools and the right process to be able to quickly understand and optimize the solutions.

Gardner: Why aren’t the older scripting and test-bed approaches to quality control good enough? Why can't we keep doing what we've been doing?

DeCapua: In the United States recently, October 1 of 2013, there was a large healthcare system being rolled out across the country. Unfortunately, they used the old testing methodologies and have had some significant challenges. HP and Shunra were both engaged on October 2 to assist.

Understanding APE will help you reduce those types of production incidents. Due to inaccurate results in test environments built on the current methodologies, about 50 percent of our customers come to us in crisis mode. They say, “We just had this issue. I know you told us this was going to happen, but we really need your help now.”

They’re also thinking about how to shift and how to build performance in all these components -- just have it built in, have it be automatic, and get the results that are accurate.

Coming together

Gardner: Of course HP has service virtualization, you have network virtualization. How are they coming together? Explain the relationship and how Shunra and HP work together?

DeCapua: To many people's surprise, this relationship is more than a decade old. Shunra’s network-virtualization capability has long been built into HP LoadRunner and is now also being built into HP Performance Center.

There are other capabilities of ours built into their Unified Functional Testing (UFT) products, and we’re now building network virtualization into their service virtualization product as well. Whenever anything has some sort of distribution or network involved, network virtualization needs to come into play.

Some people have a hard time initially understanding the service virtualization need, but a very simple example I often use is an organization like a bank. They’ll have a credit check as you’re applying for a loan. That credit check is not going to be a service that the bank creates. They’re going to outsource it to one of the many credit-check services. There is a network involved there.

In your test environment, you need to recreate that and take that into consideration as a part of your end-to-end testing, whether it's functional, performance, or load. It doesn’t matter.
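The credit-check example can be made concrete with a local stub that stands in for the third-party service, with a configurable delay so the network round trip is part of the test. Everything here is invented for illustration: the response shape, the score threshold, and the function names are assumptions, not a real credit-bureau API.

```python
import time

# Hypothetical service-virtualization stub for an outsourced
# credit check. The delay emulates the network hop to the provider.
def credit_check_stub(applicant_id, delay_s=0.15, score=720):
    time.sleep(delay_s)  # emulated round trip to the outside service
    return {"applicant": applicant_id,
            "score": score,
            "approved": score >= 650}  # illustrative cutoff

# The bank's code under test takes the check as a dependency, so the
# stub can replace the real service in functional or load testing.
def loan_decision(check):
    resp = check("1234")
    return "approved" if resp["approved"] else "declined"
```

Because the stub is injected, the same `loan_decision` code runs unchanged against the real provider in production and against the virtualized service, under emulated network conditions, in test.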

As we think about Shunra network virtualization and the very tight partnership we've had with HP on service virtualization, as well as their ability to virtualize the users, it's been an OEM relationship. Our R&D teams sit together during development, so this is a seamless product for the HP customer, delivering benefit and value for their business and for their customers.

Gardner: Let's talk a little bit about what you get when you do this right. It seems to me the obvious point is getting to the problem sooner, before you’re in production, extending across network variables, across other composite application-type variables. But, I’m going to guess that there are some other benefits that we haven't yet hit on.

So, when you've set up your testing, when you have virtualization as your tool, what happens in terms of paybacks?

DeCapua: There are many benefits, some of which we have already covered, and dozens more that we could get into. One that I would highlight, pulling together all the different pieces we've been talking about, is shorter release times.

TechValidate did a survey in February of 2013. The findings were very compelling in that they found a global bank was able to speed up their deployment or application delivery by 30 to 40 percent. What does that mean for that organization as compared to their competitor? If you can get to market 30 to 40 percent faster, it means millions or billions of dollars over time. Talk about numbers of customers or brands, it's a significant play there.

Rapid deployment

There are other things like rapid deployment. As we think about Agile and mobile, it's all about how fast we get this feature function out, leveraging service virtualization in a greater way, and reducing associated costs.

In the example that I shared, the customer was able to virtualize the users, virtualize the network, and virtualize the services. Prior to that, he would never have been able to justify the cost of rebuilding a production environment for test. Through user virtualization, network virtualization, and service virtualization, he was able to get to 100 percent at a fraction of the cost.

Time and time again we mention automation. This is a key piece of how you can test early, test often, ultimately driving these accurate results and getting to the automated optimization recommendations.

Gardner: What comes next in terms of software productivity? What should organizations be thinking in terms of vision?

Slow down

DeCapua: I see Agile, mobile, and cloud. There are some significant risks out in the marketplace today. As organizations look to leverage these capabilities to benefit their business and the customers, maybe they need to just slow down for a moment and not create this huge strategy, but go after “How can I increase my revenue stream by 20 percent in the next 90 days?” Another one that I've had great success with is, “What is that highest visibility, highest risk project that you have in your organization today?”

As I look at The Wall Street Journal and read the headlines every day, it's scary. But what's coming in the future? We can all look into our crystal balls and say that this is what it is. Why not focus on one or two small things we have now, and think about how we're mitigating our risk, rather than following the larger organizations that are making commitments to migrate critical applications into the cloud?

You’re biting off a fairly significant risk, because there isn’t a lot there to catch you when you do it wrong, and, quite frankly, nearly everybody is doing it wrong. What if we start small and find a way to leverage some of these new capabilities? We can actually do it right, and then start to realize some of the benefits from cloud, mobile, and the other channels your organization is looking to.

Gardner: The role of software keeps increasing in many organizations. It's becoming the business itself and, as a fundamental part of the business, requires lots of tender loving care.

The more that we can think about that and tune ourselves and make ourselves lean and focused on delivering better quality products, we’re going to be in the winning circle more often.

DeCapua: You got it. The only other bit I would add is that the World Quality Report presented this morning by HP, Capgemini, and Sogeti highlighted that testing is taking an increased share of the IT budget, with a rather significant increase in spend over last year.

It’s exactly what you’re saying. Organizations didn’t enter the market thinking of themselves as software houses. But time and time again, we’re seeing how people who treat what they do as a software house ultimately improve life not only for their internal customers, but also for their external customers.

So I think you’re right. The more that we can think about that and tune ourselves and make ourselves lean and focused on delivering better quality software products, we’re going to be in the winning circle more often.

More Stories By Dana Gardner

At Interarbor Solutions, we create the analysis and in-depth podcasts on enterprise software and cloud trends that help fuel the social media revolution. As a veteran IT analyst, Dana Gardner moderates discussions and interviews that get to the meat of the hottest technology topics. We define and forecast the business productivity effects of enterprise infrastructure, SOA and cloud advances. Our social media vehicles become conversational platforms, powerfully distributed via the BriefingsDirect Network of online media partners like ZDNet and IT-Director.com. As founder and principal analyst at Interarbor Solutions, Dana Gardner created BriefingsDirect to give online readers and listeners in-depth and direct access to the brightest thought leaders on IT. Our twice-monthly BriefingsDirect Analyst Insights Edition podcasts examine the latest IT news with a panel of analysts and guests. Our sponsored discussions provide a unique, deep-dive focus on specific industry problems and the latest solutions. This podcast equivalent of an analyst briefing session -- made available as a podcast/transcript/blog to any interested viewer and search engine seeker -- breaks the mold on closed knowledge. These informational podcasts jump-start conversational evangelism, drive traffic to lead generation campaigns, and produce strong SEO returns. Interarbor Solutions provides fresh and creative thinking on IT, SOA, cloud and social media strategies based on the power of thoughtful content, made freely and easily available to proactive seekers of insights and information. As a result, marketers and branding professionals can communicate inexpensively with self-qualifying readers/listeners in discreet market segments. BriefingsDirect podcasts hosted by Dana Gardner: full turnkey planning, moderating, producing, hosting, and distribution via blogs and IT media partners of essential IT knowledge and understanding.
