Deployment Performance Health Checks By @GrabnerAndi | @DevOpsSummit [#DevOps]

Deployment-related performance health problems that I always check when looking at a SharePoint installation

Five SharePoint Deployment Performance Health Checks: Beyond Making Sure It's Running

In my first blog I wrote about SharePoint System Performance Health Checks beyond looking at CPU and memory metrics. In this blog, I cover deployment-related performance health problems that I always check when looking at a SharePoint installation. Especially after deploying new hardware, new sites, pages, views, or custom or third-party Web Parts (e.g., from AvePoint, K2, Nintex, or Metalogix), it is important to perform certain deployment sanity checks. Even if nobody is reporting issues at the moment, there are several areas you need to check continuously before they become a real problem.

Feel free to follow all of my steps using either your own tools or the Dynatrace Free Trial with our SharePoint FastPack.

Step #1: Optimize Connectivity Between Services
My first step is to analyze which components are involved when I navigate through SharePoint. Looking at the Transaction Flow (from Browser to Database) allows me to answer some key questions:

  • How much load is coming in and is that distributed correctly across my IIS Instances?
  • How many requests are actually making it to the SharePoint AppPools?
  • Which external services are we calling and how does that impact our response time?
  • Which databases are accessed and does that impact performance?

Transaction Flow allows me to understand how a request flows through the system, which servers, sites, databases, and external services are involved, and where there might be a bottleneck.
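
If you want to script a quick version of this first sanity check yourself, here is a minimal sketch in Python, assuming W3C-format IIS logs; the server names and log paths are hypothetical and need to be adapted to your own web front-ends:

# Minimal sketch: summarize incoming load per IIS front-end from W3C-format
# IIS logs. The log paths below are assumptions -- adjust them to match your
# own web front-end servers and IIS logging settings.
from collections import Counter
from pathlib import Path

# Hypothetical per-server log locations (one folder per web front-end)
LOGS = {
    "WFE01": Path(r"\\WFE01\c$\inetpub\logs\LogFiles\W3SVC1"),
    "WFE02": Path(r"\\WFE02\c$\inetpub\logs\LogFiles\W3SVC1"),
}

def count_requests(log_dir: Path) -> int:
    """Count non-comment lines (one line per request) in all .log files."""
    total = 0
    for log_file in log_dir.glob("*.log"):
        with log_file.open(errors="ignore") as f:
            total += sum(1 for line in f if not line.startswith("#"))
    return total

counts = Counter({server: count_requests(path) for server, path in LOGS.items()})
grand_total = sum(counts.values())
for server, count in counts.most_common():
    share = 100.0 * count / grand_total if grand_total else 0.0
    print(f"{server}: {count} requests ({share:.1f}% of total)")

A heavily skewed split between the front-ends is a hint that load balancing is not configured the way you expect.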

Step #2: Resolve Any HTTP 4xx & 5xx
Often overlooked problems are deployment mistakes that lead to HTTP Errors. JavaScript files or images that are not correctly deployed can result in broken functionality on your SharePoint pages. Even though end users may not complain, these issues undermine design and negatively impact usability. Looking at your HTTP Response Codes allows you to understand which resources are currently not being correctly served.

Analyze which requests result in HTTP errors and therefore impact your end users. If they are deployment-related, fix them before they impact too many of your users.
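
You can also do a quick version of this check straight from the IIS logs. The following sketch is only an illustration: the log path is a placeholder, and the column layout is read from the file's own #Fields: header, so it should adapt to whatever your IIS instance logs:

# Minimal sketch: surface HTTP 4xx/5xx responses from a W3C-format IIS log.
# Field positions are taken from the "#Fields:" header line; the log path is
# an assumption.
from collections import Counter
from pathlib import Path

LOG_FILE = Path(r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex250101.log")  # hypothetical

errors = Counter()
fields = []
with LOG_FILE.open(errors="ignore") as f:
    for line in f:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # e.g. date time cs-uri-stem sc-status ...
            continue
        if line.startswith("#") or not fields:
            continue
        row = dict(zip(fields, line.split()))
        status = row.get("sc-status", "")
        if status.startswith(("4", "5")):
            errors[(status, row.get("cs-uri-stem", "?"))] += 1

# Most frequent failing resources first -- deployment mistakes (missing JavaScript
# files, images, or Web Part resources) tend to cluster at the top of this list.
for (status, uri), count in errors.most_common(20):
    print(f"{count:6d}  HTTP {status}  {uri}")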

Step #3: Eliminate Bad Web Parts
Third-party (e.g., from AvePoint, K2, Nintex, Metalogix...) or custom-developed Web Parts are heavily used in SharePoint installations. But what if they don't work because a configuration setting is missing or the deployment went wrong? I always do a sanity check by looking at:

  • Exceptions happening during loading of a Web Part assembly. This tells me I made a deployment mistake.
  • Exceptions happening during execution of a Web Part when a page gets rendered. This typically indicates a configuration mistake by the person who put that Web Part on that page.
  • Web Parts that have very long execution times or consume a lot of CPU or memory

When a Web Part is not correctly deployed, SharePoint will throw exceptions like the one above; end users will only see a blank area.

Configuration mistakes in Web Part settings can cause a Web Part to fail or run slowly. Watch out for exceptions or slow executions triggered by Web Parts, learn which page has this problem, and fix it.
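
Between full diagnostics sessions, a lightweight way to watch for these problems is to scan the SharePoint ULS trace logs for Web Part related errors. The sketch below is an assumption-laden illustration: the LOGS folder, the severity filter, and the keyword match all need to be adapted to your farm and SharePoint version:

# Minimal sketch: scan SharePoint ULS trace logs for Web Part related errors.
# ULS files are tab-separated; the directory, severity levels, and keyword
# filter below are assumptions -- point the script at your farm's LOGS folder.
from pathlib import Path

ULS_DIR = Path(r"C:\Program Files\Common Files\microsoft shared"
               r"\Web Server Extensions\16\LOGS")   # hypothetical (SharePoint 2016 layout)
KEYWORDS = ("webpart", "web part")                  # crude match for Web Part messages
LEVELS = ("Critical", "Unexpected", "High")         # ULS severity levels to keep

hits = []
for log_file in ULS_DIR.glob("*.log"):
    with log_file.open(errors="ignore") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) < 8:
                continue
            # Typical ULS columns: Timestamp, Process, TID, Area, Category,
            # EventID, Level, Message, Correlation
            timestamp, level, message = cols[0], cols[6].strip(), cols[7]
            if level in LEVELS and any(k in message.lower() for k in KEYWORDS):
                hits.append((timestamp, level, message[:120]))

# Repeated load-time exceptions point to a deployment mistake; render-time
# exceptions usually point to a mis-configured Web Part on a specific page.
for timestamp, level, message in hits[-20:]:
    print(f"{timestamp}  [{level}]  {message}")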

For steps 4 & 5, click here for the full article

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.
