DevOps to Drive Business | @DevOpsSummit @Logzio #DevOps #Microservices

Marketers and DevOps engineers will need to work together to use the data to solve certain problems

DevOps Is Changing from Solving Problems to Driving Business

DevOps has traditionally played important roles in development and IT operations, but the practice is quickly becoming core to other business functions such as customer success, business intelligence, and marketing analytics.

Marketers today are driven by data and rely on many different analytics tools. To do their jobs well, they need DevOps engineers in general and server log data in particular. Here's why: server log files contain the only data that is complete and accurate in showing how search engines such as Google are crawling websites.

If a search engine spider encounters an error and does not load a page, the webmaster does not know because traditional traffic analytics tools such as Google Analytics do not track those issues. Log file data, on the other hand, does reveal what problems bots are encountering on a website - and many of those issues can hurt a site's appearance and rankings in Google.
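
As a rough illustration of what that log data looks like, here is a minimal sketch of pulling the status code and user agent out of one access log line. The sample line is hypothetical, and the regex assumes the default Apache/Nginx combined log format; real deployments may use a custom format:

```python
import re

# Field layout of the Apache/Nginx combined log format (an assumption here).
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

# Hypothetical log line: Googlebot hitting a page that returns a 404.
line = ('66.249.66.1 - - [10/Oct/2015:13:55:36 +0000] '
        '"GET /pricing HTTP/1.1" 404 512 "-" '
        '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')

record = LOG_RE.match(line).groupdict()
# A bot error that Google Analytics would never show you:
is_bot_error = ('Googlebot' in record['agent']
                and record['status'].startswith(('4', '5')))
print(record['path'], record['status'], is_bot_error)   # /pricing 404 True
```

This is exactly the kind of record a log analytics platform parses and aggregates for you at scale.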

Too many response code errors can lead Google to cut the rate at which it crawls your company's website. You want to monitor and confirm that search engines are crawling everything you want to appear in public search results (everything else should block search engine bots). When pages are assigned new URLs, it's important that the redirects pass incoming links to the right destinations.

What Is SEO?
Contrary to what too many charlatans still proclaim (and unfortunately too many people still believe), "SEO" is not a bag of tricks to rank first in Google. This is 2015, not 2000. As I explain in a personal essay and whenever I speak at digital marketing conferences, here is the definition of "SEO":

SEO is helping search engines to crawl, parse, index, and then display your website in organic search results for desired, relevant keywords and search queries.

Server log files - in addition to items including XML sitemaps, schema markup, website hierarchy, internal linking practices, meta tags, mobile-responsive design, and site speed - must be examined and addressed when needed to do exactly that.

How to Examine Server Log Files
DevOps engineers have traditionally used proprietary software to analyze the logs of their systems, networks, servers, and applications. However, the open-source ELK Stack - Elasticsearch, Logstash, and Kibana - has become extremely popular and is now used by companies including Netflix, LinkedIn, Facebook, Microsoft, and Cisco. (We use the ELK Stack to monitor our own environment, and to help the DevOps community, our CEO, Tomer Levy, has written a guide to deploying the platform.)

Regardless of how you choose to analyze your server log files, marketers and DevOps engineers will need to work together to use the data to solve certain problems. Here is a partial list of them (with examples from our own web server using one of our analytical dashboards).

What DevOps Engineers Need to See
Server Bot Crawl Volume

The number of requests made by search engine crawlers is important to know. If the marketing and sales teams want website content to be included in search results in Yandex in Russia but the search engine is not crawling your website, that is a significant issue. (In response, you'd want to see the Yandex Webmaster documentation and this reference article on Search Engine Land.)
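A sketch of checking that from the logs, assuming combined-format lines in which the user agent is the final quoted field (all sample lines are hypothetical): tally requests per known crawler and see which bots are absent.

```python
from collections import Counter

# Hypothetical access-log lines; only the user-agent field matters here.
lines = [
    '1.2.3.4 - - [10/Oct/2015:13:55:36 +0000] "GET / HTTP/1.1" 200 100 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '1.2.3.5 - - [10/Oct/2015:13:56:01 +0000] "GET /a HTTP/1.1" 200 100 "-" "Mozilla/5.0 (compatible; bingbot/2.0)"',
    '1.2.3.4 - - [10/Oct/2015:13:57:12 +0000] "GET /b HTTP/1.1" 200 100 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
]
BOTS = ('Googlebot', 'bingbot', 'YandexBot')

volume = Counter()
for line in lines:
    agent = line.rsplit('"', 2)[1]   # final quoted field = user agent
    for bot in BOTS:
        if bot in agent:
            volume[bot] += 1

print(dict(volume))   # {'Googlebot': 2, 'bingbot': 1}
```

A zero count for YandexBot here is exactly the signal described above: the search engine you care about is not crawling the site at all.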

Response Code Errors

For those who might need a refresher, the popular SEO software company Moz has a great guide to the meanings behind the different status codes. I have a Logz.io alert set up to tell me when 4XX and 5XX errors are found because those are significant in both marketing and IT contexts.
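The managed alerting does this for me, but the underlying filter is simple; a standalone sketch over hypothetical combined-format lines:

```python
# Hypothetical log lines; the status code is the first token after the quoted request.
lines = [
    '1.2.3.4 - - [10/Oct/2015:14:00:00 +0000] "GET /ok HTTP/1.1" 200 100 "-" "Googlebot"',
    '1.2.3.4 - - [10/Oct/2015:14:00:30 +0000] "GET /gone HTTP/1.1" 404 0 "-" "Googlebot"',
    '1.2.3.4 - - [10/Oct/2015:14:01:00 +0000] "GET /boom HTTP/1.1" 500 0 "-" "Googlebot"',
]

def error_status(line):
    # Take the token immediately after the closing quote of the request.
    status = line.split('" ')[1].split()[0]
    return status if status[0] in '45' else None   # keep only 4XX/5XX

alerts = [s for s in map(error_status, lines) if s]
print(alerts)   # ['404', '500']
```

In a real pipeline, a nonempty result like this would trigger the alert.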

Temporary Redirects

302 redirects, which are used when a URL is redirected only for a temporary period of time, do not pass what SEOs call "link juice" to the new URL. (The more and better the links that point to a given web page, the better the chance that it will rank highly in search engines.) It's better to use 301 redirects (permanent redirects) instead.
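Spotting lingering 302s in the logs is straightforward; a minimal sketch over hypothetical combined-format lines, listing paths that answered with a temporary redirect:

```python
# Hypothetical log lines; flag paths returning 302 that should likely be 301s.
lines = [
    '1.2.3.4 - - [10/Oct/2015:14:02:00 +0000] "GET /old-pricing HTTP/1.1" 302 0 "-" "Googlebot"',
    '1.2.3.4 - - [10/Oct/2015:14:03:00 +0000] "GET /blog HTTP/1.1" 200 900 "-" "Googlebot"',
]

temporary_redirects = []
for line in lines:
    request, _, rest = line.partition('" ')     # split at the end of the quoted request
    status = rest.split()[0]
    if status == '302':
        path = request.split('"')[1].split()[1]  # second token of the request: the path
        temporary_redirects.append(path)

print(temporary_redirects)   # ['/old-pricing']
```

Each path in that list is a candidate for conversion to a 301.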

Crawl Budget Waste & Duplicate Crawling

Google assigns a crawl budget to every website based on a lot of different factors -- if a website's budget is, say, 1 GB of page data per day, then it is crucial to ensure that the 1 GB consists only of pages that the company wants to appear in public search results.

Even though technical SEOs and DevOps engineers can block any or all search engines in robots.txt files and meta-robots tags, Google might still be crawling advertising landing pages, internal scripts, web pages with sensitive information, and more. Log files will list every URL that search engines are crawling -- regardless of what you may have instructed them not to access.

If you hit your given crawl limit but still have new material on your website that you want to be found in search results, Google might leave your website before indexing it. Duplicate URL crawling -- often caused by the addition of URL parameters for tracking marketing campaigns -- is one of the most common causes of crawl waste.

To fix this issue, I would see the guides on Google and Search Engine Land here, here, here, and here.
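The parameter-duplication pattern shows up directly in the crawled URLs; a minimal sketch (the paths are hypothetical) that groups parameterized URLs under their query-free path to surface duplicate crawling:

```python
from urllib.parse import urlsplit

# Hypothetical paths pulled from Googlebot entries in the logs.
crawled = [
    '/blog/post',
    '/blog/post?utm_source=newsletter',
    '/blog/post?utm_source=twitter&utm_campaign=launch',
    '/pricing',
]

# Map each canonical (query-free) path to the parameter variants crawled for it.
duplicates = {}
for url in crawled:
    parts = urlsplit(url)
    if parts.query:
        duplicates.setdefault(parts.path, []).append(parts.query)

print(duplicates)   # {'/blog/post': ['utm_source=newsletter', 'utm_source=twitter&utm_campaign=launch']}
```

Every extra entry under a path is crawl budget spent on a page Google has already seen.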

Crawl Priority

Google might have deemed an important part of your website not worthy of being crawled too often. The log files will show which individual pages and which subdirectories as a whole are crawled most and least often. If this is the case, you can change the crawl-priority settings in your XML sitemaps to tell Google that a given part of your site is updated often enough to deserve more frequent crawling.
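Grouping crawled paths by their top-level section makes the imbalance obvious; a small sketch over hypothetical paths extracted from bot log entries:

```python
from collections import Counter

# Hypothetical Googlebot-crawled paths from the logs.
paths = ['/blog/a', '/blog/b', '/blog/c', '/docs/setup', '/']

def section(path):
    # First path segment, or '/' for the site root.
    first = path.strip('/').split('/')[0]
    return '/' + first if first else '/'

by_section = Counter(section(p) for p in paths)
print(by_section.most_common())   # [('/blog', 3), ('/docs', 1), ('/', 1)]
```

If a section you consider important sits at the bottom of this ranking, that is the signal to adjust crawl priority in the sitemap.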

Last Crawl Date

Have you added something to your website that you need Google to index as soon as possible? The log files contain data that tell you when a URL was last crawled by a given search engine.
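Extracting that is a simple max-timestamp-per-URL aggregation; a sketch over hypothetical (timestamp, path) pairs pulled from Googlebot entries, using the timestamp format common to Apache/Nginx logs:

```python
from datetime import datetime

# Hypothetical (timestamp, path) pairs extracted from Googlebot log entries.
crawls = [
    ('10/Oct/2015:13:55:36', '/pricing'),
    ('12/Oct/2015:09:10:05', '/pricing'),
    ('11/Oct/2015:08:00:00', '/blog/post'),
]

# Keep only the most recent crawl timestamp for each URL.
last_seen = {}
for stamp, path in crawls:
    ts = datetime.strptime(stamp, '%d/%b/%Y:%H:%M:%S')
    if path not in last_seen or ts > last_seen[path]:
        last_seen[path] = ts

print(last_seen['/pricing'])   # 2015-10-12 09:10:05
```

A page missing from this map, or with a stale timestamp, has not been crawled recently at all.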

Crawl Budget

How often Google crawls our website is one thing that I personally like to check because the overall crawl volume is a rough proxy for how much the search engine "likes" a site. After all, Google does not want to waste resources on poor websites.

From Negative to Positive
DevOps used to be all about problems - engineers, after all, have always monitored platform performance, fixed cluster disconnects, and taken care of similar issues. If an employee at a company knew the operations person, it meant that the employee had a lot of problems.

Today, DevOps has the opportunity to be more visible and help in a positive way by becoming the information driver within an organization. DevOps engineers now provide the data that supports countless business decisions - and marketing is just another area in which they can help.

Logz.io makes log data meaningful by offering ELK, the world's most popular open-source log analytics platform, as a service with features including alerts, role-based access, and unlimited scalability. If you are interested in the technical SEO dashboards in this article (and other dashboards for DevOps purposes), you can get more information here.

More Stories By Samuel Scott

Samuel Scott is Director of Marcom for log analytics software platform Logz.io. Follow him and Logz.io on Twitter.


