DevOps to Drive Business | @DevOpsSummit @Logzio #DevOps #Microservices

Marketers and DevOps engineers will need to work together to use the data to solve certain problems

DevOps Is Changing from Solving Problems to Driving Business

DevOps has traditionally played important roles in development and IT operations, but the practice is quickly becoming core to other business functions such as customer success, business intelligence, and marketing analytics.

Modern marketers are driven by data and rely on many different analytics tools. They need DevOps engineers in general and server log data in particular to do their jobs well. Here's why: server log files contain the only data that is truly complete and accurate when it comes to how search engines such as Google crawl websites.

If a search engine spider encounters an error and does not load a page, the webmaster does not know, because traditional traffic analytics tools such as Google Analytics do not track those issues. Log file data, on the other hand, does reveal what problems bots are encountering on a website - and many of those issues can hurt a site's appearance and rankings in Google.
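To make that concrete, here is a minimal sketch (not the Logz.io pipeline) that scans an Apache/Nginx combined-format access log for search-engine bot requests that returned errors - exactly the failures Google Analytics never records. The log path, the combined log format, and the bot user-agent tokens are assumptions about a typical web server.

```python
# Scan a combined-format access log for bot requests that returned errors.
# "access.log", the log format, and the bot tokens are assumptions.
import re

LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)
BOTS = ("Googlebot", "bingbot", "YandexBot")

with open("access.log") as log:
    for raw in log:
        m = LINE.match(raw)
        if not m:
            continue
        status = int(m.group("status"))
        agent = m.group("agent")
        bot = next((b for b in BOTS if b in agent), None)
        if bot and status >= 400:
            print(f"{bot} received {status} on {m.group('path')}")
```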

Too many response code errors can lead Google to cut the rate at which it crawls your company's website. You want to monitor and confirm that search engines are crawling everything that you want to appear in public search results (everything else should be blocked from search engine bots). And when pages are assigned new URLs, it's important that the redirects pass incoming links to the right destinations.

What Is SEO?
Contrary to what too many charlatans still proclaim (and unfortunately too many people still believe), "SEO" is not a bag of tricks to rank first in Google. This is 2015, not 2000. As I explain in a personal essay and whenever I speak at digital marketing conferences, here is my definition of "SEO":

SEO is helping search engines to crawl, parse, index, and then display your website in organic search results for desired, relevant keywords and search queries.

Server log files - along with items such as XML sitemaps, schema markup, website hierarchy, internal linking practices, meta tags, mobile-responsive design, and site speed - must be examined and, when needed, addressed to do exactly that.

How to Examine Server Log Files
DevOps engineers have traditionally used proprietary software to analyze the logs of their systems, networks, servers, and applications. However, the open-source ELK Stack - Elasticsearch, Logstash, and Kibana - has become extremely popular and is now used by companies including Netflix, LinkedIn, Facebook, Microsoft, and Cisco. (We use the ELK Stack to monitor our own environment, and to help the DevOps community, our CEO, Tomer Levy, has written a guide to deploying the platform.)
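If your access logs are already flowing into Elasticsearch, a few lines of the official Python client can pull the same answers out of the index. This is a rough sketch only: the local endpoint, the "logstash-*" index pattern, and the "agent" and "response" field names are assumptions about a default Logstash pipeline and may differ in your setup.

```python
# Query an ELK-indexed access log for Googlebot traffic, broken down by
# HTTP status code. Endpoint, index pattern, and field names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

result = es.search(
    index="logstash-*",
    size=0,
    query={"match_phrase": {"agent": "Googlebot"}},
    aggs={"status_codes": {"terms": {"field": "response.keyword"}}},
)

for bucket in result["aggregations"]["status_codes"]["buckets"]:
    print(f"HTTP {bucket['key']}: {bucket['doc_count']} Googlebot requests")
```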

Regardless of how you choose to analyze your server log files, marketers and DevOps engineers will need to work together to use the data to solve certain problems. Here is a partial list of them (with examples from our own web server using one of our analytical dashboards).

What DevOps Engineers Need to See
Server bot crawl volume

The number of requests made by search engine crawlers is important to know. If the marketing and sales teams want website content to be included in search results in Yandex in Russia but the search engine is not crawling your website, that is a significant issue. (In response, you'd want to see the Yandex Webmaster documentation and this reference article on Search Engine Land.)
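A simple way to check, sketched below under the assumption of a raw access log on disk: count requests per crawler user agent and warn when an engine you care about (Yandex, in this example) never shows up. The file path and user-agent substrings are assumptions; add whichever crawlers your marketing team targets.

```python
# Count crawl volume per search-engine bot from a raw access log.
# The file path and the user-agent tokens are assumptions.
from collections import Counter

BOT_TOKENS = ["Googlebot", "bingbot", "YandexBot", "Baiduspider", "DuckDuckBot"]

crawl_volume = Counter()
with open("access.log") as log:
    for line in log:
        for token in BOT_TOKENS:
            if token in line:
                crawl_volume[token] += 1
                break

for bot, hits in crawl_volume.most_common():
    print(f"{bot}: {hits} requests")

if crawl_volume["YandexBot"] == 0:
    print("Warning: no Yandex crawl activity found in this log")
```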

Response code errors

For those who might need a refresher, the popular SEO software company Moz has a great guide on the meanings behind different status codes. I have a Logz.io alert system set up to tell me when 4XX and 5XX errors are found, because those are significant in both marketing and IT contexts.
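The snippet below is not the Logz.io alerting feature, just a rough stand-in for the idea: count 4XX and 5XX responses served to bots in already-parsed log entries and flag when they pass a threshold. The entry shape, sample data, and threshold value are all assumptions for illustration.

```python
# Flag bot-facing 4XX/5XX responses once they pass a threshold.
# Entry shape, sample records, and threshold are assumptions.
ERROR_THRESHOLD = 50  # tune to your site's normal baseline

entries = [
    {"path": "/pricing", "status": 404, "agent": "Googlebot/2.1"},
    {"path": "/blog/post", "status": 200, "agent": "Googlebot/2.1"},
    {"path": "/api/internal", "status": 500, "agent": "bingbot/2.0"},
]

errors = [
    e for e in entries
    if e["status"] >= 400 and ("Googlebot" in e["agent"] or "bingbot" in e["agent"])
]

for e in errors:
    print(f"{e['agent']} hit {e['status']} on {e['path']}")

if len(errors) > ERROR_THRESHOLD:
    print(f"ALERT: {len(errors)} bot-facing errors (threshold {ERROR_THRESHOLD})")
```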

Temporary redirects

302 redirects, which are used when a URL is redirected only for a temporary period of time, do not pass what SEOs call "link juice" to the new URL. (The more and better the links that point to a given web page, the better the chance that it will rank highly in search engines.) It's better to use 301 redirects (permanent redirects) instead.
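A quick sketch for spotting those cases: list the 302 responses served to Googlebot so each one can be reviewed and, where appropriate, replaced with a 301. The entries below are placeholder parsed log records, not real data.

```python
# List temporary redirects served to Googlebot. Sample records are placeholders.
entries = [
    {"path": "/old-pricing", "status": 302, "agent": "Googlebot/2.1"},
    {"path": "/old-features", "status": 301, "agent": "Googlebot/2.1"},
]

for e in entries:
    if e["status"] == 302 and "Googlebot" in e["agent"]:
        print(f"302 served to Googlebot for {e['path']} - consider a 301 instead")
```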

Crawl budget waste & duplicate crawling

Google assigns a crawl budget to every website based on many different factors - if a website's budget is, say, 1 GB of page data per day, then it is crucial to ensure that the entire 1 GB consists only of pages that the company wants to appear in public search results.

Even though technical SEOs and DevOps engineers can block any or all search engines in robots.txt files and meta-robots tags, Google might still be crawling advertising landing pages, internal scripts, web pages with sensitive information, and more. Log files will list every URL that is being crawled by search engines - regardless of what you may have instructed them not to access.
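One way to surface those cases, sketched here with only the Python standard library: cross-check the URLs seen in your crawl logs against your robots.txt rules. The robots.txt content and crawled paths below are made up for illustration; in practice both would come from your site and your log files.

```python
# Cross-check crawled paths against robots.txt rules (standard library only).
# The robots.txt content, domain, and paths are placeholders.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /internal/
Disallow: /landing-pages/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

crawled_paths = ["/blog/devops-seo", "/internal/admin.php", "/landing-pages/ppc-a"]

for path in crawled_paths:
    if not parser.can_fetch("Googlebot", f"https://example.com{path}"):
        print(f"Disallowed path still appearing in crawl logs: {path}")
```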

If you hit your crawl limit but still have new material on your website that you want to be found in search results, Google might leave your website before indexing it. Duplicate URL crawling - often caused by the addition of URL parameters to track marketing campaigns - is one of the most common causes of crawl waste.
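A small sketch of how to spot that kind of duplication: group crawled URLs by their bare path and count how many parameter variants were crawled for each. The sample URLs are placeholders for URLs pulled from your log files.

```python
# Detect duplicate crawling caused by tracking parameters by grouping
# crawled URLs on their bare path. Sample URLs are placeholders.
from collections import defaultdict
from urllib.parse import urlsplit

crawled_urls = [
    "/pricing?utm_source=twitter",
    "/pricing?utm_source=newsletter&utm_campaign=march",
    "/pricing",
    "/blog/devops-seo",
]

variants = defaultdict(set)
for url in crawled_urls:
    parts = urlsplit(url)
    variants[parts.path].add(parts.query)

for path, queries in variants.items():
    if len(queries) > 1:
        print(f"{path} crawled under {len(queries)} parameter variants (crawl waste)")
```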

To fix this issue, I would see the relevant guides from Google and Search Engine Land.

Crawl priority

Google might have deemed an important part of your website not worthy of being crawled very often. The log files will show which individual URLs, and which subdirectories as a whole, are crawled most and least often. If an important section is being neglected, you can change the crawl-priority settings in your XML sitemaps to tell Google that a given part of your site is updated often enough to deserve more frequent crawling.
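A minimal sketch of that breakdown: aggregate Googlebot hits by top-level subdirectory to see which sections are crawled most and least often. The sample paths stand in for paths extracted from your own log files.

```python
# Break down Googlebot hits by top-level subdirectory. Paths are placeholders.
from collections import Counter

googlebot_paths = ["/blog/post-1", "/blog/post-2", "/docs/install", "/pricing"]

by_section = Counter(
    "/" + path.lstrip("/").split("/", 1)[0] for path in googlebot_paths
)

for section, hits in by_section.most_common():
    print(f"{section}: {hits} Googlebot requests")
```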

Last crawl date

Have you added something to your website that you need Google to index as soon as possible? The log files contain the data that will tell you when a URL was last crawled by a given search engine.
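Here is a small sketch of pulling the most recent crawl timestamp per URL from parsed log entries. The entry shape, timestamps, and ISO date format are placeholders for however your own pipeline stores the request time.

```python
# Find the most recent Googlebot crawl timestamp per URL.
# Entry shape, timestamps, and date format are assumptions.
from datetime import datetime

entries = [
    {"path": "/new-feature", "agent": "Googlebot/2.1", "time": "2015-10-01T08:12:45"},
    {"path": "/new-feature", "agent": "Googlebot/2.1", "time": "2015-10-03T19:02:10"},
    {"path": "/pricing", "agent": "Googlebot/2.1", "time": "2015-09-28T11:40:00"},
]

last_crawled = {}
for e in entries:
    if "Googlebot" not in e["agent"]:
        continue
    ts = datetime.fromisoformat(e["time"])
    if e["path"] not in last_crawled or ts > last_crawled[e["path"]]:
        last_crawled[e["path"]] = ts

for path, ts in sorted(last_crawled.items()):
    print(f"{path} last crawled by Googlebot on {ts:%Y-%m-%d %H:%M}")
```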

Crawl budget

How often Google crawls our website is one thing that I personally like to check, because the overall crawl volume is a rough proxy for how much the search engine "likes" a site. After all, Google does not want to waste resources on poor websites.

From Negative to Positive
DevOps used to be all about problems - engineers, after all, have always monitored platform performance, fixed cluster disconnects, and taken care of similar issues. If an employee at a company knew the operations person, it meant that the employee had a lot of problems.

Today, DevOps has the opportunity to be more visible and help in a positive way by becoming the information driver within an organization. DevOps engineers now provide the data that supports countless business decisions - and marketing is just another area in which they can help.

Logz.io makes log data meaningful by offering ELK, the world's most popular open-source log analytics platform, as a service with features including alerts, role-based access, and unlimited scalability. If you are interested in the technical SEO dashboards in this article (and other dashboards for DevOps purposes), you can get more information here.

More Stories By Samuel Scott

Samuel Scott is Director of Marcom for log analytics software platform Logz.io. Follow him and Logz.io on Twitter.
