Solr vs. Elasticsearch — How to Decide?

By Otis Gospodnetić

[Otis is a Lucene, Solr, and Elasticsearch expert and co-author of “Lucene in Action” (1st and 2nd editions).  He is also the founder and CEO of Sematext. See full bio below.]

“Solr or Elasticsearch?”…well, at least that is the common question I hear from Sematext’s consulting services clients and prospects.  Which one is better, Solr or Elasticsearch?  Which one is faster?  Which one scales better?  Which one can do X, and Y, and Z?  Which one is easier to manage?  Which one should we use?  Which one do you recommend? etc., etc.

These are all great questions, though not always with clear, definite, universally applicable answers.  So which one do we recommend you use?  How do you choose in the end?  Let me share how I see the past, present, and future of Solr and Elasticsearch, do a bit of comparing and contrasting, and hopefully help you make the right choice for your particular needs.

Early Days: Youth vs. Experience

Apache Solr is a mature project with a large and active development and user community behind it, as well as the Apache brand.  First released to open-source in 2006, Solr has long dominated the search engine space and was the go-to engine for anyone needing search functionality.  Its maturity translates to rich functionality beyond vanilla text indexing and searching, such as faceting, grouping (aka field collapsing), powerful filtering, pluggable document processing, pluggable search chain components, language detection, and more.

Solr dominated the search scene for several years.  Then, around 2010, Elasticsearch appeared as another option on the market.  Back then it was nowhere near as stable as Solr, did not have Solr’s feature depth, and did not have the mindshare, brand, and so on.  But it had a few other things going for it: Elasticsearch was young and built on more modern principles, aimed at more modern use cases, and designed to make handling of large indices and high query rates easier.  Moreover, because it was so young and without a community to answer to, it had the freedom to move forward in leaps and bounds, without requiring consensus or cooperation with others (users or developers), backwards compatibility, or anything else that more mature software typically has to handle.  As such it exposed certain highly sought-after functionality (e.g., Near Real-Time Search) before Solr did.  Technically speaking, the ability to have NRT Search really came from Lucene, the search library underlying both Solr and Elasticsearch.  The irony is that because Elasticsearch exposed NRT Search first, people associated NRT Search with Elasticsearch, even though Solr and Lucene are both part of the same Apache project and, as such, one would expect Solr to have had such highly demanded functionality first.
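The NRT model described above can be sketched in a few lines.  This is a purely conceptual toy, not Elasticsearch or Solr code: newly indexed documents sit in an in-memory buffer and only become visible to searches after a refresh (the reopening of an index reader in Lucene terms).  All class and method names here are made up for illustration.

```python
# Toy illustration of the near-real-time (NRT) search model used by
# Lucene-based engines: freshly indexed documents live in a buffer
# that is invisible to queries until a "refresh" happens.

class NRTIndex:
    def __init__(self):
        self._searchable = []   # documents visible to queries
        self._buffer = []       # recently indexed, not yet visible

    def index(self, doc: str) -> None:
        self._buffer.append(doc)

    def refresh(self) -> None:
        # Analogous to reopening the index reader: buffered
        # documents become searchable.
        self._searchable.extend(self._buffer)
        self._buffer.clear()

    def search(self, term: str) -> list:
        return [d for d in self._searchable if term in d]

idx = NRTIndex()
idx.index("quick brown fox")
print(idx.search("fox"))   # [] -- indexed but not yet refreshed
idx.refresh()
print(idx.search("fox"))   # ['quick brown fox']
```

The "near" in near-real-time is exactly the gap between `index()` and the next `refresh()`; real engines trigger refreshes on a short interval (e.g., every second) rather than per document, trading a little visibility latency for indexing throughput.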

Elasticsearch, being more modern, appealed to several groups of people and organizations:

  • those who didn’t yet have a search engine and hadn’t invested a lot of time, money, and energy in its adoption, integration, etc.
  • those who had to deal with large volumes of data and needed to more easily shard and replicate data (search indices) and shrink or grow their search cluster

Of course, let’s admit it, there will always be those who like jumping on new shiny objects, too.
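The sharding mentioned in the second bullet boils down to deterministic document routing: a stable hash of a routing key (by default the document id) modulo the number of primary shards.  The sketch below mirrors that idea; the function name is illustrative, and CRC-32 stands in for the Murmur3 hash Elasticsearch actually uses.

```python
import zlib

# Sketch of shard routing in an Elasticsearch-style engine: every
# node computes the same shard for the same id, so any node can
# accept an indexing request and forward it to the right shard.
def route_to_shard(doc_id: str, num_primary_shards: int) -> int:
    # crc32 stands in for any stable hash function (ES uses Murmur3).
    return zlib.crc32(doc_id.encode("utf-8")) % num_primary_shards

for doc_id in ("user-1", "user-2", "user-3"):
    print(doc_id, "-> shard", route_to_shard(doc_id, 5))
```

This formula is also why the primary shard count has traditionally been fixed at index creation time: changing the modulus would reroute existing documents to different shards.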

Evening the Search Playing Field

Fast forward to 2014 and now 2015.  Elasticsearch is no longer new, but it’s still shiny.  It has closed the feature gap with Solr and, in some cases, surpassed it.  It certainly has more buzz around it.  At this point both projects are very mature.  Both have lots of features.  Both are stable.  I have to say, though, that I do see more Elasticsearch clusters with issues, but I think that is primarily for a few reasons:

  • Elasticsearch, traditionally being easier to get started with, made it possible for anyone to start using it out of the box, without too much understanding of how things work.  That’s great to get started, but dangerous when data/cluster grows.
  • Elasticsearch, lending itself to easier scaling, attracts use cases demanding larger clusters with more data and more nodes.
  • Elasticsearch is more dynamic – data can easily move around the cluster as its nodes come and go, and this can impact stability and performance of the cluster.
  • While Solr has traditionally been more geared toward text search, Elasticsearch is aiming to handle analytical types of queries, too, and such queries come at a price.
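To make the last bullet concrete, here is the kind of analytical request it refers to: an Elasticsearch terms aggregation that groups documents by a field and counts them, with `"size": 0` so no individual hits are returned.  On a live cluster this body would be POSTed to an index’s `_search` endpoint; no cluster is assumed here, and the aggregation and field names are made up for illustration.

```python
import json

# A terms aggregation: "group by status, count documents per bucket".
# Building many such buckets over millions of documents is the
# memory/CPU "price" the bullet above alludes to.
query = {
    "size": 0,  # we want only the aggregated buckets, not the hits
    "aggs": {
        "by_status": {
            "terms": {"field": "status.keyword"}
        }
    }
}
print(json.dumps(query, indent=2))
```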

Although this may sound scary, let me put it this way: Elasticsearch exposes a ton of control knobs one can play with to tame the beast.  Of course, the key bit is that one has to be aware of all those knobs, know what they do, and make use of them.  For example, despite what you just read about Elasticsearch, we rely on it in our organization for several different products, even though we know Solr just as well as we know Elasticsearch.

Solr: Not Totally Eclipsed

What about Solr?  Solr hasn’t exactly stood still.  The appearance of Elasticsearch was actually great for Solr and its community of developers and users.  Despite being almost 10 years old, Solr development is going faster than ever.  It, too, has a friendly API now.  It, too, has the ability to more easily grow and shrink clusters, create indices more dynamically, shard them on the fly, route documents and queries, and so on.  Note: when people refer to SolrCloud they specifically mean this form of fully distributed, Elasticsearch-like Solr deployment.

I recently attended a Lucene/Solr Revolution conference in Washington, DC and was pleasantly surprised by what I saw: a strong community, healthy project, lots of big name companies not only using Solr, but investing in it through adoption, contribution through development/engineering time, etc.  If you follow just the news you’d be led to believe Solr is dead and everyone is just flocking to Elasticsearch.  That is actually not the case.  Elasticsearch being newer, is naturally more interesting to write about.  Solr was news 5+ years ago.  And of course there were some people going from Solr to Elasticsearch when Elasticsearch appeared — in the beginning there were simply no Elasticsearch users.

So which is better?  Which one should you use?  Where do Solr and Elasticsearch differ?  What does the future hold?

Here are some other things you should keep in mind:

  • Both are released under the Apache Software License
  • Solr is truly open-source — community over code.  Anyone can contribute to Solr, and new Solr developers (aka committers) are elected based on merit.  Elasticsearch is technically open-source, but less so in spirit.  Anyone can see the source, and anyone can change it and offer a contribution, but only employees of Elasticsearch (the company) can actually commit changes to Elasticsearch.
  • Solr contributors and committers come from a number of different organizations, while Elasticsearch committers are from a single company.
  • A number of organizations have chosen Solr over Elasticsearch as their horses in the search race (e.g. Cloudera, Hortonworks, MapR, etc.) even though they’ve also partnered with Elasticsearch.
  • Both Solr and Elasticsearch have lively user and developer communities and are rapidly being developed.
  • If you need to add certain missing functionality to either Solr or Elasticsearch, you may have more luck with Solr.  True, there are ancient Solr JIRA issues that are still open, but at least they are still open and not closed.  In Solr world the community has a bit more say even though at the end of the day it’s one of the Solr developers who has to accept and handle the contribution.
  • Both have good commercial support (consulting, production support, integration, etc.)
  • Both have good operational tools around them, although Elasticsearch has, because of its easier-to-work-with API, attracted the DevOps crowd a lot more, thus enabling a livelier ecosystem of tools around it.
  • Elasticsearch dominates the open-source log management use case — lots of organizations index their logs in Elasticsearch to make them searchable.  While Solr can now be used for this, too (see Solr for Indexing and Searching Logs and Tuning Solr for Logs), it just missed the mindshare boat on this one.
  • Solr is still much more text-search-oriented.  On the other hand, Elasticsearch is often for filtering and grouping – the analytical query workload – and not necessarily text search.  Elasticsearch developers are putting a lot of effort into making such queries more efficient (lowering of the memory footprint and CPU usage) at both Lucene and Elasticsearch level.  As such, at this point in time, Elasticsearch is a better choice for applications that need to do not just text search, but also complex search-time aggregations.
  • Elasticsearch is a bit easier to get started – a single download and a single command to get everything started.  Solr has traditionally required a bit more work and knowledge, but Solr has recently made great strides to eliminate this and now just has to work on changing its reputation.
  • Performance-wise, they are roughly the same.  I say “roughly”, because nobody has ever done comprehensive and non-biased benchmarks.  For 95% of use cases either choice will be just fine in terms of performance, and the remaining 5% need to test both solutions with their particular data and their particular access patterns.
  • Operationally speaking, Elasticsearch is a bit simpler to work with – it has just a single process.  Solr, in its Elasticsearch-like fully distributed deployment mode known as SolrCloud, depends on Apache ZooKeeper.  ZooKeeper is super mature, super widely used, etc. etc., but it’s still another moving part.  That said, if you are using Hadoop, HBase, Spark, Kafka, or a number of other newer distributed software, you are likely already running ZooKeeper somewhere in your organization.
  • While Elasticsearch has a built-in ZooKeeper-like component called Zen, ZooKeeper is better at preventing the dreaded split-brain problem sometimes seen in Elasticsearch clusters.  To be fair, Elasticsearch developers are aware of this problem and are working on improving this aspect of Elasticsearch.
  • If you love monitoring and metrics, with Elasticsearch you’ll be in heaven.  The thing has more metrics than people you can squeeze in Times Square on New Year’s Eve!  Solr exposes the key metrics, but nowhere near as many as Elasticsearch.  Regardless, having comprehensive monitoring and centralized logging tools like Sematext’s SPM Performance Monitoring and Logsene Log Management and Analytics — especially when they work seamlessly together like these two do — is essential if you want to have a handle on metrics and other operational data.
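The split-brain problem mentioned above is typically mitigated with a quorum rule: a master may only be elected if a strict majority of master-eligible nodes is visible.  Elasticsearch has historically exposed this as the `discovery.zen.minimum_master_nodes` setting; the arithmetic behind the recommended value is just a majority calculation, sketched below.

```python
# Quorum rule used to avoid split-brain: require a strict majority
# of master-eligible nodes before electing a master, so a
# partitioned minority can never elect a second, conflicting master.
def minimum_master_nodes(master_eligible: int) -> int:
    return master_eligible // 2 + 1

# With 3 master-eligible nodes, 2 must agree; an isolated single
# node therefore cannot become master on its own.
for n in (1, 3, 5):
    print(n, "master-eligible ->", minimum_master_nodes(n))
```

This is also why odd numbers of master-eligible nodes (or ZooKeeper ensemble members, on the Solr side) are the usual recommendation: an even count raises the quorum without improving fault tolerance.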

Here are a few charts to demonstrate what I mean:

Elasticsearch user mailing list traffic: 36,127 (source: Search-Lucene)

Solr user mailing list traffic is about two thirds of the Elasticsearch mailing list traffic.  Of course, this could be because there are more Elasticsearch users, or because there are more problems with Elasticsearch and users are in need of more help, or perhaps they are just a chattier bunch.

Elasticsearch vs. Solr Contributors (source: Open Hub)

As you can see, Elasticsearch numbers are trending sharply upward and are now more than double Solr’s in terms of commit activity.  This is not a very precise or absolutely correct way to compare open-source projects, but it gives us some data points.  For example, Elasticsearch is developed on GitHub, which makes it very easy to merge others’ pull requests, while Solr contributors tend to create patches and upload them to JIRA, where they get reviewed by Solr committers before being applied — a less streamlined process.  Moreover, the Elasticsearch repository contains documentation, not just code, while Solr keeps its documentation in a wiki.  This contributes to higher commit and contributor numbers for Elasticsearch.

Boil It Down For Me

In conclusion, here are the bits that I think make the most difference for anyone having to make a choice:

  • If you’ve already invested a lot of time in Solr, stick with it, unless there are specific use cases that it just doesn’t handle well.  If you think that is the case, speak to somebody close to both Solr and Elasticsearch projects to save you time, guessing, research, and avoid mistakes.
  • If you are a strong believer in true open-source, Solr is closer to that than Elasticsearch, and having one company control Elasticsearch may be a turn-off.
  • If you need a data store that can handle analytical queries in addition to text searching, Elasticsearch is a better choice for that today.

If you expected a single definitive winner, I’m sorry to disappoint.  We don’t have one here.  However, I hope this quick comparison of the two leading open-source search engines provides enough information and guidance to help you make the right choice for your organization.

About the author: in addition to being a Lucene, Solr, and Elasticsearch expert and author, Otis Gospodnetić is the founder and CEO of Sematext. Sematext is a globally distributed organization that builds innovative Cloud and On Premises solutions for performance monitoring, alerting and anomaly detection of Solr, Elasticsearch, Hadoop, Spark and many other applications (SPM), log management and analytics (Logsene), site search analytics (SSA), and search enhancement. The company also provides Search and Big Data consulting services and offers 24/7 production support for Solr and Elasticsearch to clients worldwide.

[Note: the original version of this article appeared at Datanami.com.]

