One of the latest projects I worked on was an apartment listing website. The main search engine used to query properties in different cities was SOLR, and the product owner asked for some kind of analytics tool to dissect user searches, get the top searched cities, and so on.
My first reaction was to somehow read the SOLR logs that print the queries, parse each line properly, and then store the results in a new SOLR collection. Implementing that from scratch would have required at least a few days of work. After some investigation I found a nice piece of integration software called Solr Log Manager.
It is basically a bridge between Logstash and SOLR.
Logstash is a data pipeline that helps you process logs and other event data from a variety of systems. It lets you collect, parse, and store logs for later use.
Setting up Solr Log Manager is pretty straightforward: the Readme.md file is simple to follow, and Manual.md has extra information.
Configuring the lw_solr.conf file
After you set up Solr Log Manager you will have to customize the lw_solr.conf file to fit your needs. Below I will show the one I used on the project and describe the important parts. Many of them are intuitive; you can read the official Logstash documentation for more information.
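As a point of reference, the overall shape of the configuration is roughly the following. The paths and values are an illustrative sketch, not the exact file from the project, and the full grok pattern and output options are covered in the rest of this section.

```
input {
  file {
    type => "solrlog"
    path => [ "/opt/solr/logs/*" ]
  }
}

filter {
  grok {
    type         => "solrlog"      # restrict the filter to "solrlog" events (newer Logstash versions use an if [type] == "solrlog" conditional instead)
    patterns_dir => "./patterns"
    match        => [ "message", "INFO %{DATA} ..." ]   # full pattern shown below
  }
  mutate {
    # gsub / add_field transformations, described below
  }
}

output {
  lucidworks_solr_lsv133 {
    # connection details for the SOLR instance and collection to feed, described below
  }
}
```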
type => "solrlog" specifies the log type that is referenced later in the filter section.
path => [ "/opt/solr/logs/*" ] is the location of the SOLR logs.
patterns_dir => "./patterns" is the directory where I defined some useful custom patterns used in the match section.
match => ["message", "INFO %{DATA} %{TIMESTAMP_ISO8601:received_at}; %{DATA}; \[%{DATA:collection}\] webapp=%{DATA:webapp} path=%{DATA:search_handler} params={%{DATA}%{SORT:sort}%{DATA}%{QUERY_TERMS:query_terms}%{DATA}%{FILTER_QUERY_TERMS:filter_query_terms}%{DATA}} hits=%{BASE10NUM:hits} status=%{BASE10NUM:status} QTime=%{BASE10NUM:qtime}"]
This regular expression matches each section of the log line and captures it in its own field (the field name is defined after the colon character).
For example, webapp=%{DATA:webapp} specifies that everything after webapp= and before path= should be matched against the grok DATA pattern and, if it matches, stored in the webapp field.
Below are some custom patterns that I defined and placed in a file inside the ./patterns directory.
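The actual expressions depend on how your query strings look; as an illustration, a patterns file capturing the sort, q and fq parameters of the params block could look like this (the regular expressions here are assumptions, adapt them to your own logs):

```
# ./patterns/solr — illustrative custom grok patterns
SORT sort=[^&}]*
QUERY_TERMS q=[^&}]*
FILTER_QUERY_TERMS fq=[^&}]*
```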
Filters are applied in sequence from top to bottom. mutate.gsub is useful for removing unwanted characters and normalizing data after matching; as you can see, the already matched fields (which store each portion of the log line) are referenced in gsub.
mutate.add_field is used to add extra fields, in this case city and state, both filled with the query_terms field data. I used this trick to separate the city and state information while keeping the query_terms field intact. I then applied some transformations to get clean city and state values, as sketched below.
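Here is a sketch of that part of the filter, assuming the q parameter holds values of the form city,state (the exact gsub expressions are illustrative and depend on your data):

```
mutate {
  # strip the parameter names left over by the custom patterns
  gsub => [
    "sort", "sort=", "",
    "query_terms", "q=", "",
    "filter_query_terms", "fq=", ""
  ]
}
mutate {
  # duplicate query_terms into two new fields, leaving the original intact
  add_field => [ "city", "%{query_terms}", "state", "%{query_terms}" ]
}
mutate {
  # keep everything before the comma as the city and everything after it as the state
  gsub => [
    "city", ",.*$", "",
    "state", "^[^,]*,\s*", ""
  ]
}
```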
lucidworks_solr_lsv133 contains the information needed to reach the SOLR instance and the collection that will be fed.
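As a sketch, the output block points the plugin at that SOLR instance and collection; the option names below are assumptions, so check Manual.md for the exact parameters expected by the plugin:

```
output {
  lucidworks_solr_lsv133 {
    collection_host => "localhost"          # SOLR host (illustrative)
    collection_port => "8983"               # SOLR port (illustrative)
    collection_name => "search_analytics"   # collection that receives the parsed log events (illustrative)
  }
}
```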
Here is an example of a SOLR log line that would match the configuration above:
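For instance, a hypothetical line like the following (the values are made up, not taken from the project's logs) would be captured by the grok pattern; with a filter along the lines sketched above, boston would end up in the city field and ma in the state field:

```
INFO  - 2015-03-10 14:23:01.123; org.apache.solr.core.SolrCore; [listings] webapp=/solr path=/select params={sort=price+asc&q=boston,ma&fq=property_type:apartment} hits=42 status=0 QTime=3
```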