Severalnines Blog
The automation and management blog for open source databases


35 blog posts in 1 category

Become a MongoDB DBA: Monitoring and Trending (part 1)

This is our third post in the “Become a MongoDB DBA” blog series. It gives a primer on monitoring MongoDB: how to ship metrics using free open source tools.

Webinar: Become a MongoDB DBA - What to Monitor (if you’re really a MySQLer)

In this second webinar of the ‘Become a MongoDB DBA’ series, we will show you how…

Become a MongoDB DBA: The Basics of Configuration

This is our second post in the “Become a MongoDB DBA” blog series. It covers MongoDB configuration, in particular replica sets, security, authorization, SSL, and the HTTP/REST API.

Become a MongoDB DBA: provisioning and deployment

This post is the first in the “Become a MongoDB DBA” blog series and will cover provisioning and deployment.

Watch the replay: Become a MongoDB DBA (if you’re really a MySQL user)

Here are the replay details for our new webinar series: ‘How to Become a MongoDB DBA’.

High Availability Log Processing with Graylog, MongoDB and ElasticSearch

This blog post discusses how to deploy a Graylog cluster, with a MongoDB Replica Set deployed using ClusterControl.

How to Configure Drupal with MongoDB Replica Set

Drupal’s modular setup allows different datastores to be integrated as modules, so sites can store various types of Drupal data in MongoDB. You can choose to store Drupal’s cache, session, watchdog, block information, queue and field storage data in either a standalone MongoDB instance or a MongoDB Replica Set, in conjunction with MySQL as the default datastore. If you’re looking at clustering your entire Drupal setup, see this blog on how to cluster MySQL and the file system.
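As a sketch of what this wiring looks like with the Drupal 7 mongodb module (hostnames, database name, replica set name and file paths below are placeholder values, not taken from the post):

```php
<?php
// sites/default/settings.php — illustrative values only.
$conf['mongodb_connections'] = array(
  'default' => array(
    // List every replica set member; the replicaSet option lets the
    // driver discover the primary and fail over automatically.
    'host' => 'mongodb://mongo1:27017,mongo2:27017,mongo3:27017',
    'db' => 'drupal',
    'connection_options' => array('replicaSet' => 'rs0'),
  ),
);
// Route Drupal's cache to MongoDB while MySQL remains the default datastore.
$conf['cache_backends'][] = 'sites/all/modules/mongodb/mongodb_cache/mongodb_cache.inc';
$conf['cache_default_class'] = 'DrupalMongoDBCache';
```

Session, watchdog and the other storage types listed above are enabled the same way, each through its own submodule setting.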

Big Data Integration & ETL - Moving Live Clickstream Data from MongoDB to Hadoop for Analytics

MongoDB is great at storing clickstream data, but using it to analyze millions of documents can be challenging. Hadoop provides a way of processing and analyzing data at large scale. Since it is a parallel system, workloads can be split on multiple nodes and computations on large datasets can be done in relatively short timeframes. MongoDB data can be moved into Hadoop using ETL tools like Talend or Pentaho Data Integration (Kettle).
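Whichever ETL tool you pick, the core of such a pipeline is flattening BSON documents into a line-oriented format that Hadoop can split across nodes. A minimal sketch in Python (the field names and the newline-delimited JSON target are illustrative assumptions, not details from the post):

```python
import json
from datetime import datetime, timezone

def doc_to_ndjson(doc):
    """Flatten one MongoDB-style clickstream document into a single JSON
    line, a format Hadoop tools (Hive, Pig, MapReduce) can consume directly."""
    out = dict(doc)
    # BSON ObjectId and datetime values are not JSON-serializable as-is;
    # stringify them before writing the line out.
    out["_id"] = str(out["_id"])
    ts = out.get("timestamp")
    if isinstance(ts, datetime):
        out["timestamp"] = ts.isoformat()
    return json.dumps(out, sort_keys=True)

# Hypothetical clickstream documents as they might come from a cursor.
docs = [
    {"_id": 1, "url": "/pricing", "user": "u42",
     "timestamp": datetime(2016, 5, 1, 12, 0, tzinfo=timezone.utc)},
]
lines = [doc_to_ndjson(d) for d in docs]
print(lines[0])
```

In a real pipeline the `docs` list would be a MongoDB cursor and the lines would land in HDFS; Talend and Kettle perform essentially this transformation for you, with batching and scheduling on top.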

Installing ClusterControl on an Existing MongoDB Replica Set Using the Bootstrap Script

So, your development project has been humming along nicely on MongoDB, until it was time to deploy the application. That's when you called your operations person and things got uncomfortable. NoSQL, document database, collections, replica sets, sharding, config servers, query servers,... What the hell's going on here?

OpenStack Metering: How to Install Ceilometer with MongoDB

According to Wikipedia, a ceilometer is a device that uses a laser or other light source to determine the height of a cloud base. It is also the name of the OpenStack framework for monitoring and metering. It collects measurements within OpenStack so that no two agents need to be written to collect the same data.
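Pointing Ceilometer at MongoDB comes down to its database connection setting. A sketch of the relevant fragment (hostname, credentials and database name are placeholders, not values from the post):

```ini
# /etc/ceilometer/ceilometer.conf — illustrative values only.
[database]
# Store metering samples in MongoDB instead of the default SQL backend.
connection = mongodb://ceilometer:password@mongo1:27017/ceilometer
```

The rest of the post walks through installing MongoDB itself and restarting the Ceilometer services so they pick up the new backend.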
