Severalnines Blog
The automation and management blog for open source databases

123 blog posts in 13 categories

How to migrate ClusterControl to a new server

As your needs change and you start managing more database instances or larger centralized backups with ClusterControl, you might find that your controller host is over-utilized, or you might need to move your infrastructure to the cloud. In either case, you can migrate ClusterControl to another server.

Press Release: Severalnines primed for even stronger customer growth

Severalnines, a database clustering pioneer which helps businesses automate and manage highly available open source SQL and NoSQL database clusters, reported over 100 percent sales growth last year.

Database Automation - Private DBaaS for MySQL, MariaDB and MongoDB with ClusterControl

Installing, configuring, and deploying databases, as well as performing routine administrative tasks, are all part of a DBA’s or sysadmin’s job. This can get pretty repetitive and overwhelming if you are part of a centralized IT team running multiple databases for your organization’s different departments, or a managed hosting provider responsible for setting up and operating databases for external clients. One way to get out of this ‘manual, repetitive task’ business is through a Database as a Service (DBaaS).

Big Data Integration & ETL - Moving Live Clickstream Data from MongoDB to Hadoop for Analytics

MongoDB is great at storing clickstream data, but using it to analyze millions of documents can be challenging. Hadoop provides a way of processing and analyzing data at large scale. Since it is a parallel system, workloads can be split across multiple nodes and computations on large datasets can be completed in relatively short timeframes. MongoDB data can be moved into Hadoop using ETL tools like Talend or Pentaho Data Integration (Kettle).
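
To give a rough idea of the export step, here is a minimal Python sketch (assuming pymongo is installed, and using a hypothetical clickstream.events collection) that dumps documents as newline-delimited JSON, a format Hadoop tools can ingest. The post itself relies on ETL tools such as Talend or Kettle, which add scheduling, transformation and error handling on top of this basic flow.

```python
# Minimal sketch: export MongoDB documents as newline-delimited JSON for Hadoop.
# Assumptions: pymongo installed, a local mongod, and a hypothetical
# "clickstream" database with an "events" collection.
from bson import json_util
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["clickstream"]["events"]

with open("events.json", "w") as out:
    # json_util handles BSON types (ObjectId, dates) that plain json cannot.
    for doc in events.find({}, batch_size=1000):
        out.write(json_util.dumps(doc) + "\n")

# The resulting file can then be copied into HDFS, for example with:
#   hdfs dfs -put events.json /data/clickstream/
```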

Webinar Replay & Slides: Management & Automation of MongoDB Clusters

Thanks to everyone who attended last week’s joint webinar with the Tokutek Team; if you missed the sessions or would like to watch the webinar again & browse through the slides, they are now available online.

Webinar Replay & Slides: Repair & Recovery for Your MySQL, MariaDB & MongoDB / TokuMX Clusters

Thanks to everyone who attended this week’s webinar; if you missed the sessions or would like to watch the webinar again and browse through the slides, they are now available online.

Installing ClusterControl on Existing MongoDB Replica Set using bootstrap script

So, your development project has been humming along nicely on MongoDB, until it was time to deploy the application. That's when you called your operations person and things got uncomfortable. NoSQL, document database, collections, replica sets, sharding, config servers, query servers... What the hell's going on here?
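
For readers new to this terminology, a small Python sketch like the one below (pymongo; the connection string, host names and replica set name are hypothetical) shows what a replica set looks like from the client side: a group of members, exactly one of which is PRIMARY.

```python
# Sketch: inspect the members of an existing MongoDB replica set.
# Assumptions: pymongo installed and a user allowed to run replSetGetStatus;
# host names and the replica set name are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr is PRIMARY, SECONDARY, ARBITER, etc.
    print(member["name"], member["stateStr"])
```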

OpenStack Metering: How to Install Ceilometer with MongoDB

According to Wikipedia, a ceilometer is a device that uses a laser or other light source to determine the height of a cloud base. And it is also the name of the framework for monitoring and metering OpenStack. It collects measurements within OpenStack so that no two agents would need to be written to collect the same data.
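
As an illustration only, the sketch below peeks at the samples Ceilometer writes to MongoDB with pymongo. The "ceilometer" database, "meter" collection and field names follow Ceilometer's MongoDB storage backend of that era and are assumptions; credentials and hosts will differ in your deployment.

```python
# Sketch: look at metering samples Ceilometer has written to MongoDB.
# Assumptions: Ceilometer uses the MongoDB storage backend and writes to a
# "ceilometer" database with a "meter" collection; field names, credentials
# and host may differ in your deployment.
from pymongo import MongoClient

client = MongoClient("mongodb://ceilometer:password@localhost:27017/ceilometer")
meters = client["ceilometer"]["meter"]

# List the distinct meters collected so far (e.g. cpu_util, disk.read.bytes).
print(meters.distinct("counter_name"))

# Show the five most recent samples.
for sample in meters.find().sort("timestamp", -1).limit(5):
    print(sample.get("counter_name"), sample.get("counter_volume"))
```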

NoSQL Battle of the East Coast - Benchmarking MongoDB vs TokuMX Cluster

In this post, we will compare the performance of MongoDB and TokuMX, a MongoDB performance engine from Tokutek. We will conduct three simple experiments that (almost) anyone without any programming skills can try and reproduce. This way, we’ll be able to see how both products behave.
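
To give a flavour of what such an experiment can look like, here is a minimal Python insert benchmark using pymongo. The database name, document shape and document count are arbitrary choices for illustration, not the exact workloads described in the post.

```python
# Sketch: a very small insert benchmark, timing how long it takes to load
# N simple documents into a collection. All names and sizes are arbitrary.
import time

from pymongo import MongoClient

N = 100000
client = MongoClient("mongodb://localhost:27017")
coll = client["benchmark"]["docs"]
coll.drop()  # start from an empty collection

start = time.time()
coll.insert_many([{"i": i, "payload": "x" * 100} for i in range(N)])
elapsed = time.time() - start

print(f"{N} inserts in {elapsed:.2f}s ({N / elapsed:.0f} docs/s)")
```

Running the same script against MongoDB and TokuMX on identical hardware gives a first, rough point of comparison before moving on to more realistic workloads.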

MongoDB Tutorial - On-premises Cluster Management and Monitoring of MongoDB Replica Set

Replica Sets in MongoDB are very useful. They provide multiple copies of data, automated failover and read scalability. A Replica Set can consist of up to 12 nodes, with only one primary node (or master node) able to accept writes. In case of primary node failure, a new primary is auto-elected.
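
A short pymongo sketch (host names, replica set name and collection are hypothetical) shows how clients use these properties in practice: writes are routed to the primary automatically, and reads can be spread over secondaries with a read preference.

```python
# Sketch: connect to a replica set, write (routed to the primary) and read
# with a secondary-preferred read preference. Host names, the replica set
# name and the collection are hypothetical.
from pymongo import MongoClient, ReadPreference

client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0"
)
db = client["appdb"]

# Writes always go to the current primary; after a failover the driver
# discovers the newly elected primary on its own.
db["events"].insert_one({"type": "pageview", "path": "/"})

# Reads can be scaled out to secondaries when slightly stale data is acceptable.
ro_events = db.get_collection(
    "events", read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(ro_events.count_documents({"type": "pageview"}))
```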