Severalnines Blog
The automation and management blog for open source databases


A Review of the New Analytic Window Functions in MySQL 8.0

This blog post provides an overview of MySQL window functions. A window function performs an aggregate-like operation on a set of query rows; however, whereas an aggregate operation collapses those rows into a single result row, a window function produces a result for each query row.
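
To make the contrast concrete, here is a minimal sketch in MySQL 8.0 syntax; the `sales` table and its columns are hypothetical, used purely for illustration:

```sql
-- Hypothetical table: sales(employee, region, amount).

-- Aggregate: rows are collapsed, one result row per region.
SELECT region, SUM(amount) AS region_total
FROM sales
GROUP BY region;

-- Window function: every input row is preserved, each annotated
-- with the total for its region.
SELECT employee, region, amount,
       SUM(amount) OVER (PARTITION BY region) AS region_total
FROM sales;
```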

Hybrid OLTP/Analytics Database Workloads: Replicating MySQL Data to ClickHouse

Columnar data stores provide much better performance for analytics queries than regular relational databases like MySQL. ClickHouse is an example of such a datastore: queries that take minutes to execute in MySQL complete in less than a second. In this blog post, we show how to tackle the challenge of replicating data from MySQL to ClickHouse.
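
The post walks through the full replication setup; as a taste of the idea, ClickHouse can attach to an existing MySQL table through its MySQL table engine and copy the rows into a native columnar table. This is just one of several possible approaches, not necessarily the one the post uses, and the host, credentials, schema, and table names below are all placeholders:

```sql
-- ClickHouse: expose an existing MySQL table (all values are placeholders).
CREATE TABLE mysql_orders
(
    order_id   UInt64,
    order_date Date,
    amount     Decimal(10, 2)
)
ENGINE = MySQL('mysql-host:3306', 'shop', 'orders', 'app_user', 'app_password');

-- Copy the rows into a native MergeTree table, so analytics
-- queries run against ClickHouse's columnar storage.
CREATE TABLE orders_local
ENGINE = MergeTree()
ORDER BY (order_date, order_id)
AS SELECT * FROM mysql_orders;
```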

Analytics with MariaDB AX - the Open Source Columnar Datastore

What is a columnar datastore? When is it a better fit than a traditional row-oriented datastore? In this blog post, we take a look at MariaDB AX and how it fits into this landscape.

Big Data Integration & ETL - Moving Live Clickstream Data from MongoDB to Hadoop for Analytics

MongoDB is great at storing clickstream data, but using it to analyze millions of documents can be challenging. Hadoop provides a way of processing and analyzing data at large scale. Since it is a parallel system, workloads can be split across multiple nodes, and computations on large datasets can be completed in relatively short timeframes. MongoDB data can be moved into Hadoop using ETL tools like Talend or Pentaho Data Integration (Kettle).

Archival and Analytics - Importing MySQL data into Hadoop Cluster using Sqoop

We won’t bore you with buzzwords like volume, velocity and variety. This post is for MySQL users who want to get their hands dirty with Hadoop, so roll up your sleeves and prepare for work. Why would you ever want to move MySQL data into Hadoop? One good reason is archival and analytics. You might not want to delete old data, but rather move it into Hadoop and make it available for further analysis at a later stage.
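
For a flavor of what the hands-on work looks like, a basic Sqoop import of a single MySQL table into HDFS runs along these lines; the connection string, credentials, table, and target directory are placeholders:

```bash
# Import one MySQL table into HDFS; every value below is a placeholder.
# -P prompts for the MySQL password at runtime.
sqoop import \
  --connect jdbc:mysql://mysql-host:3306/mydb \
  --username app_user -P \
  --table old_orders \
  --target-dir /archive/old_orders \
  -m 4  # number of parallel map tasks
```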