Databricks open sources Delta Lake for data lake reliability

Ali Ghodsi of Databricks

Databricks, a specialist in Unified Analytics founded by the original creators of Apache Spark, has announced a new open source project called Delta Lake to deliver reliability to data lakes.

Delta Lake is the first production-ready open source technology to provide data lake reliability for both batch and streaming data. This new open source project will enable organisations to transform their existing messy data lakes into clean Delta Lakes with high-quality data, thereby accelerating their data and machine learning initiatives.

While attractive as an initial sink for data, data lakes suffer from data reliability challenges. Unreliable data in data lakes prevents organisations from deriving business insights quickly and significantly slows down strategic machine learning initiatives. Data reliability challenges arise from failed writes, schema mismatches, data inconsistencies when mixing batch and streaming data, and the need to support multiple writers and readers simultaneously.

“Today nearly every company has a data lake they are trying to gain insights from, but data lakes have proven to lack data reliability. Delta Lake has eliminated these challenges for hundreds of enterprises. By making Delta Lake open source, developers will be able to easily build reliable data lakes and turn them into ‘Delta Lakes’,” says Ali Ghodsi, co-founder and CEO at Databricks.

Delta Lake delivers reliability by managing transactions across streaming and batch data and across multiple simultaneous readers and writers. Delta Lake can be easily plugged into any Apache Spark job as a data source, enabling organisations to gain data reliability with minimal change to their data architectures. With Delta Lake, organisations no longer need to spend resources building complex and fragile data pipelines to move data across systems. Instead, developers can have hundreds of applications reliably upload and query data at scale.
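The mechanism behind this is Delta Lake's ordered transaction log (the `_delta_log` directory of numbered JSON commit files): writers publish each change as an atomic commit, and readers reconstruct a consistent snapshot by replaying the log. The following is a minimal stand-alone Python sketch of that log-based idea, not the real Delta Lake implementation or API; the class name and action format here are invented for illustration.

```python
import json
import os
import tempfile


class ToyDeltaLog:
    """Toy illustration of a Delta-style transaction log: each commit is a
    numbered JSON file, and readers replay the log for a consistent view."""

    def __init__(self, table_dir):
        self.log_dir = os.path.join(table_dir, "_delta_log")
        os.makedirs(self.log_dir, exist_ok=True)

    def _next_version(self):
        commits = [int(f.split(".")[0])
                   for f in os.listdir(self.log_dir) if f.endswith(".json")]
        return max(commits) + 1 if commits else 0

    def commit(self, actions):
        """Write the next commit file atomically and return its version."""
        version = self._next_version()
        path = os.path.join(self.log_dir, f"{version:020d}.json")
        fd, tmp = tempfile.mkstemp(dir=self.log_dir)
        with os.fdopen(fd, "w") as f:
            json.dump(actions, f)
        # An atomic rename publishes the commit in one step; a failed writer
        # leaves only an unreferenced temp file, never a half-written table.
        os.rename(tmp, path)
        return version

    def snapshot(self):
        """Replay all commits in order to reconstruct the current file set."""
        files = set()
        for name in sorted(os.listdir(self.log_dir)):
            if not name.endswith(".json"):
                continue
            with open(os.path.join(self.log_dir, name)) as f:
                for action in json.load(f):
                    if action["op"] == "add":
                        files.add(action["path"])
                    elif action["op"] == "remove":
                        files.discard(action["path"])
        return files
```

In real Delta Lake deployments, applications never touch the log directly; they read and write through Spark, e.g. `df.write.format("delta").save(path)` and `spark.read.format("delta").load(path)`.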

With Delta Lake, developers will be able to undertake local development and debugging on their laptops to quickly develop data pipelines. They will be able to access earlier versions of their data for audits, rollbacks or reproducing machine learning experiments. They will also be able to convert their existing Parquet files (a commonly used format for storing large datasets) to Delta Lake in place, avoiding the need for substantial reading and rewriting.
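Access to earlier versions works because every version of a Delta table is recoverable from its transaction log: reading "as of" version N simply replays the commits up to N. Below is a minimal stand-alone Python sketch of that idea under the same toy commit-file format as above; it is an illustration, not Delta Lake's actual code.

```python
import json
import os


def snapshot_as_of(log_dir, version):
    """Toy 'time travel': rebuild table state by replaying only commit files
    whose version number is <= the requested version."""
    files = set()
    for name in sorted(os.listdir(log_dir)):
        if not name.endswith(".json"):
            continue
        if int(name.split(".")[0]) > version:
            break  # log files sort by version, so we can stop here
        with open(os.path.join(log_dir, name)) as f:
            for action in json.load(f):
                if action["op"] == "add":
                    files.add(action["path"])
                elif action["op"] == "remove":
                    files.discard(action["path"])
    return files
```

The real Delta Lake API exposes this through Spark, e.g. `spark.read.format("delta").option("versionAsOf", 0).load(path)` to read a table as it was at version 0.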

The Delta Lake project can be found at delta.io and is under the permissive Apache 2.0 license. This technology is deployed in production by organisations such as Viacom, Edmunds, Riot Games, and McGraw Hill.

“We’ve believed right from the onset that innovation happens in collaboration – not isolation. This belief led to the creation of the Spark project and MLflow. Delta Lake will foster a thriving community of developers collaborating to improve data lake reliability and accelerate machine learning initiatives,” adds Ghodsi.

