
Data Engineering with Databricks Cookbook: Hands-on recipes for building effective solutions using Apache Spark, Databricks, and Delta Lake


70 recipes to learn how to implement reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data.

Key Features
- Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake
- Gain practical guidance on using Delta Lake tables and orchestrating data pipelines
- Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks

Book Description
Apache Spark is a powerful open-source distributed computing system that enables fast and flexible data processing, and Delta Lake is an open-source storage layer that provides reliability, performance, and scale for data lakes. Data Engineering with Databricks Cookbook shows you how to use Apache Spark, Delta Lake, and Databricks effectively for data engineering, beginning with an introduction to data ingestion and loading with Apache Spark.

You will then be introduced to a range of data manipulation and transformation solutions.

You'll discover how to manage and optimize Delta tables, as well as how to ingest and process streaming data.

You'll learn how to diagnose and fix performance problems in Apache Spark applications and Delta Lake.

Later chapters will teach you how to use Databricks to implement DataOps and DevOps practices.

You'll then learn how to orchestrate and schedule data pipelines using Databricks Workflows.

Finally, you will go over how to set up and configure Unity Catalog for data governance. By the end of this book, you'll be able to build reliable data pipelines with modern data engineering technologies and have a comprehensive understanding of how to make those pipelines efficient and scalable.

What You Will Learn
- Perform data loading, ingestion, and processing with Apache Spark
- Discover data transformation techniques and custom UDFs in Apache Spark
- Manage and optimize Delta tables with Apache Spark and Delta Lake APIs
- Use Spark Structured Streaming for real-time data processing
- Optimize Apache Spark application and Delta table query performance
- Implement DataOps and DevOps practices on Databricks
- Orchestrate data pipelines with Delta Live Tables and Databricks Workflows
- Implement data governance policies with Unity Catalog

Who This Book Is For
This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks.

To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python.

Product Details
Price: £29.99
Publisher: Packt Publishing Limited
ISBN-10 / ISBN-13: 1837632065 / 9781837632060
Format: Digital (delivered electronically)
Publication date: 31/05/2024
Country of publication: United Kingdom
Extent: 454 pages