Originally published on April 12, 2022.
As companies move to Spark and a Lakehouse architecture, they are realizing that data tools are lagging far behind. You need to be a programmer to use Spark and Airflow effectively.
There are some low-code ETL tools, but is that enough? Companies want to treat their data pipelines like mission-critical apps. They want DevOps for data, with Git, test coverage, and CI/CD. In addition, companies need end-to-end visibility into their data pipelines, including monitoring, metadata, and column-level lineage.
Where are the data tools for these capabilities, and how can non-programmers use them?
In today’s podcast, you’ll hear how Prophecy’s low-code data engineering platform, built on Spark and Airflow, provides a complete solution for developing, deploying, and monitoring data pipelines. We speak with Raj Bains, founder and CEO of Prophecy.
Raj has spent two decades building powerful developer tools: he worked on Microsoft Visual Studio, was a founding engineer for CUDA at NVIDIA, and was the product manager for Hive at Hortonworks. He is passionate about making all organizations productive with data, and about improving the lives of data engineers by reducing toil and increasing joy.
Also, if you like what you hear, Prophecy has a special offer for our listeners. Go to Prophecy.IO/request-a-demo and enter “SED” as the referral code. If you purchase within 3 months of the demo, you get 10% off; if you purchase after 3 months, you still get 5% off.
Sponsorship inquiries: email@example.com
This episode is from Software Engineering Daily, whose proprietor has full ownership of and responsibility for its contents and artwork.