I always encourage companies to break down their Big Data projects into smaller pieces. I call this process crawl, walk, run.
There is an interesting wrinkle to this process: some companies get stuck at the crawl phase and never progress to the walk and run phases. The first time I saw this, I was intrigued. How could a company just stop? Why would they stop when there’s so much more they could do?
Stopped at Crawl
You’re probably wondering what it looks like when your Big Data project stops at crawl.
It looks like you’ve cloned your data warehouse in Hadoop. No more work has gone into improving or using new technologies in the data pipeline.
That relates to a common question I get asked: is Hadoop a data warehouse? My answer is yes, Hadoop can be used as a data warehouse, but stopping at data warehousing is a terrible waste. Hadoop and its ecosystem can do so much more than a data warehouse can.
What’s the Source?
The source of the problem is having the wrong team or members of the team tasked with the Big Data transition. The common misconception is that a Data Engineer is the same thing as a DBA.
The two positions are very different. A DBA has a place on the data engineering team, but having a team of just DBAs leads to being stuck at crawling. Creating a Big Data pipeline requires Java skills.
The crawling phase of moving data out of an RDBMS and into Hadoop is easy; that’s why I call it the crawling phase. There is so much more that can be done with Hadoop, but it can’t be done with just SQL skills. You will need qualified data engineers who can create the complex data pipelines.
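To make the distinction concrete, the crawl step can be as little as one Apache Sqoop command that copies a table into HDFS. This is a sketch with placeholder connection details, table, and directory names; walking and running mean building real pipeline code on top of data like this, not just landing it:

```shell
# Hypothetical "crawl" step: copy one RDBMS table into HDFS with Sqoop.
# All connection details, names, and paths below are placeholders.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user \
  --password-file /user/etl/.db_password \
  --table orders \
  --target-dir /data/warehouse/orders \
  --num-mappers 4
```

A DBA with SQL skills can run commands like this all day, and that is exactly how a project ends up as a Hadoop clone of the data warehouse. The walk and run phases start when engineers write the transformation and processing code that consumes this data.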