
IT Manager – Data Engineering in Duluth, GA at Lucas Group

Date Posted: 7/20/2018


Job Description

This role will have an emphasis on data engineering practices and leading a team of BI and data warehouse engineers implementing data engineering technologies, tools, and frameworks as part of the digital transformation to cloud-based technologies.

How You Will Impact The Company:

  • Help execute on the new data warehouse.
  • Design a consistent, unified data architecture that enables self-service reporting.
  • Collect and analyze large volumes of data, advancing the efficiency of our state-of-the-art optimization, predictive analytics, and self-service BI tools.
  • Lead a technical team of data engineers delivering a wide array of big data and self-service reporting solutions built on cutting-edge technologies.
  • Guide the organization in best practices for efficient data and resource management across cloud-based big data, data warehouse, and reporting platforms and services.

What you need to succeed:

  • At least 10 years of general software development and data engineering experience.
  • At least 5 years of experience developing and leveraging big data (Hadoop ecosystem), integration, and reporting platforms and services.
  • At least 3 years as a technical team lead.
  • Cloud experience (AWS) is preferred.
  • Hands-on experience in the following domains: Data Storage, Data Ingestion, Batch Processing, Stream Processing and Real-Time Message Ingestion, and/or Analysis and Reporting with Analytical Data Stores.
  • Hands-on experience with Enterprise Data Warehouse initiatives, including Data Model development, Semantic/Data Access Layer development, and Reporting and Dashboard creation, as well as developing solutions for shared data usage using cloud-based technologies in AWS (Redshift).
  • Experience developing complex reports, dashboards, and scorecards using technologies including, but not limited to: Power BI, Tableau, Qlik Sense, QlikView.
  • Education: Bachelor's Degree in Computer Science, or similar.
  • Domain Experience Details:
    • Data storage experience working with Polyglot Data Storage technologies in the cloud: Data Lake (HDFS, Blob storage), RDBMS (Oracle, SQL, SQL DW, Redshift), and NoSQL solutions (Key Value, Document, Column, and Graph); a brief sketch follows this list.
    • The installation, configuration, administration, and governance of these environments, on-premises and in the cloud.
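
    A minimal Python sketch of the polyglot-storage idea, assuming boto3 and hypothetical bucket/table names: the same event is written to an S3 data lake (blob storage) and to a DynamoDB key-value table.

      import json
      import boto3

      event = {"order_id": "42", "status": "shipped"}  # placeholder record

      # Blob/data-lake copy: land the raw JSON in S3 for later batch processing.
      s3 = boto3.client("s3")
      s3.put_object(Bucket="example-lake",              # hypothetical bucket
                    Key="raw/orders/42.json",
                    Body=json.dumps(event).encode("utf-8"))

      # Key-value copy: upsert the current state into DynamoDB for fast lookups.
      table = boto3.resource("dynamodb").Table("orders")  # hypothetical table
      table.put_item(Item=event)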
  • Data Ingestion:
    • Experience in data processing operations (ETL/ELT) that are optimized and scalable, encapsulated in workflows that transform source data, move data between multiple sources and sinks, and load the processed data into an analytical data store using cloud orchestration technology such as Azure Data Factory, Apache Oozie, Sqoop, AWS Data Pipeline, AWS Glue, or Informatica (a minimal sketch follows).
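
    One way an orchestrated ETL step can look, sketched with boto3 against AWS Glue (one of the tools named above); the job name is hypothetical:

      import time
      import boto3

      glue = boto3.client("glue")

      def run_etl(job_name: str) -> str:
          """Start a Glue job run and block until it reaches a final state."""
          run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
          while True:
              state = glue.get_job_run(JobName=job_name,
                                       RunId=run_id)["JobRun"]["JobRunState"]
              if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
                  return state
              time.sleep(30)  # poll every 30 seconds

      print(run_etl("orders_to_redshift"))  # hypothetical Glue job name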
  • Batch Processing:
    • Experience in the Hadoop ecosystem: reading source files, processing them, and writing the output to new files or data stores using Hive, Pig, or custom MapReduce jobs in a Hadoop cluster, or using Java, Scala, or Python programs in a Hadoop Spark cluster (a minimal sketch follows).
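
    A minimal PySpark batch job of the kind described, assuming placeholder paths and column names: read raw files, aggregate them, and write the result to an analytical store.

      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("daily_sales_batch").getOrCreate()

      # Read raw CSV files from the data lake (hypothetical path).
      raw = spark.read.csv("s3://example-lake/raw/sales/",
                           header=True, inferSchema=True)

      # Aggregate revenue per store per day.
      daily = (raw.groupBy("store_id", "sale_date")
                  .agg(F.sum("amount").alias("revenue")))

      # Write the output as partitioned Parquet for downstream analytics.
      (daily.write.mode("overwrite")
            .partitionBy("sale_date")
            .parquet("s3://example-lake/curated/daily_sales/"))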
  • Stream Processing and Real-Time Message Ingestion:
    • Experience with real-time data sources and message ingestion, preparing the data for analysis by filtering and aggregating it, with technologies such as Spark Streaming with Kafka, AWS Kinesis, Azure IoT Hub, or Event Hubs (a minimal sketch follows).
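
    A minimal Spark Structured Streaming sketch reading from Kafka, then filtering and aggregating before the data lands in an analytical store; brokers, topic, schema, and paths are all placeholders:

      from pyspark.sql import SparkSession, functions as F
      from pyspark.sql.types import (StructType, StructField, StringType,
                                     DoubleType, TimestampType)

      spark = SparkSession.builder.appName("purchase_stream").getOrCreate()

      schema = StructType([StructField("event_type", StringType()),
                           StructField("value", DoubleType()),
                           StructField("ts", TimestampType())])

      # Ingest JSON messages from a Kafka topic (hypothetical broker/topic).
      events = (spark.readStream.format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "clickstream")
                .load()
                .select(F.from_json(F.col("value").cast("string"),
                                    schema).alias("e"))
                .select("e.*"))

      # Filter, then aggregate into one-minute windows; the watermark bounds late data.
      totals = (events.filter(F.col("event_type") == "purchase")
                .withWatermark("ts", "2 minutes")
                .groupBy(F.window("ts", "1 minute"))
                .agg(F.sum("value").alias("total")))

      (totals.writeStream.outputMode("append").format("parquet")
             .option("path", "s3://example-lake/streams/purchases/")
             .option("checkpointLocation", "s3://example-lake/checkpoints/purchases/")
             .start()
             .awaitTermination())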
  • Analysis and Reporting with Analytical Data Stores:
    • Experience developing solutions (complex reports, dashboards, and scorecards) that provide insights into the data through analysis and reporting, in support of Enterprise Data Warehouse initiatives including Data Model development and Semantic/Data Access Layer development. Experience developing solutions for shared data usage using cloud-based data store technologies such as AWS (Redshift) or SSAS, or NoSQL technology such as HBase or Hive databases in a distributed data store. Experience with technologies including, but not limited to: Power BI, Tableau, Qlik Sense, QlikView. (A minimal sketch of a reporting query follows.)
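
    A minimal sketch of the reporting side, assuming psycopg2 and a hypothetical Redshift cluster and table; a BI tool such as Tableau or Power BI would typically issue a similar query through its own connector:

      import psycopg2

      conn = psycopg2.connect(
          host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
          port=5439, dbname="analytics", user="report_user", password="...")

      SQL = """
          SELECT region,
                 DATE_TRUNC('month', sale_date) AS month,
                 SUM(revenue) AS revenue
          FROM curated.daily_sales
          GROUP BY 1, 2
          ORDER BY 2, 1;
      """

      # psycopg2 connections are context managers that wrap a transaction.
      with conn, conn.cursor() as cur:
          cur.execute(SQL)
          for region, month, revenue in cur.fetchall():
              print(region, month, revenue)

      conn.close()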