
Big Data Architect III in Broomfield, CO at Genesys Talent LLC

Date Posted: 12/5/2018


Job Description

Description:
Designs, develops, and implements infrastructure to provide highly complex, reliable, and scalable databases that meet the organization's objectives and requirements. Analyzes the organization's business requirements for database design and executes changes to the database as required. 6-8 years of experience.

Position Overview:
As a Big Data (Hadoop) Architect, you will be responsible for Cloudera Hadoop development and high-speed querying; managing and deploying Flume, Hive, and Pig; testing prototypes and overseeing handover to operational teams; and proposing best practices and standards. Requires expertise in designing, building, installing, configuring, and developing a Cloudera Hadoop ecosystem.

Principal Duties and Responsibilities (Essential Functions):
•Work with cross-functional consulting teams within the data science and analytics team to design, develop, and execute solutions that derive business insights and solve clients' operational and strategic problems.
•Build the platform using cutting-edge capabilities and emerging technologies, including the Data Lake and Cloudera data platform, which will be used by thousands of users.
•Work in a Scrum-based Agile team environment using Hadoop.
•Install and configure the Hadoop and HDFS environment using the Cloudera data platform.
•Create ETL and data ingest jobs using MapReduce, Pig, or Hive (a minimal ingest sketch follows this list).
•Work with and integrate multiple types of data, including unstructured, structured, and streaming.
•Support the development of data science and analytics solutions and products that improve existing processes and decision-making.
•Build internal capabilities to better serve clients and demonstrate thought leadership in the latest innovations in data science, big data, and advanced analytics.
•Contribute to business and market development.
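For illustration only, the sketch below shows one way an ETL/ingest job of the kind described above might look: a Spark job in Scala reading a raw table and writing a curated Hive table on a Cloudera cluster. The table names (landing.raw_events, analytics.daily_events) and the transformation are hypothetical placeholders, not part of this role's actual pipelines.

// Minimal ETL/ingest sketch (illustrative only): read raw structured data,
// apply a light transformation, and write to a Hive-managed table.
// Table and column names below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-ingest")
      .enableHiveSupport()            // assumes the Hive metastore is configured on the cluster
      .getOrCreate()

    // Read the raw landing table (hypothetical) from the data lake
    val raw = spark.table("landing.raw_events")

    // Light transformation: derive a partition column and drop malformed rows
    val cleaned = raw
      .filter(col("event_ts").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Write to a curated Hive table, partitioned by date
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_events")

    spark.stop()
  }
}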
Specific skills and abilities:
•Strong computer science and programming background
•Deep experience in data modeling, EDW, star, snowflake, and other schemas, and cubing (OLAP) technologies
•Ability to design and build data models and a semantic layer to access data sets
•Ability to own a complete functional area - from analysis to design to development and complete support
•Ability to translate high-level business requirements into detailed design
•Build integration between data systems (RESTful APIs, micro-batch, streaming) using technologies such as SnapLogic (iPaaS), Spark SQL, HQL, Sqoop, Kafka, Pig, and Storm (a minimal streaming sketch follows this list)
•Hands-on experience working with the Cloudera Hadoop ecosystem and technologies
•Strong desire to learn a variety of technologies and processes with a 'can do' attitude
•Experience guiding and mentoring 5-8 developers on various tasks
•Aptitude to identify, create, and use best practices and reusable elements
•Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardization exists
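As a hedged illustration of the streaming-integration item above, the sketch below consumes a Kafka topic with Spark Structured Streaming and lands micro-batches on HDFS as Parquet. The broker addresses, topic name, and paths are assumed placeholders, not details of this role's environment.

// Streaming-integration sketch (illustrative only): Kafka -> Spark Structured Streaming -> HDFS.
// Requires the spark-sql-kafka connector on the classpath; all names below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs")
      .getOrCreate()

    // Subscribe to the source topic (placeholder brokers and topic)
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast the payload to string for downstream parsing
    val payload = stream.selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Write micro-batches to HDFS, with checkpointing for recoverable, exactly-once file output
    val query = payload.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/landing/events")               // placeholder path
      .option("checkpointLocation", "hdfs:///checkpoints/events")  // placeholder path
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()

    query.awaitTermination()
  }
}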

Qualifications & Skills:

Bachelor's degree; Master's degree required.
Expertise with HBase, NoSQL, HDFS, Java MapReduce for Solr indexing, data transformation, back-end programming, Java, JavaScript, Node.js, and OOAD (an HBase write sketch follows below).
Hands-on experience in Scala and Python.
7+ years of experience in programming and data engineering, with a minimum of 2 years of experience in Cloudera Hadoop.
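Purely as an illustration of the HBase item above, here is a minimal write sketch in Scala using the standard HBase client API; the table name, column family, row key, and value are hypothetical and assume an hbase-site.xml is available on the classpath.

// HBase write sketch (illustrative only): connect, put one cell, and close.
// Table "user_profiles", column family "d", and the row key are hypothetical.
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseWriteSketch {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()              // picks up hbase-site.xml from the classpath
    val connection = ConnectionFactory.createConnection(conf)
    try {
      val table = connection.getTable(TableName.valueOf("user_profiles"))
      val put = new Put(Bytes.toBytes("user#1001"))     // row key
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("email"), Bytes.toBytes("a@example.com"))
      table.put(put)
      table.close()
    } finally {
      connection.close()
    }
  }
}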

***********************************************************
Additional Notes:
•1-year contract with potential for extension or FTE conversion
•Ideal candidate will come from a Cloudera testing background
•Ability to determine how to automate and implement best practices
•Top skill set: Cloudera, Spark, Hive, Sqoop, Flume, Kafka, HDFS, HBase, YARN, Sentry, Impala, Solr
•Seeking candidates who are readily available
•Interview process: initial phone screen, second technical interview via bridge, and final onsite interview