Big Data Engineer
Experience: 4 to 8 years
- Implement solutions for large-scale data processing (GBs to PBs) on NoSQL, Hadoop, and MPP-based products, both on-premises and in the cloud
- Actively participate in various architecture and design calls with big data customers
- Develop Hive scripts and write MapReduce jobs
- Deliver complex projects involving considerable data volumes (GBs to PBs)
- Leverage your experience with Hadoop and software engineering to help our clients drive value from their data
- Work with senior architects and provide implementation details to offshore teams
- Conduct sessions and write whitepapers and case studies on big data topics
- Ensure timely, high-quality deliveries
- Fulfill organizational responsibilities: share knowledge and experience with other passionate Impetus professionals, and conduct technical development sessions and training on new technologies
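The MapReduce work mentioned above can be illustrated with the classic word-count job. This is a conceptual sketch in plain Python, not Hadoop API code; `map_phase`, `shuffle_phase`, and `reduce_phase` are illustrative names that mirror the three framework stages:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for each word, as a mapper would."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle: group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

def word_count(lines):
    return reduce_phase(shuffle_phase(map_phase(lines)))

counts = word_count(["big data big value", "data pipelines"])
# counts == {"big": 2, "data": 2, "value": 1, "pipelines": 1}
```

In a real Hadoop job the shuffle is handled by the framework and the map and reduce functions run distributed across the cluster; the data flow, however, is exactly this.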
- Bachelor's or Master's degree in Computer Science or a related field
- Strong Java/Scala experience
- Strong prior professional experience building distributed solutions that handle high data volumes
- Hands-on experience with HDFS, Hive, Pig, Sqoop, and NoSQL databases
- Experience with batch and real-time processing systems built on open-source technologies such as Solr, Spark, Storm, and Kafka
- At least 6 months of experience with Apache Spark and/or Spark Streaming
- Good understanding of algorithms, data structures, and performance optimization techniques, with exposure to the complete SDLC and PDLC
- Familiarity with architectural concepts (multi-tenancy, SOA, SCA, etc.) and NFRs (performance, scalability, monitoring, etc.)
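The real-time requirements above (Spark Streaming, Kafka) typically center on windowed aggregation. A minimal standard-library Python sketch of a tumbling-window count follows; `tumbling_window_counts` is a hypothetical helper written for illustration, not a Spark or Kafka API:

```python
from collections import Counter, defaultdict

def tumbling_window_counts(events, window_seconds):
    """Assign timestamped (ts, key) events to fixed, non-overlapping
    windows and count occurrences per key in each window, the way a
    streaming job might aggregate messages read from Kafka."""
    windows = defaultdict(Counter)
    for ts, key in events:
        # Floor the timestamp to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in sorted(windows.items())}

events = [(0, "click"), (3, "view"), (12, "click"), (14, "click")]
result = tumbling_window_counts(events, 10)
# result == {0: {"click": 1, "view": 1}, 10: {"click": 2}}
```

In Spark Streaming the same shape of computation would be expressed with a windowed aggregation over a DStream or Structured Streaming DataFrame, with the framework handling event arrival and state.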