Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:

- Strategic Data Model Design and ETL Optimization: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Robust Data Infrastructure Management: Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Seamless Data Accessibility and Security Coordination: Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

PHJP2024

Required Technical and Professional Expertise

- Proficient with AWS Data Platform components for a data lakehouse: AWS S3, Redshift, Redshift Spectrum, AWS Glue with Spark, AWS Glue with Python, Lambda functions with Python, AWS Glue Catalog, AWS Glue DataBrew, DynamoDB, and Aurora
- Proficient with AWS Kinesis and Amazon Managed Streaming for Apache Kafka
- Proficient with other open-source technologies such as Apache Airflow, dbt, and Spark with Python or Scala on the AWS platform
- Experience in developing batch and real-time data pipelines for data warehouses and data lakes
- Experience in using Databricks services on the AWS platform
- Experience in scheduling and managing data services on the AWS platform
- Amenable to work on a client-based schedule (day shift, mid-shift, night shift) and in any IBM location in Quezon City (Eastwood and/or UP Ayala Technohub) or Cebu

Preferred Technical and Professional Expertise

- JLPT N1-N3 certification is preferred
- Experience in Big Data tools such as Python, Hadoop, Hive, or Spark
- Proven background in SQL, Unix/Linux, and ETL processes