DevOps and Machine Learning Engineer - Indefinite (BB-B03B1)
Found in: Neuvoo Bulk CR
We are looking for a data engineer to help our customer build scalable and sustainable data access capabilities for its IT infrastructure.
A self-motivated, proactive individual who can lead and implement initiatives using different technologies.
• Data engineer to help us create scalable, sustainable data access capabilities:
• Proficiency in AWS, Drone, and GitHub
• Experience working with and designing CI/CD frameworks
• NoSQL paradigms for data (we use HDFS and graph frameworks)
• Ability to think beyond EDW and ESB frameworks when wrangling data
• Agile frameworks
• Kafka, Spark, Pulsar
• Hadoop as a data lake
o Bachelor’s degree from an accredited college/university in Computer Science or related field
o In-depth knowledge of model deployment, tuning, and performance monitoring
o Experience in a software development environment and with code management/versioning (e.g., Git).
o Ability to understand complex and ambiguous business needs and apply the right tools and approaches.
o Curious, self-motivated, and driven, with a passion for problem solving.
o Collaborative team player.
o Strong experience working in a cloud environment, e.g. Amazon Web Services: S3, DynamoDB, Lambda, API Gateway, RDS, ECS
o Experience working with Hadoop or Spark frameworks.
o Experience with continuous integration and deployment technologies such as Jenkins CI, Drone, GitHub, and Artifactory
o Experience with container technologies such as Docker and Kubernetes
o Experience with configuration management and automation tools such as Chef, Puppet, and Ansible
o Experience architecting, deploying and maintaining Machine Learning models in production
o Experience building REST APIs and cron jobs for ML inference
o Excellent communication skills, both written and verbal.
o Experience building machine learning models
o Proficiency with ML frameworks such as TensorFlow, scikit-learn, PyTorch, caret, or parsnip
o Ability to present technical solutions to non-technical audiences in an easy-to-understand way.
o Experience with stream processing with tools like Kafka or Flink
o Experience in the Agriculture, CPG, or Financial Services industry
o Experience with Amazon SageMaker
o Strong SQL skills and experience
o In-depth knowledge of databases, Hadoop, and distributed computing frameworks
o Experience working across multiple time zones with multiple cultures
o Experience with infrastructure automation, infrastructure as code, automated application deployment, monitoring/telemetry, logging, reporting/dashboarding
Posted: 2 days ago
Location: Santa Ana, Costa Rica
Company: Infotree Service Inc