www.acad.jobs : academic jobs worldwide – and the best jobs in industry
Position: Senior Data Scientist
Institution: Walmart Inc.
Location: Sunnyvale, California, United States
Duties:
- Build advanced feature extraction algorithms that feed into advertising applications, including audience targeting, relevance and ranking, and performance optimization
- Build, deploy, and monitor machine learning and statistical models that predict or estimate key signals used to optimize advertising product performance
- Work on ML automation platforms supporting the end-to-end machine learning lifecycle (MLflow, Kubeflow, or Metaflow)
- Build, maintain, and monitor scalable data pipelines on distributed computing frameworks (e.g., Hadoop, Spark) and job schedulers (e.g., Airflow, Azkaban) to support modeling and optimization products
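To illustrate the first duty, here is a minimal sketch of an aggregate feature-extraction step of the kind that feeds audience-targeting models. The schema and column names (`user_id`, `segment`, `clicked`) are hypothetical, and the data is synthetic; this is not Walmart's actual pipeline, just an example of the pattern.

```python
import pandas as pd

# Synthetic ad-impression log (hypothetical schema, for illustration only).
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3],
    "segment": ["sports", "sports", "tech", "tech", "sports", "tech"],
    "clicked": [1, 0, 0, 1, 1, 0],
})

# Per-segment click-through rate: a typical aggregate feature that
# downstream targeting and ranking models consume.
ctr = (events.groupby("segment")["clicked"]
             .mean()
             .rename("segment_ctr")
             .reset_index())

# Join the feature back onto each event row for model training.
features = events.merge(ctr, on="segment", how="left")
```

In production the same groupby/join pattern would typically run on Spark over far larger logs, scheduled by Airflow; pandas keeps the sketch self-contained.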
Requirements:
- Experience with backend software development, system architecture, and object-oriented design
- Production-quality experience in at least one high-level programming language: Python or Java
- Experience with big-data distributed computing frameworks, including Hadoop, MapReduce, and PySpark
- Experience with machine learning and data analysis tools, including pandas, SQL, NumPy, scikit-learn, TensorFlow, and PyTorch
- Experience developing and implementing machine learning models that solve real-world problems, and fitting algorithms to specific use cases
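The requirements name scikit-learn and NumPy among the expected modeling tools. A minimal sketch of the kind of model-fitting workflow they imply, using synthetic data as a stand-in for an ad-response signal (the data and features are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-response data standing in for an ad-click signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Hold out a test split, fit a baseline classifier, and score it --
# the train/evaluate loop behind "build, deploy, and monitor" models.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

The held-out AUC is the sort of key signal a monitoring system would track over time to detect model degradation.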