Requirements
4+ years of experience as a data scientist, including data extraction, analysis, and modeling.
4+ years of experience with Python and SQL.
Strong understanding of statistics.
Proficiency in machine learning algorithms and all stages of the machine learning lifecycle.
Familiarity with neural networks and deep learning.
Familiarity with AWS services and Snowflake (or a similar SQL database).
Familiarity with containerization (e.g., Docker) and API frameworks (e.g., Flask).
Demonstrated ability to troubleshoot issues in production environments, including debugging data pipelines or model-related errors.
What Can Help Your Application Stand Out:
Successful end-to-end delivery of data science products.
Exposure to MLOps tools such as MLflow, Kubeflow, DVC, AWS SageMaker, and Seldon.
Experience deploying models in an AWS cloud environment, particularly with AWS tools such as SageMaker and Step Functions.
Expertise with Natural Language Processing and Understanding.
Experience with libraries and frameworks for training ML and DL models (e.g., PySpark, TensorFlow).
Experience with LLMs and generative AI.
Our Commitment to Inclusivity and Diversity
At G2, we are committed to creating an inclusive and diverse environment where people of every background can thrive and feel welcome. We consider applicants without regard to race, color, creed, religion, national origin, genetic information, gender identity or expression, sexual orientation, pregnancy, age, or marital, veteran, or physical or mental disability status. Learn more about our commitments here.
--
For job applicants in California, the United Kingdom, and the European Union, please review this applicant privacy notice before applying to this job.