About Netradyne:
Founded in 2015, Netradyne is a technology company that leverages expertise in artificial intelligence, deep learning, and edge computing to bring transformational solutions to the transportation industry. Netradyne's technology is already deployed in thousands of vehicles, and our customers drive everything from passenger cars to semi-trailers on interstates, suburban roads, rural highways, and even off-road.
Netradyne is looking for talented engineers to join our analytics team, composed of graduates from IITs, IISc, Stanford, UIUC, UCSD, etc. We build cutting-edge AI solutions that help drivers and fleets recognize unsafe driving scenarios in real time, preventing accidents and reducing fatalities and injuries.
Job Title: Staff Data Engineer - ML
Experience: 5-8 years
Role and Responsibilities:
You will be embedded within a team of machine learning engineers and data scientists responsible for building and productizing generative AI and deep learning solutions. You will:
- Design, develop, and deploy production-ready, scalable solutions that utilize GenAI, traditional ML models, data science, and ETL pipelines.
- Collaborate with cross-functional teams to integrate AI-driven solutions into business operations.
- Build and enhance frameworks for automation, data processing, and model deployment.
- Utilize GenAI tools and workflows to improve the efficiency and effectiveness of AI solutions.
- Conduct research and stay updated with the latest advancements in generative AI and related technologies.
- Deliver key product features within cloud analytics.
Requirements:
- B.Tech, M.Tech, or PhD in Computer Science, Data Science, Electrical Engineering, Statistics, Mathematics, Operations Research, or a related domain.
- Strong programming skills in Python and SQL, and solid fundamentals in computer science, particularly in algorithms, data structures, and OOP.
- Experience building end-to-end solutions on AWS cloud infrastructure.
- Good understanding of internals and schema design for various data stores (RDBMS, vector databases, and NoSQL).
- Experience with GenAI tools and workflows, and large language models (LLMs).
- Experience with cloud platforms and deploying models at scale.
- Strong analytical and problem-solving skills with keen attention to detail.
- Strong knowledge of statistics, probability, and estimation theory.
Desired Skills:
- Familiarity with frameworks such as PyTorch, TensorFlow, and Hugging Face.
- Experience with data visualization tools like Tableau, Grafana, and Plotly Dash.
- Exposure to AWS services like Kinesis, SQS, EKS, ASG, Lambda, etc.
- Expertise in at least one popular Python web framework (such as FastAPI, Django, or Flask).
- Exposure to quick prototyping using Streamlit, Gradio, Dash, etc.
- Exposure to big data processing (Snowflake, Redshift, HDFS, EMR).