About Pivot Bio:
At Pivot Bio, we are working together to transform agriculture, finding smarter, more sustainable, and, ultimately, more profitable ways for farmers to grow. Working with and for farmers, we’re using cutting-edge science to create microbial nitrogen for the world’s most vital crops. We are replacing synthetic fertilizers with more sustainable, nature-driven plant nutrition that benefits farmers, consumers, and the planet.
As a Software Engineer II – Data Platform at Pivot Bio, you will play a key role in making data actionable at all levels of the organization to inform decisions, actions, and strategy. You will be a member of our talented Data Platform scrum team, helping to design and implement data processing pipelines. The Data Platform scrum team is part of the Scientific Computing team and is responsible for creating software systems that store, process, and provide organization-wide access to experimental data and other key business information.
You will be responsible for ensuring our systems are functioning as expected, implementing functional improvements, and empowering business users to use BI tools to gain valuable business insights. You will be accountable for data quality, modeling, enrichment, and availability. You will also manage and monitor Master Data Management (MDM) integrations between business systems and be part of larger integration activities.
People who excel on our team hold themselves to a high standard of quality while working independently. They give and receive honest feedback. They are empathetic, proactive, and passionate. They think exponentially and embrace a fast-paced and high-growth environment. If that sounds like you, come join us and be a part of the solution to climate change.
Reasonable accommodations may be made to enable individuals with disabilities to perform these essential functions.
Essential functions:
- Write and maintain high-quality, robust, maintainable, and well-documented code
- Collaborate with cross-functional teams to identify requirements, design solutions, and implement features.
- Design and implement normalized database and denormalized data warehouse schemas (illustrated in the sketch following this list)
- Participate in code reviews and provide constructive feedback to teammates
- Handle support requests and debug issues
- Communicate technical solutions to non-technical stakeholders in a clear and concise manner
- Continuously improve software development practices and processes
- Stay up to date with emerging trends and technologies in software development and web application development.
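For illustration only (not Pivot Bio code): the short Python sketch below, referenced from the schema bullet above, moves rows from a hypothetical normalized source schema into a denormalized summary table. An in-memory SQLite database stands in for the production stores named later in this posting (Postgres, Databricks), and every table and column name is invented for the example.

    import sqlite3

    # Purely illustrative: in-memory SQLite stands in for production stores.
    conn = sqlite3.connect(":memory:")

    # Normalized source schema: keys instead of repeated text.
    conn.executescript("""
        CREATE TABLE experiment (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE measurement (
            id INTEGER PRIMARY KEY,
            experiment_id INTEGER REFERENCES experiment(id),
            value REAL
        );
    """)
    conn.executemany("INSERT INTO experiment VALUES (?, ?)",
                     [(1, "field-trial-a"), (2, "field-trial-b")])
    conn.executemany("INSERT INTO measurement VALUES (?, ?, ?)",
                     [(1, 1, 10.2), (2, 1, 11.8), (3, 2, 9.5)])

    # Denormalized, analysis-ready table: the join and aggregation are done
    # once, so BI tools can query a single flat table.
    conn.executescript("""
        CREATE TABLE experiment_summary AS
        SELECT e.name       AS experiment_name,
               COUNT(m.id)  AS n_measurements,
               AVG(m.value) AS mean_value
        FROM experiment e
        JOIN measurement m ON m.experiment_id = e.id
        GROUP BY e.name;
    """)

    for row in conn.execute("SELECT * FROM experiment_summary"):
        print(row)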
Competencies
- Embody our core values – solve creatively, act fearlessly, model openness, and inspire
- Excellent problem-solving, analytical, and communication skills
- Well-organized, with strong attention to detail
- Ability to work independently as well as collaboratively in a team
- Proficiency in Python or similar dynamic languages (Ruby, JavaScript) to build data products or processing pipelines
- Proficiency in SQL, ETL tools, and data modeling
- Strong knowledge of data warehousing concepts, techniques, and best practices (a star-schema sketch follows this list)
- Experience with cloud-based data warehousing solutions like Databricks, Snowflake, or Redshift
- Strong understanding of software development principles, such as object-oriented programming, design patterns, and agile methodologies.
- Experience with relational database technologies such as PostgreSQL, MySQL, or SQL Server
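As a concrete, purely hypothetical illustration of the data-modeling and warehousing bullets above, the sketch below creates a minimal star schema: one fact table referencing small dimension tables. All entities are invented for the example, and SQLite again stands in for a real warehouse.

    import sqlite3

    # Hypothetical star schema: a central fact table keyed to small
    # dimension tables, the classic warehouse modeling pattern.
    ddl = """
    CREATE TABLE dim_crop (crop_id INTEGER PRIMARY KEY, crop_name TEXT);
    CREATE TABLE dim_site (site_id INTEGER PRIMARY KEY, site_name TEXT, state TEXT);
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, iso_date TEXT);

    -- Fact rows reference dimensions by key; measures live on the fact table.
    CREATE TABLE fact_yield (
        crop_id INTEGER REFERENCES dim_crop(crop_id),
        site_id INTEGER REFERENCES dim_site(site_id),
        date_id INTEGER REFERENCES dim_date(date_id),
        yield_bu_per_acre REAL
    );
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(ddl)
    print("tables:", [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")])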
Work environment:
- Job will involve mostly office and computer-based work
- Repeating motions that may include the wrists, hands, or fingers
- Sedentary work that involves sitting/standing
- Communicating with others to exchange information
Travel required: Less than 10%
Required education and experience
- Bachelor's degree in Information Technology, Engineering, or a related field, or equivalent industry experience
- 3+ years of industry experience, including 1+ years in data warehousing, data pipelines, or data analysis
- Experience with: Python, Databricks, AWS, Terraform, GitHub, CircleCI, React, Postgres, Airtable, Tableau, Asana, Zendesk, Nuclino. We also have a small footprint in Docker, Fivetran, Node, Express.js, Vue.js, Ruby, R, AzureAD, and Smartsheets
- Familiarity with data visualization tools such as Tableau, Spotfire, or Power BI
- Experience with distributed computing frameworks such as Hadoop or Spark (a brief PySpark sketch follows this list)
- Experience with cloud-based development, deployment, and hosting (AWS, GCP, or Azure).
- Experience provisioning infrastructure with Terraform or other IaC tools
- Experience with Docker, Linux, Bash
- Demonstrated ability to learn new technologies
- Background in agriculture, science, or research a plus
- Contribution to open-source projects a plus
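For the distributed-computing bullet above, here is a minimal PySpark sketch of the kind of aggregation Spark is typically used for. It assumes pyspark is installed; the CSV path and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    # Illustrative only: a local Spark session aggregating a hypothetical
    # CSV of plot measurements. Path and column names are invented.
    spark = (SparkSession.builder
             .appName("example-aggregation")
             .master("local[*]")  # local mode; a cluster URL in production
             .getOrCreate())

    df = spark.read.csv("plot_measurements.csv", header=True, inferSchema=True)

    summary = (df.groupBy("experiment")
                 .agg(F.count("*").alias("n_rows"),
                      F.avg("value").alias("mean_value")))

    summary.show()
    spark.stop()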
What we offer:
- Competitive package in a disruptive startup
- Stock options
- Health/Dental/Vision insurance with employer-paid premiums
- Life, Short-Term and Long-Term Disability policies
- Employee Assistance Program with free referrals and discounts
- 401(k) plan with 3% match
- Commuter benefits
- Annual Training & Development support
- Flexible vacation policy with a generous holiday schedule
- Exciting opportunity to work with a talented and fun team
*Internal employees, please apply by clicking on the Internal Job Board icon on NSIDER
All remote positions and those not located in our Berkeley facility are paid based on National Benchmark data. Following employment, growth beyond the hiring range is possible based on performance.
Hiring Compensation Range
$75,000—$94,000 USD