Data Architect II

Phoenix, AZ, USA

Job Type: Data

Please note: This position has the possibility of working remotely within 150 miles of Phoenix, Arizona for a percentage of the scheduled work time, as determined by the supervisor. If hired, you will be required to comply with the company’s requirement that all employees working onsite or attending a meeting at a corporate office location submit proof that they are fully vaccinated against COVID-19 prior to entry, unless the company has granted them a medical or religious accommodation.

Role Summary:

You will be an individual contributor on a global Data Architecture and Modeling team pursuing a vision of analytics-driven mining. Your expertise in data architecture, data modeling, and ETL will enable and empower the organization to maintain a robust and trusted Enterprise Data Warehouse. At the company, it is understood that data does not reach its full potential until it is analyzed and its insights are effectively communicated to the enterprise. You will work in close collaboration with subject matter experts, data engineers, business intelligence analysts, data scientists, and software engineers to develop advanced, highly automated data products.

Essential Duties and Responsibilities:

  • Work as a project leader on cross-functional, geographically distributed agile teams of highly skilled delivery professionals to continuously innovate analytic solutions.

  • Ensure the agile team delivers on time by defining clear goals, identifying appropriate development patterns, maintaining a plan for execution, and actively securing its implementation through the Scrum process.

  • Develop data requirements through data modeling techniques and structured working sessions.

  • Express data requirements as 3rd Normal Form (3NF) logical data models through review of source system documentation, review of system features, and workshop sessions.

  • Create physical database designs for the Snowflake data warehouse and other database technologies in partnership with other technical resources.

  • Develop data lineage documentation and data dictionaries to create broad awareness of the enterprise data model and its applications.

  • Develop real-time and bulk data pipelines from a variety of sources (streaming data, APIs, data warehouses, messages, images, video, etc.).

  • Partner with key business SMEs to build and manage the workgroup database view library, creating relevant data shapes in SQL.

  • Apply best practices within DataOps (version control, PR-based development, schema change control, CI/CD, deployment automation, test automation, shift-left security, loosely coupled architectures, monitoring, and proactive notifications).

  • Provide thought leadership in problem solving to enrich possible solutions by constructively challenging paradigms and actively soliciting other opinions. Actively participate in R&D initiatives.

  • Ensure the project team utilizes modern cloud technologies, follows established design patterns, and employs best practices from DevOps/DataOps to produce enterprise-quality production Python and SQL code with minimal errors. Identify code optimization opportunities during code review sessions, direct their implementation, and proactively pull in external experts as needed.

  • Flexibly seek out new work or training opportunities to broaden experience. Independently research latest technologies and openly discuss applications within the department. Actively coach and mentor junior team members.

Qualifications:

Minimum Requirements:

  • Bachelor’s degree in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and five (5) years of relevant work experience, OR

  • Master’s or Ph.D. in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and three (3) years of relevant work experience

  • Knowledge of data modeling using IDEF1X or similar data modeling methodologies

  • Knowledge of data modeling tools such as ER Studio or Erwin

  • Proficient practitioner of SQL development

  • Experience leading joint design sessions and working in groups

  • Strong verbal and written communication skills in English

Preferred Qualifications:

  • Proficient practitioner of Python development

  • Working knowledge of Software Engineering and Object-Oriented Programming principles

  • Working knowledge of Parallel Processing Environments such as Snowflake or Spark SQL

  • Working knowledge of problem solving/root cause analysis on Production workloads

  • Working knowledge of Agile, Scrum, and Kanban

  • Working knowledge of enterprise scheduling and workflow orchestration using tools such as Airflow, Prefect, Dagster, or similar tooling

  • Working knowledge of CI/CD and automation tools such as Jenkins or Azure DevOps

#LI-SH1