Position summary: This is an exciting opportunity for an experienced developer of large-scale data solutions. We are seeking someone with good experience in building solutions using Microsoft Azure services and a proven track record in delivering high quality work to tight deadlines.
You will be responsible for:
• Designing and implementing highly performant data ingestion pipelines from multiple sources using Azure Databricks
• Delivering and presenting proofs of concept of key technology components to project stakeholders
• Developing scalable, reusable frameworks for ingesting data sets
• Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times
• Working with event-based/streaming technologies to ingest and process data
• Working with other members of the project team to support delivery of additional project components (API interfaces, Search)
• Evaluating the performance and applicability of multiple tools against requirements
• Working within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints
Competencies: The ideal candidate will have the following competencies:
• Strong communication
• Building trust
• Decision making / problem solving
• Delegating responsibility
• Customer/client focus
• Planning and organizing
• Managing stress at critical times
• Technical/professional knowledge
• Team player
• Troubleshooting skills in Azure Databricks
What you’d have:
• B.E./B.Tech/MCA with 4–6 years of experience in Azure Databricks development
• Strong knowledge of Data Management principles
• Experience in building ETL / data warehouse transformation processes
• Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably Databricks).
• Experience using associated design and development patterns
• Microsoft Azure Big Data Architecture certification.
• Hands-on experience designing and delivering solutions using the Azure Data Analytics platform (Cortana Intelligence Platform), including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
• Experience with Apache Kafka / Apache NiFi for use with streaming/event-based data
• Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala)
• Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
• Experience working in a DevOps environment
Why join us?
We thought you’d never ask! We offer all the usual stuff: competitive salary, flexible working hours, and a challenging product culture. But the real perks are:
• A challenging and fun work environment solving meaningful, real-life business problems — you will never have a boring day at the office.
• A world-class team who love solving tough problems and have a bias for action.
Tanla is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability, or veteran status.