Please note that this role requires security clearance at SC level. You must be SC cleared to be considered for the role.

Tasks and Responsibilities:

Engineering:
- Ingestion configuration.
- Write Python/PySpark and Spark SQL code for validation/curation in notebooks (a minimal sketch appears after the Nice to Have list).
- Create data integration test cases.
- Implement or amend worker pipelines.
- Implement data validation/curation rules.
- Convert the data model into a technical data mapping.
- Implement integrations.
- Data migration implementation and execution.
- Data analysis (e.g., profiling).
- Azure DevOps/GitHub configuration for ADF and Databricks code.

Our Ideal Candidate:
- Strong experience in designing and delivering Azure-based data platform solutions, including Azure Data Factory (ADF), Databricks, Azure Functions, App Service (Web Apps), Logic Apps, and AKS.
- Good knowledge of real-time streaming applications, preferably with experience of Kafka, Azure Functions, or Azure Service Bus.
- Spark processing and performance tuning.
- File formats and partitioning (e.g., Parquet, JSON, XML, CSV).
- Azure DevOps/GitHub.
- Hands-on experience in at least one of the platform's core languages (such as Python), with working knowledge of the others.
- Experience with synchronous and asynchronous interface approaches.
- Knowledge of data modeling (3NF, dimensional modeling, Data Vault 2.0).
- Work experience in agile delivery.
- Able to provide comprehensive documentation.
- Able to set and manage realistic expectations for timescales, costs, benefits, and measures of success.

Nice to Have:
- Experience with integration and implementation of data cataloging tools such as Azure Purview or Collibra.
- Experience in implementing and integrating visualization tools such as Power BI or Tableau.
- Experience in C# application development (ASP.NET MVC).
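For illustration only, the notebook validation/curation work described under Tasks and Responsibilities might resemble the following minimal PySpark sketch. The table names (raw_customers, curated_customers, quarantine_customers), the email column, and the validation rule are hypothetical assumptions, not details taken from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("validation-sketch").getOrCreate()

    # Hypothetical ingested table.
    raw = spark.table("raw_customers")

    # Example validation rule: a row is valid only if its email is present
    # and roughly well-formed.
    validated = raw.withColumn(
        "is_valid",
        F.col("email").isNotNull()
        & F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    )

    # Curation: valid rows go to the curated table, the rest to a
    # quarantine table for review.
    validated.filter("is_valid").drop("is_valid") \
        .write.mode("overwrite").saveAsTable("curated_customers")
    validated.filter(~F.col("is_valid")).drop("is_valid") \
        .write.mode("overwrite").saveAsTable("quarantine_customers")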
Contact Details:
Cognizant Recruiting Team