We’re #hiring. Know anyone who might be interested? Wheeler Staffing Partners is seeking an Azure Integration Developer. The Integration Developer is part of the Data Governance team and will help ensure that data assets are of proper quality and highly available. The role includes developing data integration processes and maintaining and supporting them in a Microsoft Azure environment, and requires 2 years of hands-on experience with Azure Data Factory.
Location: Addison, TX (Hybrid)
Salary: $75k-90k plus a 5% bonus and benefits
Please contact Naeha Rashid (nrashid@wheelersp.com) with questions about this role, and apply below via LinkedIn. #Wheelerstaffingpartners #IntegrationDeveloper #Integration #Azure #Azuredatafactory #DFW #AddisonTX #DFWjobs #ITjobs
-
Hi, hope you are doing well! Please let me know if you are interested in the position below.
Title: Azure Cloud Architect (Hybrid)
Location: Hopkins, MN; hybrid, 3 days a week onsite
Duration: 6 months
Visa: USC only
Job description (a LinkedIn profile is a must):
6+ years providing guidance on what to pay attention to, and what to plan to test, with regard to security and compliance on cloud migrations to Azure involving PII and highly classified data in an enterprise-wide environment.
6+ years as an architect on Azure big data cloud migrations.
Significant experience building cloud components from a quality and development perspective, working with pure data rather than a customer-facing product; this product is a locked-down internal data warehouse, so the controls are very different.
6+ years as an architect with team management experience on Azure ("team management"); on the cloud side, experience with virtualization and containers, managing traffic and access through subnets.
PySpark, Python, and automation tools are a plus.
Thanks and regards, Sarfaraz Khan, US IT Recruiter | Convex Tech Inc. Email: sarfaraz@convextech.com
#azure #azurearchitect #azuresolutionarchitect #azuresolutionsarchitect #cloudarchitect #azurecloudarchitect #hiring #c2c #jobs #c2cjobs #c2croles #corptocorp #corp2corp #c2crequirements #c2chiring
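The post's mention of managing traffic and access through subnets maps onto Azure network security groups. Below is a minimal sketch, assuming the azure-identity and azure-mgmt-network packages; the subscription ID, resource group, NSG name, and CIDR ranges are all hypothetical placeholders, not details from the posting.

```python
# Sketch: restrict inbound traffic to a data subnet with an NSG rule.
# All names and address ranges below are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # assumption: supplied by the caller
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (or update) an NSG that only allows SQL traffic from the app subnet.
poller = client.network_security_groups.begin_create_or_update(
    "rg-dw",            # hypothetical resource group
    "nsg-dw-data",      # hypothetical NSG name
    {
        "location": "centralus",
        "security_rules": [{
            "name": "allow-sql-from-app-subnet",
            "priority": 100,
            "direction": "Inbound",
            "access": "Allow",
            "protocol": "Tcp",
            "source_address_prefix": "10.0.1.0/24",       # app subnet CIDR
            "source_port_range": "*",
            "destination_address_prefix": "10.0.2.0/24",  # data subnet CIDR
            "destination_port_range": "1433",
        }],
    },
)
print(f"NSG ready: {poller.result().name}")
```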
-
DevOps Engineer ♾️ | Linux 🐧 | AWS ☁️ | Docker 🐳 | Kubernetes ☸️ | CI/CD 🚀 | Terraform 🏗️ | Ansible ⚙️ | Jenkins 🧑🔧 | Shell Scripting 💠 | Grafana ⛄ | Git & GitHub 🐙 | Prometheus ♨️ | Python 🐍 | Technical Writer ✍️✍️
Latest Opportunity🔥🔥🔥🔥🔔🔔🔔🚀 #devopsjob #devops #devopscommunity #trainwithshubham #90daysofdevops #90daysofdevopschallenge #awsdevops #azure #azurecloud #hiring #devopsengineer #awscommunity
We are urgently looking for a #devops engineer at smartData Enterprises Inc. for our #Nagpuroffice. #permanentemployment #workfromoffice
Below is the mandatory skill set: #VM, #Storageblobs, #AzurePipeline, #AzureDevOps, #Docker, #Kubernetes, #JenkinsCICD, #Newrelic, Kibana, #HelmChart, kube-apiserver, etcd, kube-controller-manager, kube-scheduler, #Pods, #SonarQube, #LoadBalancing, #AzureCDN.
Min #Relevantexp: 6 years. #immediatejoiners #WFO
#share your CV with deepti.gaddamwar@smartdatainc.net or DM me here. Karmveer Prajapati Vijay Pandey
-
Hi #connections
Role: Performance Engineer
Location: Remote
Experience: 9+ years
Job description: Today we use a single Azure VM SKU type with performance specifications similar to our internal ServiceNow Cloud data center server infrastructure, supporting the Application, Database, and Search stacks. In the public cloud there are multiple SKUs to choose from, so it makes sense to test new SKUs to find the one that is most optimal from a performance and cost perspective. Currently, the process for testing Azure VM SKUs is very laborious. This role will focus on establishing processes and procedures, as well as analyzing the results and providing recommendations. Responsibilities: create a repeatable process that incorporates automation to make testing new SKU types easier; establish performance modeling for customer instances in both ServiceNow and Azure environments, pre- and post-migration; establish baseline metrics and work with Microsoft to build KPI dashboards for critical resource utilization.
Please share resumes with jothi@techorbit.com #PerformanceEngineer #Cloud
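A repeatable SKU test of the kind this role describes can be scripted end to end. Here is a minimal sketch, assuming the Azure CLI is installed and authenticated; the resource group, image alias, SKU list, and benchmark command are illustrative assumptions, and a real harness would also capture cost data and guarantee teardown on failure.

```python
# Sketch: create a VM per candidate SKU, run a benchmark, record output, delete.
import json
import subprocess

CANDIDATE_SKUS = ["Standard_D4s_v5", "Standard_E4s_v5"]  # assumption: SKUs under test
RESOURCE_GROUP = "rg-sku-perf-test"                      # hypothetical resource group

def az(*args: str) -> dict:
    """Run an az CLI command and parse its JSON output (empty output -> {})."""
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout) if out.stdout.strip() else {}

results = {}
for sku in CANDIDATE_SKUS:
    vm_name = f"perf-{sku.lower().replace('_', '-')}"
    az("vm", "create", "--resource-group", RESOURCE_GROUP, "--name", vm_name,
       "--image", "Ubuntu2204", "--size", sku, "--generate-ssh-keys")
    # Placeholder benchmark; swap in the real application workload here.
    run = az("vm", "run-command", "invoke", "--resource-group", RESOURCE_GROUP,
             "--name", vm_name, "--command-id", "RunShellScript",
             "--scripts", "sysbench cpu run --time=60 2>/dev/null || uptime")
    results[sku] = run["value"][0]["message"]
    az("vm", "delete", "--resource-group", RESOURCE_GROUP,
       "--name", vm_name, "--yes")

print(json.dumps(results, indent=2))
```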
-
ALTA IT Services is #hiring an Architect III for #hybrid work in Reston, VA. Qualifications include: 🔍 Expertise in relational and NoSQL DBs. 🛠️ AWS Data Migration Service & Test Data management. ☁️ Cloud migration & microservices architecture. 💻 Skilled in AWS, development, and networking. 🛠️ Experience with JIRA and Confluence. Learn more and apply today: https://ow.ly/Z8bx50QuW4P #ALTAIT #ITJobs #RestonVA #DatabaseExpert #AWSMigration #CloudArchitecture #Microservices #AWSskills #JIRAExperience #ConfluenceExpert
-
We have an urgent requirement from our direct client. Role: Certified Senior Cloud Architect. Location: Atlanta, GA (hybrid). Client: State of GA. Please include the expected rate, visa status, updated resume, and current location of the candidate to receive a quick response: sqadri@encore-c.com #cloudcomputing #cloud #technology #cybersecurity #aws #bigdata #devops #it #datacenter #azure #cloudstorage #linux #programming #software #tech #iot #cloudservices #coding #cloudsecurity #machinelearning #informationtechnology #datascience #business #python #security #microsoft #dataprotection #networksecurity #data #artificialintelligence
-
Need senior candidates only. 100% remote. Only for W2 & 1099.
Senior Cloud Architect
Responsibilities:
Evangelize and innovate to ensure the Platform uses the most reasonable and effective cutting-edge technologies.
Evangelize microservice-based architecture using containerized applications.
Experience strategizing on-prem to cloud transformations for large-scale applications.
Design and implement solutions that span GCP/GKE, CI/CD, monitoring, and security.
Collaborate and pair with the Platform team to ensure knowledge and expertise are shared and developed.
Stay current on industry trends; innovate through research, proofs of concept, and demos.
Shared responsibility for 24x7 research and resolution of production system problems through participation in an on-call rotation.
Support "security first" advocacy and encourage platform solutions that enable the microservice product teams to "shift left."
Ensure compliance with applicable security and privacy standards and regulations.
Participate in the recruitment of team members, both employees and vendor employees.
Train and mentor peers and management.
Technical Skills/Experience:
Cloud: Google (preferred), AWS, Azure
GCP tools: Cloud Data Fusion, Vertex AI, Dataflow, Pub/Sub
Orchestration: GKE (preferred), AKS, EKS
CI/CD: Jenkins, Harness
Monitoring: Prometheus/Grafana (preferred), Stackdriver
Database: Mongo, Cloud SQL
Service mesh: Istio
API: Cloud Endpoints, Apigee
Language/scripting: Python (preferred), Bash, Node.js
Infrastructure as Code: Terraform
EDW: BigQuery
Data analysis: Splunk
Security: PCI compliance, Prisma
Collaboration/issue tracking: Jira, Confluence
Testing: automation, system, performance
Methodology: Agile (Scrum, Kanban); tools: Jira, Confluence
If you might be interested, drop your resume to: bhargav.p@vuesol.com
#gcpcloud #cloudarchitecture #awsdevops #database #cicd #w2 #scrum #agile #tools #methodologies #jenkins #hiring #infrastructure #testing #collaboration #dataanalysis #edw #languagemodels #azurecloudengineer #google
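Of the GCP tools listed, Pub/Sub is the messaging backbone for the kind of microservice architecture this role describes. Here is a minimal sketch, assuming the google-cloud-pubsub package and application-default credentials; the project and topic names are hypothetical.

```python
# Sketch: publish an event to a Pub/Sub topic (hypothetical project/topic names).
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "order-events")

# Message data must be bytes; keyword arguments become string attributes.
future = publisher.publish(
    topic_path,
    b'{"order_id": 42, "status": "created"}',
    origin="checkout-service",
)
print(f"Published message {future.result()}")  # blocks until the server acks
```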
-
Dear #Techies, HIRING!!!! We are hiring for an MNC client. Let's connect: share your resume with Saibabu Alla at 📧 sai.a@s3staff.com, and comment for better reach. Kindly share this post with your techie friends to help me reach more people.
Job Title: Azure Data Factory Support Engineer
Experience Needed: 3-5 years
Location: Hyderabad
Notice Period: Immediate joiners
* The first 8 months may involve night shifts * Data movement
Roles and responsibilities:
· Experience in designing and hands-on development of cloud-based analytics solutions.
· 1-2 years of experience with Azure cloud services, including Azure resources such as Azure Data Factory, Azure Synapse Analytics, Azure Blob, ADLS (Azure Data Lake Storage Gen1 and Gen2), Logic Apps, Key Vault, Azure SQL DB, and Synapse.
· Hands-on experience with Azure Data Factory and its core concepts: Linked Services, Datasets, Data Flows, Pipelines, Activities and Triggers, Integration Runtime, and Self-Hosted Integration Runtime.
· Designed and developed data ingestion pipelines from on-premises sources into different layers in Azure; built Data Flow transformations.
· Worked with Copy Data, Get Metadata, Lookup, Filter, Stored Procedure, ForEach, If Condition, and Execute Pipeline activities.
· Implemented dynamic pipelines that extract multiple files into multiple targets with a single pipeline.
· Strong knowledge of parameterization of Linked Services, Datasets, Pipelines, and Activities, and of pipeline execution methods (Debug vs. Triggers).
· Experience scheduling pipelines and monitoring them through the Monitor tab; experience creating alerts at the pipeline and activity levels.
· Experience with the Azure stack (including Compute, Function Apps, Blobs, Resource Groups, Azure SQL, Cloud Services, and ARM), focusing on high availability, fault tolerance, and auto-scaling.
· Performed custom activities using Python and C#/.NET for tasks unsupported in Azure Data Factory V2.
· Customer support: strong customer engagement skills to fully understand customer needs for analytics solutions; experience leading small and large teams in delivering analytics solutions for customers.
· Demonstrated ability to define, develop, and implement data models and supporting policies, standards, and guidelines.
· Strong problem-solving and troubleshooting skills.
Mandatory skills: Azure cloud services with Azure resources such as ADF, Azure Synapse, Azure Blob, or ADLS; Azure SQL DB.
#Immediatehiring #azuredatafactory #mncjobs #microsoft
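The scheduling and monitoring duties above can also be driven programmatically rather than through the portal. Here is a minimal sketch, assuming the azure-identity and azure-mgmt-datafactory packages, of triggering a parameterized pipeline run and polling its status (the programmatic equivalent of watching the Monitor tab); the subscription, resource group, factory, pipeline, and parameter names are all hypothetical.

```python
# Sketch: trigger an ADF pipeline run and poll until it finishes.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"          # assumption: supplied by the caller
RG, FACTORY = "rg-analytics", "adf-ingestion"  # hypothetical names

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Parameters flow into the pipeline's parameterized datasets/linked services.
run = client.pipelines.create_run(
    RG, FACTORY, "pl_ingest_daily",
    parameters={"sourceFolder": "landing/2024-01-01"},
)

# Poll the run until it leaves the in-progress states.
while True:
    status = client.pipeline_runs.get(RG, FACTORY, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Run {run.run_id} finished with status: {status}")  # e.g. Succeeded / Failed
```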
-
#hiring Information Technology - Data Integration Engineer, Raleigh, United States, full time #jobs #jobseekers #careers #Raleighjobs #NorthCarolinajobs #ITCommunications
Apply: https://lnkd.in/gZ5cCwB9
Data Integration Engineer | 12+ month contract | 100% remote
Skills required: Looking for an engineer with knowledge of pipelines, particularly Azure DevOps pipeline work, and of how to manage database backups and restores; the team does a lot of database migrations across all of its projects. You will be assigned to a project, and each project has different stages, one of which is provisioning users, so you need to understand how authentication works. Integration and preparation: understanding how file transfers work and how SFTP gets set up, mainly data and file transfer protocols. Database manipulation: receive a database and run a series of pipelines to manipulate the data and add it to the configuration for the specific project, following up with the necessary people. Adding users and refreshing the database 2-3 times during a project; maintaining tasks during the project until it can go live; preparing scripts for the go-live. Running pipelines and creating SQL scripts to update information for the user or customer. Knowledge of Azure, SQL, Azure databases, Azure concepts, T-SQL, and Microsoft SQL query syntax; knowledge of how to upload data into the cloud; ability to query or troubleshoot pipelines in ADO/Azure DevOps and write pipeline files. Understanding how a cloud application works, and managing cloud applications, will set candidates apart.
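The "add users and refresh the database" task described here is a common post-restore chore: database users restored from another environment must be re-mapped to server logins. Below is a minimal sketch, assuming the pyodbc package and ODBC Driver 18 for SQL Server; the server, database, credentials, and login names are hypothetical placeholders.

```python
# Sketch: re-provision users after a database restore (hypothetical names throughout).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myproject.database.windows.net;"      # hypothetical server
    "DATABASE=ProjectDB;UID=deploy_admin;PWD=<secret>",
    autocommit=True,  # DDL statements take effect immediately
)
cursor = conn.cursor()

# Logins that must exist as users in the freshly restored database.
# Names come from a fixed, trusted list, so inline formatting is safe here.
for login in ["app_service", "report_reader"]:
    cursor.execute(f"""
        IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = '{login}')
            CREATE USER [{login}] FOR LOGIN [{login}];
    """)
    cursor.execute(f"ALTER ROLE db_datareader ADD MEMBER [{login}];")

conn.close()
```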
-
US IT Recruiter || Talent Acquisition Specialist || Recruitment Specialist || Recruitment Leadership || Recruiting
Hello Connections, please find the JD below and share resumes with kawaljeet@dhalite.com
AWS Data Engineer | USC only | Remote
Need candidates with 10+ years of experience. Rate: 60-65, negotiable.
Role description:
Data pipeline development: design, implement, and manage robust data pipelines using Python, PySpark, and SQL to efficiently extract, transform, and load data from diverse sources (batch & streaming).
AWS expertise: demonstrate expertise in core AWS services such as AWS DMS, AWS Glue, AWS Step Functions, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, AWS IAM, and AWS Lambda, and apply them to build scalable and reliable data solutions.
Data modeling: develop and maintain efficient data models to support analytical and reporting needs.
Database management: administer databases using AWS services like Amazon RDS or Amazon Redshift, focusing on schema design, performance optimization, and monitoring.
Data warehousing: utilize Amazon Redshift or Snowflake to create high-performing analytical databases that empower data-driven decision-making.
ETL best practices: implement industry best practices for ETL processes, including data validation, error handling, and data quality checks.
Performance optimization: optimize query performance through continuous database tuning and by leveraging AWS's scalability capabilities.
Monitoring and logging: establish robust monitoring and logging mechanisms using Amazon CloudWatch, AWS CloudTrail, or comparable tools to ensure pipeline reliability.
Security and compliance: ensure adherence to security best practices and relevant compliance standards, tailoring solutions to meet GDPR, HIPAA, or other regulatory requirements.
Automation: drive automation of deployment and scaling of data pipelines using infrastructure-as-code (IaC) tools like AWS CloudFormation and Terraform.
Collaboration: collaborate closely with cross-functional teams, including data scientists, analysts, and other stakeholders, to understand their data needs and provide effective solutions.
Continuous learning: stay updated on the latest developments in AWS services and data engineering methodologies, applying new insights to enhance the data infrastructure.
Soft skills: strong communication skills to facilitate effective teamwork and interaction with diverse groups.
#dataengineer #dataengineers #aws #awsdataengg #awsdataengineers #vendors #c2c #c2chiring #c2cjobs #hotlist #urgentopening #urgenthiring #urgentopenings #urgentrequirement #usc #citizens #remote #remotejobs #share #email
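The pipeline-development and ETL-best-practice bullets above are concrete enough to sketch. Here is a minimal PySpark batch job with a simple data-quality gate, assuming a Spark session with S3 access configured (the s3a connector); the bucket paths, column names, and the 1% threshold are illustrative assumptions.

```python
# Sketch: batch ETL with a data-quality check before writing (hypothetical paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: raw landing zone.
raw = spark.read.parquet("s3a://my-raw-bucket/orders/")  # hypothetical bucket

# Validate: fail fast if too many rows are missing the business key.
null_ratio = raw.filter(F.col("order_id").isNull()).count() / max(raw.count(), 1)
if null_ratio > 0.01:  # assumption: 1% tolerance
    raise ValueError(f"Data quality gate failed: {null_ratio:.2%} null order_id")

# Transform: deduplicate and normalize.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: partitioned curated zone, ready for Redshift Spectrum or a COPY job.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://my-curated-bucket/orders/"
)
```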
-
#hiring DevOps Engineer with Databricks on AWS. Full-time candidates only; no C2C. Email resumes to ekta.Khosla@infinite.com. Location: Dallas, TX or Tampa, FL.
Infrastructure automation: designing, implementing, and maintaining automated infrastructure provisioning and configuration management for Databricks clusters on AWS using tools like Terraform, CloudFormation, or Ansible.
Continuous integration/continuous deployment (CI/CD): developing and maintaining CI/CD pipelines for Databricks workloads on AWS, ensuring automated testing, deployment, and monitoring of data pipelines and analytics solutions.
Cluster management: managing Databricks clusters on AWS, including provisioning, scaling, and optimization for performance, cost-efficiency, and reliability.
Monitoring and logging: implementing monitoring and logging solutions for Databricks clusters and workloads on AWS using tools like CloudWatch, Prometheus, Grafana, or the ELK stack to ensure visibility into system performance and health.
Security and compliance: implementing security best practices for Databricks deployments on AWS, including IAM policies, encryption, network security, and compliance with data privacy regulations.
Backup and disaster recovery: implementing backup and disaster recovery strategies for Databricks data and workloads on AWS to ensure data integrity and business continuity.
Cost optimization: optimizing Databricks usage and AWS infrastructure costs by right-sizing clusters, implementing cost allocation tags, and monitoring resource utilization.
Collaboration and documentation: collaborating with data engineers, data scientists, and other stakeholders to understand requirements and provide infrastructure support; documenting infrastructure configurations, processes, and best practices.
Troubleshooting and support: providing troubleshooting and support for Databricks-related issues, working closely with AWS support and Databricks technical support teams as needed.
Knowledge sharing: sharing knowledge and best practices with the broader team through documentation, training sessions, and mentorship to promote DevOps culture and practices within the organization.
Continuous improvement: continuously evaluating and adopting new tools, technologies, and practices to improve the efficiency, reliability, and scalability of Databricks deployments on AWS.
Vendor management: managing relationships with AWS and Databricks vendors, including license management, support agreements, and staying informed about product updates and roadmap changes.
#devops #databricks #aws #hiringnow #hiringalert #ansible #terraform #devopsengineer
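Cluster provisioning, right-sizing, cost tags, and auto-termination as described above can all be exercised against the Databricks REST API (the Terraform provider ultimately drives the same endpoints). Here is a minimal sketch using the requests package; the workspace URL, token handling, Spark version string, and instance type are assumptions to adjust for a real workspace.

```python
# Sketch: provision an autoscaling Databricks cluster on AWS via the REST API.
import requests

HOST = "https://dbc-example.cloud.databricks.com"  # hypothetical workspace URL
TOKEN = "<personal-access-token>"                  # assumption: supplied securely

payload = {
    "cluster_name": "etl-autoscale",
    "spark_version": "13.3.x-scala2.12",  # assumption: pick from your workspace's list
    "node_type_id": "i3.xlarge",          # AWS instance type for the workers
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "autotermination_minutes": 30,        # cost control: idle clusters shut down
    "custom_tags": {"team": "data-eng", "env": "dev"},  # for cost allocation
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(f"Created cluster: {resp.json()['cluster_id']}")
```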