Data CVs: the latest registered freelancers

The latest Data profiles online

  • Aya, Data Analyst, Villeurbanne (available)
    Skills: Excel, Word, PowerPoint, SPSS, Microsoft Power BI, VBA, SQL, HTML
  • Tristan, Systems and Network Engineer, Colmar
    Skills: Cisco, Windows, Exchange, Cloud, AWS, Active Directory, Azure, SQL, Linux, VMware
  • Abaid, Data/BI Expert and Project Manager, Paris (available soon)
    Skills: SAP BI, Azure Synapse, BODS, SAP BO, Azure Data Factory, SQL, SAP, SQL Server, Databricks, SAP BW
  • Tom, Data Scientist / Data Analyst, Bordeaux (available soon)
    Skills: Data, Python, SQL, R, VBA, LLM, Hugging Face Transformers, Regression Algorithms, Cloud, AWS, Google Cloud Platform
  • Mokrane, Data Analyst, Saint-Gratien (available)
    Skills: Agile, Jira, Tableau, SQL, Data, Python, Power BI, Microsoft Azure, Machine Learning, Deep Learning, MS Project
  • Sylvain, Microsoft Dynamics CRM Project Manager, Arras (available)
    Skills: Microsoft Dynamics CRM, CRM, Zoho CRM
  • Emery, Product Owner, Asnières-sur-Seine (available)
    Skills: Agile, Jira, Maîtrise d'ouvrage (project ownership), Scrum, Excel, Data, SQL, Python, JavaScript, C
  • Moulay, Project Manager / PM / Product Owner, Paris (available)
    Skills: Agile, Jira, Drupal, Digital transformation, Data
  • Mondher, Senior DevOps Engineer, Vélizy-Villacoublay (available)
    Skills: Java, Linux, Ansible, Red Hat OpenShift, Docker, Oracle, Dollar Universe, GitLab, Jenkins, Kubernetes
  • Abdelkarim, Integration Architect SAP, Data & AI (API, Python), Brétigny-sur-Orge (available)
    Skills: Python, Linux, Ubuntu, Embedded systems, Agile, Azure, C, PHP, Data Science, SAP

Overview of the experience of Mehdi,
a DATA freelancer based in Paris (75)

  • Bouygues Telecom – DSI/EWD
    Jan 2018 - Jan 2021

    Project « eSIM »
    Mission: Embedded SIM (eSIM, or eUICC) technology lets mobile users download a carrier
    profile and activate a carrier's service without a physical SIM card.
    Position: Lead Data Engineer.
    • Excelled at guiding the work of technical teams: articulated project goals and scope and
    translated business needs into technical terms.
    • Applied a continuous improvement approach to review and improve existing processes, always
    aiming to shorten cycle time, reduce churn and lower unit costs, so that business and
    department objectives are met (productivity metrics and key customer service indicators).
    • Established and promoted database management principles, models, best practices and
    standards, and ensured their practical adoption. Collaborated with the development teams to
    establish data quality baselines.
    • Led the effort to build, implement and support the data infrastructure; ingested and transformed
    data (ETL/ELT processes) using programming/scripting languages such as PySpark and Scala.
    • Defined and maintained the testing strategy and various test plans, covering both automated
    and manual testing.
    • Built fault-tolerant, adaptive and highly accurate data computation pipelines. Tuned queries
    running over billions of rows in a distributed query engine.
    • Performed end-to-end architecture and implementation assessments of various AWS services
    such as Amazon EMR, Redshift and S3. Implemented machine learning algorithms in Python to
    predict the quantity a user might want to order for a specific service, so suggestions can be
    made automatically using Kinesis Firehose and an S3 data lake.
    • Used AWS EMR to transform and move large amounts of data into and out of other AWS data
    stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon
    DynamoDB.
    • Used the Spark SQL Scala and Python interfaces, which automatically convert RDDs of case
    classes into schema RDDs. Imported data from sources such as HDFS/HBase into Spark RDDs
    and performed computations using PySpark to generate the output response.
    • Created Lambda functions with Boto3 to deregister unused AMIs in all application regions,
    reducing EC2 resource costs (see the sketch below).
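
    For illustration only, a minimal Boto3 sketch of this kind of AMI clean-up could look as
    follows; the region list, the "unused" criterion (not referenced by any instance) and all
    names are assumptions, not the actual project code:

        # Sketch: deregister AMIs that no instance in the region references.
        # Regions and the "unused" rule are assumptions for illustration.
        import boto3

        REGIONS = ["eu-west-1", "eu-west-3"]  # assumed application regions

        for region in REGIONS:
            ec2 = boto3.client("ec2", region_name=region)
            # AMIs owned by this account in the region
            owned = ec2.describe_images(Owners=["self"])["Images"]
            # AMIs referenced by existing instances in the region
            in_use = {
                inst["ImageId"]
                for res in ec2.describe_instances()["Reservations"]
                for inst in res["Instances"]
            }
            for image in owned:
                if image["ImageId"] not in in_use:
                    ec2.deregister_image(ImageId=image["ImageId"])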

  • Société générale – DSI CFT
    Present

    Project « Finance District »

    Mission: Finance District supports the business areas of the CFT IT department in the design and
    implementation of data-related solutions.
    Position: Technical Lead Big Data Engineer.

    Lead tasks:
    ❖ Daily technical support for data engineers.
    ❖ Implementation of a monitoring module for the Finance District execution plan.
    ❖ Design of a data quality management module.
    ❖ Design of a production job monitoring module.
    ❖ Supervision of the technical and functional migration from Talend Big Data to Scala/Spark.
    ❖ Ensuring the migration of Spark 2.1 projects to 2.4.
    ❖ Implementation of a data anonymization solution.
    ❖ Implementation of the Scala/Spark CI/CD pipeline.
    ❖ Ensuring the migration from Hortonworks to CDP.
    Technical tasks:
    • Analyzing and implementing "hot fixes" in production.
    • Implementation of a Scala/Spark framework to facilitate and standardize Scala/Spark
    developments.
    • Developing a solution for sending files via WebHDFS.
    • Tuning the performance of Scala/Spark applications for batch interval, parallelism, and memory.
    • Optimization of existing algorithms in Hadoop using SparkSession, Spark-SQL, Data Frames,
    and Pair RDDs.
    • Manipulating large datasets using Partitions, Spark memory capabilities, Spark Broadcasts, and
    efficient Joins.
    • Developing audit logic to optimize the load in append mode.
    • Developing solutions for pre-processing large sets of structured and semi-structured data with
    different formats (text files, Avro, sequence files, JSON records).
    • Use of Parquet, ORC, Avro files according to the technical need.
    • Study of the choice of Spark partitions for HDFS writes and calculation of the coalesce factor
    to avoid shuffles (see the sketch after this list).
    • Implementation of performance improvements based on job monitoring via the Spark UI.
    • Adding "persist" and data serialization via Spark as needed.
    • Use of RDD, DataFrame, DataSet according to the technical need.
    • Development of UDFs on Spark 2.4.
    • Deployment and orchestration of the project via Control-M.
    • Use of ScalaTest for unit tests and code coverage at the SonarQube level.
    • Development of test cases and test scenarios with TDD logic using Cucumber.
    • Log recovery via Scala/Kafka & Spark Streaming from YARN for analysis and monitoring of
    the application.
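
    To illustrate the partitioning and "persist" points above: the project code was Scala/Spark,
    but the same ideas in a PySpark sketch (paths, partition count and storage level are
    assumptions) look like this:

        # Sketch: reduce output partitions without a shuffle, and persist a
        # DataFrame that several downstream actions reuse. Paths are made up.
        from pyspark import StorageLevel
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("hdfs-write-tuning").getOrCreate()

        df = spark.read.parquet("hdfs:///data/in/")    # hypothetical input
        df = df.persist(StorageLevel.MEMORY_AND_DISK)  # reused by two actions

        # coalesce() merges partitions without a full shuffle, unlike
        # repartition(); 64 is an assumed target for the HDFS write.
        df.coalesce(64).write.mode("append").parquet("hdfs:///data/out/")
        print(df.count())  # second action benefits from the persisted data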

  • Project « PRORATA VAT »
    Present

    Mission: Equip the accounting and tax business lines with a digital solution for calculating the
    "PRORATA TVA" tax base.
    Position: Technical Lead Big Data Engineer.
    Lead tasks:
    ❖ Facilitate or lead agile ceremonies.
    ❖ Implementation of the Data solution architecture using "Scala/Spark" and establishing a link
    with visualization tools.
    ❖ Optimization of the performance and scalability of "Scala/Spark" data systems for jobs.
    ❖ Ensuring the quality and reliability of data.
    ❖ Synchronize with the web reporting and Power BI teams.
    ❖ Collaborate with security teams to ensure data security.
    ❖ Definition of standards and best practices for data projects.
    Technical tasks:
    • Writing technical specifications.
    • Definition of big data architecture.
    • Scala/Spark development of calculations for the "prorata tva" bases.
    • Implementation of Scala/Spark jobs.
    • Development of data extraction jobs from APIs.
    • Analyze, design and build modern data solutions using Azure PaaS services to support data
    visualization. Understand the current production state of the application and determine the
    impact of the new implementation on existing business processes.
    • Extract, transform and load data from source systems to Azure data storage services using a
    combination of Azure Data Factory, T-SQL, Spark SQL and U-SQL (Azure Data Lake
    Analytics). Data ingestion into one or more Azure services (Azure Data Lake, Azure Storage,
    Azure SQL, Azure DW) and processing of the data in Azure Databricks.
    • Created pipelines in ADF using Linked Services, Datasets and Pipelines to extract, transform
    and load data between sources such as Azure SQL, Blob storage, Azure SQL Data Warehouse
    and the write-back tool, and in the reverse direction.
    • Developed Spark applications using PySpark and Spark SQL for data extraction, transformation
    and aggregation from multiple file formats, analyzing and transforming the data to uncover
    insights into customer usage patterns (see the sketch after this list).
    • Responsible for estimating the cluster size and for monitoring and troubleshooting the Spark
    Databricks cluster.
    • Experienced in performance tuning of Spark applications: setting the right batch interval, the
    correct level of parallelism, and tuning memory.
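
    As an illustration of the multi-format PySpark work above (file locations, formats, schemas
    and column names are assumptions, not the project's data model):

        # Sketch: read two formats into DataFrames, union them, and aggregate
        # per customer. The Avro reader ships with Databricks / spark-avro.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("usage-insights").getOrCreate()

        json_events = spark.read.json("/mnt/raw/events_json/")  # assumed path
        avro_events = spark.read.format("avro").load("/mnt/raw/events_avro/")

        # Assumes both sources share the same columns.
        usage = (
            json_events.unionByName(avro_events)
            .groupBy("customer_id")
            .agg(
                F.count("*").alias("event_count"),
                F.sum("duration_s").alias("total_duration_s"),
            )
        )
        usage.write.mode("overwrite").parquet("/mnt/curated/customer_usage/")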

  • Project « C3S TAX »
    Present

    Mission: Proposal of a technical data/big data and Azure cloud architecture to meet the need to
    industrialize the C3S tax calculation.
    Position: Data Architect.
    Architect tasks:
    ❖ Define an architectural roadmap and technological standards for the development of the Big Data
    solution and link it with the web components and PowerBI.
    ❖ Define the strategy for acquiring external data and exploiting it in the data lake.
    ❖ Define the data storage architecture in Azure Synapse for the use of PowerBI.
    ❖ Implement a Trino cluster to accelerate data reads from the web to the data lake (see the
    sketch after this list).
    ❖ Participate in the project's ARB to present the Technical architecture of the project to the head of
    the IT department, the RSI, and the project sponsor.
    ❖ Anticipate security issues and work jointly with security consultants to obtain “GoLive By
    Design”.
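
    As a purely illustrative sketch of reading the data lake through Trino from Python (the host,
    catalog, schema and query are assumptions; the client is the open-source "trino" package):

        # Sketch: query the data lake through a Trino cluster via its Python
        # DB-API client. Endpoint and table names are made up.
        from trino.dbapi import connect  # pip install trino

        conn = connect(
            host="trino.internal.example",  # assumed cluster endpoint
            port=8080,
            user="c3s-reporting",
            catalog="hive",  # assumed catalog over the data lake
            schema="c3s",
        )
        cur = conn.cursor()
        cur.execute("SELECT entity, sum(tax_base) FROM c3s_results GROUP BY entity")
        for entity, tax_base in cur.fetchall():
            print(entity, tax_base)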

  • Project « Asterix »
    Present

    Mission: The project, code-named « Asterix », aims at accelerating the roll-out of FTTH across
    medium-density regions in France. SDAIF will “co-invest” by acquiring long-term access
    rights (“IRU”) from Orange (in charge of the physical roll-out in these areas) and rent them
    to retail operators, with Bouygues Telecom as the anchor tenant.
    Position: Lead Data Engineer.
    • Answering and predicting business needs with data analytics.
    • Work as a key member of an agile development team using Scrum-based methodologies and
    tools.
    • Analyzed business requirements and translated them into technical specifications that
    developers can use to implement new features or enhancements.
    • Influence the direction of development to assist feature enrichment and platform growth.
    • Developed and implemented data pipelines using AWS services such as Kinesis, S3, EMR,
    Athena and Redshift to process petabyte-scale data in real time (see the sketch after this list).
    • Designed and developed scalable AWS solutions using Scala/Spark for storing and processing
    large amounts of data across multiple regions.
    • Provided support during all phases of development including design, implementation, testing,
    deployment and maintenance of applications/services.
    • Participated in cross-functional teams (e.g., infrastructure engineering, Web Team) when required
    to ensure effective communication between groups with overlapping functionality or shared
    resources.
    • Developing new features and extending the existing data platform using Python and Scala/Spark
    together with a range of deployment automation and monitoring tools.
    • Support and coaching of software developers and data engineers through advice, guidance and
    mentoring
    • Review the code of others for accuracy and functionality and to offer guidance for improvement if
    needed
    • Monitor and assist with the deployment of code through test environments towards production,
    and handle any issues that arise, using CloudWatch...
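
    To illustrate the ingestion edge of such a pipeline (the stream name, region and payload
    fields are assumptions, not the project's actual feed):

        # Sketch: publish events to a Kinesis stream with boto3; a consumer
        # (e.g. an EMR/Spark job) would read and process them downstream.
        import json

        import boto3

        kinesis = boto3.client("kinesis", region_name="eu-west-3")  # assumed region

        def publish_event(event: dict) -> None:
            kinesis.put_record(
                StreamName="ftth-rollout-events",        # hypothetical stream
                Data=json.dumps(event).encode("utf-8"),
                PartitionKey=str(event["site_id"]),      # hypothetical key field
            )

        publish_event({"site_id": 42, "status": "connected"})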
