Job Details

Big Data Platform Engineer

Seattle, WA 98103, US
11/22/2019



Required Skills

Apache Kafka
Company

Infinity Consulting Solutions, Inc.

Experience

7 to 9 Year(s)

Job Description

As a Software Engineer in Data Platforms, you will help design, build, and support the underlying data architecture for all the content and consumer data that serve our business and our 120MM+ fans across the globe on digital and broadcast with millisecond latency.

You will be exposed to all phases of software architecture and development and will have the opportunity to work on our real-time, end-to-end data processing pipeline, persistence, storage, and APIs (which handle millions of transactions every day and store hundreds of terabytes of data).

This is a chance to work with the newest cloud-based data technologies to solve scalability and high availability challenges within a motivated, fast-growing team.

This is a critical role that will have a significant impact on the direction of our product, technology and business.

Responsibilities:

Build software across our entire data platform, including data processing, storage, and large-scale web API serving, using cutting-edge technologies

Perform exploratory and quantitative analytics, data mining, and discovery, and present findings to the team

Think of new ways to make the data platform more scalable, resilient, and reliable, and then work across the team to put your ideas into action

Implement and refine robust data processing, REST services, RPC (in and out of HTTP), and caching technologies

Work closely with data architects, stream processing specialists, API developers, the DevOps team, and analysts to design systems that can scale elastically

Requirements:

7+ years of experience developing with a mix of languages (Java, Scala, Python, SQL, etc.) and frameworks to implement data ingestion, processing, and serving technologies

Experience with real-time, very large-scale online systems is preferred

Very knowledgeable in big data frameworks such as Hadoop and Apache Spark

Very knowledgeable in NoSQL systems such as Cassandra or DynamoDB

Very knowledgeable in streaming technologies such as Apache Kafka

Understanding of reactive programming and dependency injection frameworks such as Spring for developing REST services

Hands-on experience with newer technologies relevant to the data space, such as Spark, Kafka, and Druid (or other OLAP databases)

Experience developing in a cloud-native environment with many different technologies

Prior working experience building internet-scale platforms: handling petabyte-scale data and operationalizing clusters with hundreds of compute nodes in a cloud environment

Experience with open-source technologies such as Spring, Hadoop, Spark, Kafka, Druid, Pilosa, and YARN/Kubernetes

Experience working and communicating with data scientists to operationalize machine learning models

Proficient in agile development methodologies, shipping features every two weeks


Data Architect
Information Technology

No Preference
Contract Only
Other
1

Candidate Requirements
Bachelor's

Walk-in Information
11/12/2019

Recruiter Details
Doug Klares
1350 Broadway, Suite 2205, New York, NY 10018